
feat: update L1 CloudFormation resource definitions #31086

Merged
merged 1 commit on Aug 12, 2024

Conversation

aws-cdk-automation
Collaborator

Updates the L1 CloudFormation resource definitions with the latest changes from @aws-cdk/aws-service-spec

L1 CloudFormation resource definition changes:

├[~] service aws-acmpca
│ └ resources
│    └[~] resource AWS::ACMPCA::CertificateAuthority
│      └ types
│         └[~] type CrlConfiguration
│           └ properties
│              ├[+] CustomPath: string
│              ├[+] MaxPartitionSizeMB: integer
│              ├[+] PartitioningEnabled: boolean
│              └[+] RetainExpiredCertificates: boolean
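
As a sketch of how the new `CrlConfiguration` fields above could be used: the property names and types come from the diff, but the logical ID and all values below are placeholders, not taken from this PR.

```yaml
# Hypothetical template fragment: AWS::ACMPCA::CertificateAuthority with the
# four CrlConfiguration properties added in this spec update. Values are placeholders.
CertificateAuthority:
  Type: AWS::ACMPCA::CertificateAuthority
  Properties:
    Type: ROOT
    KeyAlgorithm: RSA_2048
    SigningAlgorithm: SHA256WITHRSA
    Subject:
      CommonName: example-ca
    RevocationConfiguration:
      CrlConfiguration:
        Enabled: true
        PartitioningEnabled: true        # [+] boolean
        MaxPartitionSizeMB: 50           # [+] integer
        CustomPath: example/crl/path     # [+] string
        RetainExpiredCertificates: false # [+] boolean
```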
├[~] service aws-auditmanager
│ └ resources
│    └[~] resource AWS::AuditManager::Assessment
│      └ types
│         ├[~] type AWSService
│         │ ├  - documentation: The `AWSService` property type specifies an AWS service such as Amazon S3 , AWS CloudTrail , and so on.
│         │ │  + documentation: The `AWSService` property type specifies an  such as Amazon S3 , AWS CloudTrail , and so on.
│         │ └ properties
│         │    └ ServiceName: (documentation changed)
│         └[~] type Scope
│           └ properties
│              └ AwsServices: (documentation changed)
├[~] service aws-chatbot
│ └ resources
│    └[~] resource AWS::Chatbot::SlackChannelConfiguration
│      └ properties
│         └ SlackChannelId: (documentation changed)
├[~] service aws-cloudtrail
│ └ resources
│    └[~] resource AWS::CloudTrail::Trail
│      └ types
│         └[~] type DataResource
│           ├  - documentation: You can configure the `DataResource` in an `EventSelector` to log data events for the following three resource types:
│           │  - `AWS::DynamoDB::Table`
│           │  - `AWS::Lambda::Function`
│           │  - `AWS::S3::Object`
│           │  To log data events for all other resource types including objects stored in [directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) , you must use [AdvancedEventSelectors](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) . You must also use `AdvancedEventSelectors` if you want to filter on the `eventName` field.
│           │  Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│           │  > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail. 
│           │  The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named `DOC-EXAMPLE-BUCKET1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│           │  - A user uploads an image file to `DOC-EXAMPLE-BUCKET1` .
│           │  - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│           │  - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::DOC-EXAMPLE-BUCKET1` .
│           │  - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│           │  The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│           │  - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│           │  - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│           │  - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│           │  + documentation: You can configure the `DataResource` in an `EventSelector` to log data events for the following three resource types:
│           │  - `AWS::DynamoDB::Table`
│           │  - `AWS::Lambda::Function`
│           │  - `AWS::S3::Object`
│           │  To log data events for all other resource types including objects stored in [directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) , you must use [AdvancedEventSelectors](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) . You must also use `AdvancedEventSelectors` if you want to filter on the `eventName` field.
│           │  Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│           │  > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail. 
│           │  The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named `amzn-s3-demo-bucket1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│           │  - A user uploads an image file to `amzn-s3-demo-bucket1` .
│           │  - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│           │  - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::amzn-s3-demo-bucket1` .
│           │  - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│           │  The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│           │  - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│           │  - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│           │  - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│           └ properties
│              └ Values: (documentation changed)
├[~] service aws-codecommit
│ └ resources
│    └[~] resource AWS::CodeCommit::Repository
│      └  - documentation: Creates a new, empty repository.
│         + documentation: Creates a new, empty repository.
│         > AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://docs.aws.amazon.com/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider)
├[~] service aws-codeconnections
│ └ resources
│    └[~] resource AWS::CodeConnections::Connection
│      └ attributes
│         └ ConnectionArn: (documentation changed)
├[~] service aws-codepipeline
│ └ resources
│    ├[~] resource AWS::CodePipeline::Pipeline
│    │ └ types
│    │    ├[+] type BeforeEntryConditions
│    │    │ ├  documentation: The conditions for making checks for entry to a stage.
│    │    │ │  name: BeforeEntryConditions
│    │    │ └ properties
│    │    │    └Conditions: Array<Condition>
│    │    ├[+] type Condition
│    │    │ ├  documentation: The condition for the stage. A condition is made up of the rules and the result for the condition.
│    │    │ │  name: Condition
│    │    │ └ properties
│    │    │    ├Result: string
│    │    │    └Rules: Array<RuleDeclaration>
│    │    ├[~] type FailureConditions
│    │    │ └ properties
│    │    │    └[+] Conditions: Array<Condition>
│    │    ├[+] type RuleDeclaration
│    │    │ ├  documentation: Represents information about the rule to be created for an associated condition. An example would be creating a new rule for an entry condition, such as a rule that checks for a test result before allowing the run to enter the deployment stage.
│    │    │ │  name: RuleDeclaration
│    │    │ └ properties
│    │    │    ├RuleTypeId: RuleTypeId
│    │    │    ├Configuration: json
│    │    │    ├InputArtifacts: Array<InputArtifact>
│    │    │    ├Region: string
│    │    │    ├RoleArn: string
│    │    │    └Name: string
│    │    ├[+] type RuleTypeId
│    │    │ ├  documentation: The ID for the rule type, which is made up of the combined values for category, owner, provider, and version.
│    │    │ │  name: RuleTypeId
│    │    │ └ properties
│    │    │    ├Owner: string
│    │    │    ├Category: string
│    │    │    ├Version: string
│    │    │    └Provider: string
│    │    ├[~] type StageDeclaration
│    │    │ └ properties
│    │    │    ├[+] BeforeEntry: BeforeEntryConditions
│    │    │    └[+] OnSuccess: SuccessConditions
│    │    └[+] type SuccessConditions
│    │      ├  documentation: The conditions for making checks that, if met, succeed a stage.
│    │      │  name: SuccessConditions
│    │      └ properties
│    │         └Conditions: Array<Condition>
│    └[~] resource AWS::CodePipeline::Webhook
│      ├ properties
│      │  └ Authentication: (documentation changed)
│      └ types
│         └[~] type WebhookAuthConfiguration
│           └ properties
│              └ SecretToken: (documentation changed)
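
The new stage-level condition types for `AWS::CodePipeline::Pipeline` nest as sketched below. Only the property names and their types are taken from the diff above; the stage name, rule name, and `RuleTypeId` values are placeholders.

```yaml
# Hypothetical fragment of a Stages entry showing the new [+] types:
# BeforeEntryConditions, SuccessConditions, Condition, RuleDeclaration, RuleTypeId.
Stages:
  - Name: Deploy
    BeforeEntry:                 # [+] BeforeEntryConditions
      Conditions:                # Array<Condition>
        - Result: FAIL           # Condition.Result: string (placeholder value)
          Rules:                 # Array<RuleDeclaration>
            - Name: MyEntryRule
              RuleTypeId:        # [+] RuleTypeId: Category/Owner/Provider/Version
                Category: Rule
                Owner: AWS
                Provider: LambdaInvoke
                Version: "1"
    OnSuccess:                   # [+] SuccessConditions
      Conditions:
        - Result: ROLLBACK
          Rules: []
    Actions: []
```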
├[~] service aws-cognito
│ └ resources
│    ├[~] resource AWS::Cognito::LogDeliveryConfiguration
│    │ ├  - documentation: The logging parameters of a user pool.
│    │ │  + documentation: The logging parameters of a user pool returned in response to `GetLogDeliveryConfiguration` .
│    │ ├ properties
│    │ │  ├ LogConfigurations: (documentation changed)
│    │ │  └ UserPoolId: (documentation changed)
│    │ └ types
│    │    ├[~] type CloudWatchLogsConfiguration
│    │    │ └  - documentation: The CloudWatch logging destination of a user pool detailed activity logging configuration.
│    │    │    + documentation: Configuration for the CloudWatch log group destination of user pool detailed activity logging, or of user activity log export with advanced security features.
│    │    ├[+] type FirehoseConfiguration
│    │    │ ├  name: FirehoseConfiguration
│    │    │ └ properties
│    │    │    └StreamArn: string
│    │    ├[~] type LogConfiguration
│    │    │ └ properties
│    │    │    ├ CloudWatchLogsConfiguration: (documentation changed)
│    │    │    ├ EventSource: (documentation changed)
│    │    │    ├[+] FirehoseConfiguration: FirehoseConfiguration
│    │    │    ├ LogLevel: (documentation changed)
│    │    │    └[+] S3Configuration: S3Configuration
│    │    └[+] type S3Configuration
│    │      ├  name: S3Configuration
│    │      └ properties
│    │         └BucketArn: string
│    └[~] resource AWS::Cognito::UserPool
│      └ types
│         └[~] type PasswordPolicy
│           └ properties
│              └[+] PasswordHistorySize: integer
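
Sketching the new Cognito additions together: the `FirehoseConfiguration`/`S3Configuration` log destinations on `AWS::Cognito::LogDeliveryConfiguration` and `PasswordHistorySize` on the user pool password policy. Property names come from the diff; ARNs, the event source, and the history size are placeholders.

```yaml
# Hypothetical fragment combining the new [+] Cognito properties.
LogDelivery:
  Type: AWS::Cognito::LogDeliveryConfiguration
  Properties:
    UserPoolId: !Ref UserPool
    LogConfigurations:
      - EventSource: userAuthEvents
        LogLevel: INFO
        FirehoseConfiguration:   # [+] type FirehoseConfiguration
          StreamArn: arn:aws:firehose:us-east-1:111122223333:deliverystream/example
        S3Configuration:         # [+] type S3Configuration
          BucketArn: arn:aws:s3:::example-log-bucket
UserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    Policies:
      PasswordPolicy:
        PasswordHistorySize: 3   # [+] integer
```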
├[~] service aws-datapipeline
│ └ resources
│    └[~] resource AWS::DataPipeline::Pipeline
│      └  - documentation: The AWS::DataPipeline::Pipeline resource specifies a data pipeline that you can use to automate the movement and transformation of data. In each pipeline, you define pipeline objects, such as activities, schedules, data nodes, and resources. For information about pipeline objects and components that you can use, see [Pipeline Object Reference](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-objects.html) in the *AWS Data Pipeline Developer Guide* .
│         The `AWS::DataPipeline::Pipeline` resource adds tasks, schedules, and preconditions to the specified pipeline. You can use `PutPipelineDefinition` to populate a new pipeline.
│         `PutPipelineDefinition` also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following validation errors exist in the pipeline.
│         - An object is missing a name or identifier field.
│         - A string or reference field is empty.
│         - The number of objects in the pipeline exceeds the allowed maximum number of objects.
│         - The pipeline is in a FINISHED state.
│         Pipeline object definitions are passed to the [PutPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_PutPipelineDefinition.html) action and returned by the [GetPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_GetPipelineDefinition.html) action.
│         + documentation: The AWS::DataPipeline::Pipeline resource specifies a data pipeline that you can use to automate the movement and transformation of data.
│         > AWS Data Pipeline is no longer available to new customers. Existing customers of AWS Data Pipeline can continue to use the service as normal. [Learn more](https://docs.aws.amazon.com/big-data/migrate-workloads-from-aws-data-pipeline/) 
│         In each pipeline, you define pipeline objects, such as activities, schedules, data nodes, and resources.
│         The `AWS::DataPipeline::Pipeline` resource adds tasks, schedules, and preconditions to the specified pipeline. You can use `PutPipelineDefinition` to populate a new pipeline.
│         `PutPipelineDefinition` also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following validation errors exist in the pipeline.
│         - An object is missing a name or identifier field.
│         - A string or reference field is empty.
│         - The number of objects in the pipeline exceeds the allowed maximum number of objects.
│         - The pipeline is in a FINISHED state.
│         Pipeline object definitions are passed to the [PutPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_PutPipelineDefinition.html) action and returned by the [GetPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_GetPipelineDefinition.html) action.
├[~] service aws-ec2
│ └ resources
│    ├[~] resource AWS::EC2::LaunchTemplate
│    │ └ types
│    │    └[~] type LaunchTemplateData
│    │      └ properties
│    │         └ ImageId: (documentation changed)
│    ├[~] resource AWS::EC2::NetworkInsightsAnalysis
│    │ └ types
│    │    └[~] type AnalysisRouteTableRoute
│    │      └ properties
│    │         └ destinationPrefixListId: (documentation changed)
│    ├[~] resource AWS::EC2::TransitGatewayAttachment
│    │ └ types
│    │    └[~] type Options
│    │      └ properties
│    │         └[-] SecurityGroupReferencingSupport: string
│    ├[~] resource AWS::EC2::TransitGatewayMulticastGroupMember
│    │ └ attributes
│    │    └ SourceType: (documentation changed)
│    ├[~] resource AWS::EC2::TransitGatewayMulticastGroupSource
│    │ └ attributes
│    │    └ MemberType: (documentation changed)
│    └[~] resource AWS::EC2::VPCEndpoint
│      └  - documentation: Specifies a VPC endpoint. A VPC endpoint provides a private connection between your VPC and an endpoint service. You can use an endpoint service provided by AWS , an AWS Marketplace Partner, or another AWS accounts in your organization. For more information, see the [AWS PrivateLink User Guide](https://docs.aws.amazon.com/vpc/latest/privatelink/) .
│         An endpoint of type `Interface` establishes connections between the subnets in your VPC and an AWS service , your own service, or a service hosted by another AWS account . With an interface VPC endpoint, you specify the subnets in which to create the endpoint and the security groups to associate with the endpoint network interfaces.
│         An endpoint of type `gateway` serves as a target for a route in your route table for traffic destined for Amazon S3 or DynamoDB . You can specify an endpoint policy for the endpoint, which controls access to the service from your VPC. You can also specify the VPC route tables that use the endpoint. For more information about connectivity to Amazon S3 , see [Why can't I connect to an S3 bucket using a gateway VPC endpoint?](https://docs.aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint)
│         An endpoint of type `GatewayLoadBalancer` provides private connectivity between your VPC and virtual appliances from a service provider.
│         + documentation: Specifies a VPC endpoint. A VPC endpoint provides a private connection between your VPC and an endpoint service. You can use an endpoint service provided by AWS , an AWS Marketplace Partner, or another AWS accounts in your organization. For more information, see the [AWS PrivateLink User Guide](https://docs.aws.amazon.com/vpc/latest/privatelink/) .
│         An endpoint of type `Interface` establishes connections between the subnets in your VPC and an  , your own service, or a service hosted by another AWS account . With an interface VPC endpoint, you specify the subnets in which to create the endpoint and the security groups to associate with the endpoint network interfaces.
│         An endpoint of type `gateway` serves as a target for a route in your route table for traffic destined for Amazon S3 or DynamoDB . You can specify an endpoint policy for the endpoint, which controls access to the service from your VPC. You can also specify the VPC route tables that use the endpoint. For more information about connectivity to Amazon S3 , see [Why can't I connect to an S3 bucket using a gateway VPC endpoint?](https://docs.aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint)
│         An endpoint of type `GatewayLoadBalancer` provides private connectivity between your VPC and virtual appliances from a service provider.
├[~] service aws-ecs
│ └ resources
│    ├[~] resource AWS::ECS::Service
│    │ └ types
│    │    └[~] type AwsVpcConfiguration
│    │      └  - documentation: An object representing the networking details for a task or service. For example `awsvpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}`
│    │         + documentation: An object representing the networking details for a task or service. For example `awsVpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}` .
│    └[~] resource AWS::ECS::TaskSet
│      └ types
│         └[~] type AwsVpcConfiguration
│           └  - documentation: An object representing the networking details for a task or service. For example `awsvpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}`
│              + documentation: An object representing the networking details for a task or service. For example `awsVpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}` .
├[~] service aws-elasticloadbalancingv2
│ └ resources
│    └[~] resource AWS::ElasticLoadBalancingV2::TargetGroup
│      └ types
│         └[~] type TargetGroupAttribute
│           └ properties
│              └ Key: (documentation changed)
├[~] service aws-forecast
│ └ resources
│    ├[~] resource AWS::Forecast::Dataset
│    │ └  - documentation: Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:
│    │    - *`DataFrequency`* - How frequently your historical time-series data is collected.
│    │    - *`Domain`* and *`DatasetType`* - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
│    │    - *`Schema`* - A schema specifies the fields in the dataset, including the field name and data type.
│    │    After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see [Importing datasets](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│    │    To get a list of all your datasets, use the [ListDatasets](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasets.html) operation.
│    │    For example Forecast datasets, see the [Amazon Forecast Sample GitHub repository](https://github.com/aws-samples/amazon-forecast-samples) .
│    │    > The `Status` of a dataset must be `ACTIVE` before you can import training data. Use the [DescribeDataset](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDataset.html) operation to get the status.
│    │    + documentation: Creates an Amazon Forecast dataset.
│    │    > Amazon Forecast is no longer available to new customers. Existing customers of Amazon Forecast can continue to use the service as normal. [Learn more](https://docs.aws.amazon.com/machine-learning/transition-your-amazon-forecast-usage-to-amazon-sagemaker-canvas/)
│    │    The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:
│    │    - *`DataFrequency`* - How frequently your historical time-series data is collected.
│    │    - *`Domain`* and *`DatasetType`* - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
│    │    - *`Schema`* - A schema specifies the fields in the dataset, including the field name and data type.
│    │    After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see [Importing datasets](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│    │    To get a list of all your datasets, use the [ListDatasets](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasets.html) operation.
│    │    For example Forecast datasets, see the [Amazon Forecast Sample GitHub repository](https://github.com/aws-samples/amazon-forecast-samples) .
│    │    > The `Status` of a dataset must be `ACTIVE` before you can import training data. Use the [DescribeDataset](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDataset.html) operation to get the status.
│    └[~] resource AWS::Forecast::DatasetGroup
│      └  - documentation: Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the [UpdateDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_UpdateDatasetGroup.html) operation.
│         After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see [Dataset groups](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│         To get a list of all your datasets groups, use the [ListDatasetGroups](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasetGroups.html) operation.
│         > The `Status` of a dataset group must be `ACTIVE` before you can use the dataset group to create a predictor. To get the status, use the [DescribeDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDatasetGroup.html) operation.
│         + documentation: Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the [UpdateDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_UpdateDatasetGroup.html) operation.
│         > Amazon Forecast is no longer available to new customers. Existing customers of Amazon Forecast can continue to use the service as normal. [Learn more](https://docs.aws.amazon.com/machine-learning/transition-your-amazon-forecast-usage-to-amazon-sagemaker-canvas/)
│         After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see [Dataset groups](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│         To get a list of all your datasets groups, use the [ListDatasetGroups](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasetGroups.html) operation.
│         > The `Status` of a dataset group must be `ACTIVE` before you can use the dataset group to create a predictor. To get the status, use the [DescribeDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDatasetGroup.html) operation.
├[~] service aws-kinesisfirehose
│ └ resources
│    └[~] resource AWS::KinesisFirehose::DeliveryStream
│      └ types
│         └[~] type MSKSourceConfiguration
│           └ properties
│              └[+] ReadFromTimestamp: string
├[~] service aws-lambda
│ └ resources
│    ├[~] resource AWS::Lambda::Function
│    │ └ types
│    │    └[~] type Code
│    │      └ properties
│    │         └[+] SourceKMSKeyArn: string
│    └[~] resource AWS::Lambda::Permission
│      └ properties
│         ├ Principal: (documentation changed)
│         ├ SourceAccount: (documentation changed)
│         └ SourceArn: (documentation changed)
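
The new `SourceKMSKeyArn` on the `AWS::Lambda::Function` `Code` type would sit alongside the existing S3 location properties, roughly as below. The property name is from the diff; the bucket, key, role, and KMS key ARN are placeholders.

```yaml
# Hypothetical fragment: Lambda function code with the new [+] SourceKMSKeyArn.
Function:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.12
    Handler: index.handler
    Role: arn:aws:iam::111122223333:role/example-role
    Code:
      S3Bucket: example-bucket
      S3Key: function.zip
      SourceKMSKeyArn: arn:aws:kms:us-east-1:111122223333:key/example-key-id  # [+] string
```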
├[~] service aws-medialive
│ └ resources
│    └[~] resource AWS::MediaLive::Multiplexprogram
│      └ attributes
│         └ ChannelId: (documentation changed)
├[~] service aws-networkfirewall
│ └ resources
│    └[~] resource AWS::NetworkFirewall::LoggingConfiguration
│      └ types
│         └[~] type LogDestinationConfig
│           └ properties
│              └ LogType: (documentation changed)
├[~] service aws-networkmanager
│ └ resources
│    ├[~] resource AWS::NetworkManager::ConnectAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    ├[~] resource AWS::NetworkManager::CoreNetwork
│    │ ├ attributes
│    │ │  └[+] NetworkFunctionGroups: Array<CoreNetworkNetworkFunctionGroup>
│    │ └ types
│    │    ├[+] type CoreNetworkNetworkFunctionGroup
│    │    │ ├  documentation: Describes a network function group.
│    │    │ │  name: CoreNetworkNetworkFunctionGroup
│    │    │ └ properties
│    │    │    ├Name: string
│    │    │    ├EdgeLocations: Array<string>
│    │    │    └Segments: Segments
│    │    └[+] type Segments
│    │      ├  name: Segments
│    │      └ properties
│    │         ├SendTo: Array<string>
│    │         └SendVia: Array<string>
│    ├[~] resource AWS::NetworkManager::SiteToSiteVpnAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    ├[~] resource AWS::NetworkManager::TransitGatewayRouteTableAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    └[~] resource AWS::NetworkManager::VpcAttachment
│      ├ properties
│      │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│      ├ attributes
│      │  └[+] NetworkFunctionGroupName: string
│      └ types
│         └[+] type ProposedNetworkFunctionGroupChange
│           ├  documentation: Describes proposed changes to a network function group.
│           │  name: ProposedNetworkFunctionGroupChange
│           └ properties
│              ├Tags: Array<tag>
│              ├AttachmentPolicyRuleNumber: integer
│              └NetworkFunctionGroupName: string
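
The `ProposedNetworkFunctionGroupChange` type added to the NetworkManager attachment resources above shares one shape; a sketch against `AWS::NetworkManager::VpcAttachment` might look like this. Property names are from the diff; the IDs, ARNs, group name, and rule number are placeholders.

```yaml
# Hypothetical fragment: VpcAttachment with the new [+] type
# ProposedNetworkFunctionGroupChange (same shape on the other attachments).
VpcAttachment:
  Type: AWS::NetworkManager::VpcAttachment
  Properties:
    CoreNetworkId: core-network-0example
    VpcArn: arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0example
    SubnetArns:
      - arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0example
    ProposedNetworkFunctionGroupChange:   # [+] type
      NetworkFunctionGroupName: inspection
      AttachmentPolicyRuleNumber: 100
      Tags:
        - Key: env
          Value: dev
```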
├[~] service aws-osis
│ └ resources
│    └[~] resource AWS::OSIS::Pipeline
│      └ types
│         ├[~] type VpcAttachmentOptions
│         │ ├  - documentation: Options for attaching a VPC to the pipeline.
│         │ │  + documentation: Options for attaching a VPC to pipeline.
│         │ └ properties
│         │    └ AttachToVpc: (documentation changed)
│         └[~] type VpcOptions
│           └ properties
│              └ VpcAttachmentOptions: (documentation changed)
├[~] service aws-pipes
│ └ resources
│    └[~] resource AWS::Pipes::Pipe
│      └ types
│         └[~] type S3LogDestination
│           └ properties
│              └ OutputFormat: (documentation changed)
├[~] service aws-rds
│ └ resources
│    └[~] resource AWS::RDS::DBInstance
│      └ properties
│         ├ RestoreTime: (documentation changed)
│         └ UseLatestRestorableTime: (documentation changed)
├[~] service aws-redshift
│ └ resources
│    └[~] resource AWS::Redshift::Cluster
│      └ types
│         └[~] type LoggingProperties
│           └ properties
│              ├[+] LogDestinationType: string
│              └[+] LogExports: Array<string>
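
The new Redshift logging fields could be expressed as below; `LogDestinationType` and `LogExports` come from the diff, while the cluster settings and the specific destination/export values are placeholders.

```yaml
# Hypothetical fragment: Redshift cluster with the new [+] LoggingProperties fields.
Cluster:
  Type: AWS::Redshift::Cluster
  Properties:
    ClusterType: single-node
    NodeType: ra3.xlplus
    DBName: dev
    MasterUsername: exampleadmin
    MasterUserPassword: "{{resolve:secretsmanager:example-secret}}"
    LoggingProperties:
      LogDestinationType: cloudwatch   # [+] string
      LogExports:                      # [+] Array<string>
        - connectionlog
        - userlog
```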
├[~] service aws-rolesanywhere
│ └ resources
│    └[~] resource AWS::RolesAnywhere::Profile
│      └ properties
│         └[+] AcceptRoleSessionName: boolean
├[~] service aws-route53resolver
│ └ resources
│    └[~] resource AWS::Route53Resolver::ResolverRule
│      └ properties
│         ├[+] DelegationRecord: string
│         └ DomainName: - string (required, immutable?)
│                       + string (immutable?)
├[~] service aws-s3
│ └ resources
│    ├[~] resource AWS::S3::AccessPoint
│    │ └ types
│    │    └[~] type PublicAccessBlockConfiguration
│    │      └ properties
│    │         └ RestrictPublicBuckets: (documentation changed)
│    ├[~] resource AWS::S3::Bucket
│    │ └ types
│    │    └[~] type PublicAccessBlockConfiguration
│    │      └ properties
│    │         └ RestrictPublicBuckets: (documentation changed)
│    └[~] resource AWS::S3::MultiRegionAccessPoint
│      └ types
│         └[~] type PublicAccessBlockConfiguration
│           └ properties
│              └ RestrictPublicBuckets: (documentation changed)
├[~] service aws-s3objectlambda
│ └ resources
│    └[~] resource AWS::S3ObjectLambda::AccessPoint
│      └ types
│         └[~] type PublicAccessBlockConfiguration
│           └ properties
│              └ RestrictPublicBuckets: (documentation changed)
├[~] service aws-sagemaker
│ └ resources
│    └[~] resource AWS::SageMaker::ModelPackage
│      ├ properties
│      │  └ ModelCard: (documentation changed)
│      └ types
│         ├[~] type ModelAccessConfig
│         │ ├  - documentation: Specifies the access configuration file for the ML model.
│         │ │  + documentation: The access configuration file to control access to the ML model. You can explicitly accept the model end-user license agreement (EULA) within the `ModelAccessConfig` .
│         │ │  - If you are a Jumpstart user, see the [End-user license agreements](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-choose.html#jumpstart-foundation-models-choose-eula) section for more details on accepting the EULA.
│         │ │  - If you are an AutoML user, see the *Optional Parameters* section of *Create an AutoML job to fine-tune text generation models using the API* for details on [How to set the EULA acceptance when fine-tuning a model using the AutoML API](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-create-experiment-finetune-llms.html#autopilot-llms-finetuning-api-optional-params) .
│         │ └ properties
│         │    └ AcceptEula: (documentation changed)
│         ├[~] type ModelCard
│         │ ├  - documentation: The model card associated with the model package.
│         │ │  + documentation: An Amazon SageMaker Model Card.
│         │ └ properties
│         │    └ ModelCardStatus: (documentation changed)
│         ├[~] type ModelDataSource
│         │ └  - documentation: Specifies the location of ML model data to deploy during endpoint creation.
│         │    + documentation: Specifies the location of ML model data to deploy. If specified, you must specify one and only one of the available data sources.
│         └[~] type S3ModelDataSource
│           └ properties
│              ├ CompressionType: (documentation changed)
│              ├ ModelAccessConfig: (documentation changed)
│              └ S3DataType: (documentation changed)
├[~] service aws-securityhub
│ └ resources
│    ├[~] resource AWS::SecurityHub::AutomationRule
│    │ └ types
│    │    └[~] type AutomationRulesFindingFilters
│    │      └ properties
│    │         └ ResourceId: (documentation changed)
│    ├[~] resource AWS::SecurityHub::ConfigurationPolicy
│    │ └ types
│    │    └[~] type Policy
│    │      └ properties
│    │         └ SecurityHub: (documentation changed)
│    ├[~] resource AWS::SecurityHub::Insight
│    │ └ types
│    │    └[~] type AwsSecurityFindingFilters
│    │      └ properties
│    │         └ ComplianceSecurityControlId: (documentation changed)
│    └[~] resource AWS::SecurityHub::SecurityControl
│      └ properties
│         └ SecurityControlId: (documentation changed)
└[~] service aws-ssm
  └ resources
     └[~] resource AWS::SSM::PatchBaseline
       └ types
          └[~] type Rule
            └ properties
               ├ ApproveAfterDays: (documentation changed)
               └ ApproveUntilDate: (documentation changed)
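
For illustration, the new `LogDestinationType` and `LogExports` fields on the `AWS::Redshift::Cluster` `LoggingProperties` type could be exercised in a raw CloudFormation template along these lines (a sketch only; the cluster name, credentials, and log selections are illustrative assumptions, not taken from this PR):

```yaml
# Hypothetical template fragment exercising the LoggingProperties fields
# added in this spec update (resource names and values are illustrative).
Resources:
  AnalyticsCluster:
    Type: AWS::Redshift::Cluster
    Properties:
      ClusterType: single-node
      NodeType: ra3.xlplus
      DBName: analytics
      MasterUsername: admin
      MasterUserPassword: '{{resolve:secretsmanager:redshift-admin-secret}}'
      LoggingProperties:
        # New in this update: route audit logs to CloudWatch instead of an S3 bucket.
        LogDestinationType: cloudwatch
        LogExports:
          - connectionlog
          - userlog
          - useractivitylog
```

With `LogDestinationType: cloudwatch`, the S3-oriented `BucketName`/`S3KeyPrefix` fields are no longer the only logging option on the L1 resource.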

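Similarly, the new `AcceptRoleSessionName` property on `AWS::RolesAnywhere::Profile` is a plain boolean at the top level of the resource; a minimal sketch (the profile name and role ARN are placeholder assumptions):

```yaml
# Hypothetical template fragment using the AcceptRoleSessionName property
# added to AWS::RolesAnywhere::Profile in this spec update.
Resources:
  WorkloadProfile:
    Type: AWS::RolesAnywhere::Profile
    Properties:
      Name: workload-profile
      RoleArns:
        - arn:aws:iam::111122223333:role/WorkloadRole
      # New in this update: allow callers to supply their own role session name.
      AcceptRoleSessionName: true
```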
@aws-cdk-automation aws-cdk-automation added auto-approve contribution/core This is a PR that came from AWS. dependencies This issue is a problem in a dependency or a pull request that updates a dependency file. pr-linter/exempt-readme The PR linter will not require README changes pr-linter/exempt-test The PR linter will not require test changes pr-linter/exempt-integ-test The PR linter will not require integ test changes labels Aug 12, 2024
@aws-cdk-automation aws-cdk-automation requested a review from a team August 12, 2024 13:44
@github-actions github-actions bot added the p2 label Aug 12, 2024
@aws-cdk-automation (Collaborator, Author) commented:

AWS CodeBuild CI Report

  • CodeBuild project: AutoBuildv2Project1C6BFA3F-wQm2hXv2jqQv
  • Commit ID: 8d74912
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

mergify bot (Contributor) commented Aug 12, 2024:

Thank you for contributing! Your pull request will be updated from main and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).

@mergify mergify bot merged commit 62a641c into main Aug 12, 2024
37 of 38 checks passed
@mergify mergify bot deleted the automation/spec-update branch August 12, 2024 14:16

Comments on closed issues and PRs are hard for our team to see.
If you need help, please open a new issue that references this one.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 12, 2024