diff --git a/apis/transfer-2018-11-05.normal.json b/apis/transfer-2018-11-05.normal.json index 20d5b46e59..408e9df4ca 100644 --- a/apis/transfer-2018-11-05.normal.json +++ b/apis/transfer-2018-11-05.normal.json @@ -1788,7 +1788,7 @@ }, "EncryptionAlgorithm": { "shape": "EncryptionAlg", - "documentation": "
The algorithm that is used to encrypt the file.
" + "documentation": "The algorithm that is used to encrypt the file.
You can only specify NONE
if the URL for your connector uses HTTPS. This ensures that no traffic is sent in clear text.
The signing algorithm for the MDN response.
If set to DEFAULT (or not set at all), the value for SigningAlogorithm
is used.
The signing algorithm for the MDN response.
If set to DEFAULT (or not set at all), the value for SigningAlgorithm
is used.
Specifies the location for the file being copied. Only applicable for Copy type workflow steps. Use ${Transfer:username}
in this field to parametrize the destination prefix by username.
Specifies the location for the file being copied. Use ${Transfer:username}
or ${Transfer:UploadDate}
in this field to parametrize the destination prefix by username or uploaded date.
Set the value of DestinationFileLocation
to ${Transfer:username}
to copy uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.
Set the value of DestinationFileLocation
to ${Transfer:UploadDate}
to copy uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.
The system resolves UploadDate
to a date format of YYYY-MM-DD, based on the date the file is uploaded.
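The variable substitution described above can be sketched as follows; `resolve_destination_prefix` is a hypothetical helper (not part of the Transfer Family API) that resolves the two variables the way this documentation describes, with `${Transfer:UploadDate}` becoming YYYY-MM-DD.

```python
from datetime import datetime, timezone

def resolve_destination_prefix(template: str, username: str, upload_time: datetime) -> str:
    """Illustrative only: substitute ${Transfer:username} and
    ${Transfer:UploadDate}; UploadDate resolves to YYYY-MM-DD."""
    return (template
            .replace("${Transfer:username}", username)
            .replace("${Transfer:UploadDate}", upload_time.strftime("%Y-%m-%d")))

prefix = resolve_destination_prefix(
    "DOC-EXAMPLE-BUCKET/${Transfer:UploadDate}/${Transfer:username}/",
    "alice",
    datetime(2023, 4, 5, tzinfo=timezone.utc),
)
# prefix == "DOC-EXAMPLE-BUCKET/2023-04-05/alice/"
```

The username and date here are placeholders; in the service, both values come from the session that uploaded the file.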
A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE
.
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE
.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file}
to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file}
to use the originally-uploaded file location as input for this step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
To use the previous file as the input, enter ${previous.file}
. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
To use the originally uploaded file location as input for this step, enter ${original.file}
.
Each step type has its own StepDetails
structure.
The landing directory (folder) for files transferred by using the AS2 protocol.
A BaseDirectory
example is DOC-EXAMPLE-BUCKET/home/mydirectory.
The landing directory (folder) for files transferred by using the AS2 protocol.
A BaseDirectory
example is /DOC-EXAMPLE-BUCKET/home/mydirectory
.
The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key
.
Use a minimum value of 2048 for the -b
option. You can create a stronger key by using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key
.
Valid values for the -b
option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N \"\" -f my-new-server-key
.
For all of these commands, you can replace my-new-server-key with a string of your choice.
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Update host keys for your SFTP-enabled server in the Transfer Family User Guide.
" + "documentation": "The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key
.
Use a minimum value of 2048 for the -b
option. You can create a stronger key by using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key
.
Valid values for the -b
option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N \"\" -f my-new-server-key
.
For all of these commands, you can replace my-new-server-key with a string of your choice.
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Manage host keys for your SFTP-enabled server in the Transfer Family User Guide.
" }, "IdentityProviderDetails": { "shape": "IdentityProviderDetails", @@ -2165,7 +2165,7 @@ }, "Protocols": { "shape": "Protocols", - "documentation": "Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be AWS_DIRECTORY_SERVICE
or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to SERVICE_MANAGED
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be either AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to any of the supported identity types: SERVICE_MANAGED
, AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In additon to a workflow to execute when a file is uploaded completely, WorkflowDeatails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In addition to a workflow to execute when a file is uploaded completely, WorkflowDetails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
The public portion of the Secure Shell (SSH) key used to authenticate the user to the server.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
" + "documentation": "The public portion of the Secure Shell (SSH) key used to authenticate the user to the server.
The three standard SSH public key format elements are <key type>
, <body base64>
, and an optional <comment>
, with spaces between each element.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
For RSA keys, the key type is ssh-rsa
.
For ED25519 keys, the key type is ssh-ed25519
.
For ECDSA keys, the key type is either ecdsa-sha2-nistp256
, ecdsa-sha2-nistp384
, or ecdsa-sha2-nistp521
, depending on the size of the key you generated.
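The three-element format above can be illustrated with a small parser. This is a sketch for illustration only, not Transfer Family code; the accepted key types are the ones listed in this documentation.

```python
import base64

# Key types listed in the documentation above.
ACCEPTED_KEY_TYPES = {
    "ssh-rsa", "ssh-ed25519",
    "ecdsa-sha2-nistp256", "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521",
}

def parse_ssh_public_key(key: str):
    """Split an OpenSSH public key into <key type>, <body base64>,
    and an optional <comment>, with spaces between each element."""
    parts = key.strip().split(" ", 2)
    if len(parts) < 2:
        raise ValueError("expected '<key type> <body base64> [comment]'")
    key_type, body = parts[0], parts[1]
    comment = parts[2] if len(parts) == 3 else None
    if key_type not in ACCEPTED_KEY_TYPES:
        raise ValueError(f"unsupported key type: {key_type}")
    base64.b64decode(body, validate=True)  # the body must be valid base64
    return key_type, body, comment
```

For example, `parse_ssh_public_key("ssh-ed25519 AAAA user@example")` returns `("ssh-ed25519", "AAAA", "user@example")`; the body here is a stand-in, not a real key.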
Specifies the details for the steps that are in the specified workflow.
The TYPE
specifies which of the following actions is being taken for this step.
COPY: Copy the file to another location.
CUSTOM: Perform a custom step with an Lambda function target.
DELETE: Delete the file.
TAG: Add a tag to the file.
Currently, copying and tagging are supported only on S3.
For file location, you specify either the S3 bucket and key, or the EFS file system ID and path.
" + "documentation": "Specifies the details for the steps that are in the specified workflow.
The TYPE
specifies which of the following actions is being taken for this step.
COPY
- Copy the file to another location.
CUSTOM
- Perform a custom step with a Lambda function target.
DECRYPT
- Decrypt a file that was encrypted before it was uploaded.
DELETE
- Delete the file.
TAG
- Add a tag to the file.
Currently, copying and tagging are supported only on Amazon S3.
For file location, you specify either the Amazon S3 bucket and key, or the Amazon EFS file system ID and path.
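As a sketch of how these step types appear in a workflow definition, here is a COPY step expressed as a Python dict mirroring the request JSON. The bucket, key, and step name are placeholders; the field names follow the step-details structures described in this file.

```python
# Placeholder values throughout; the shape follows the CopyStepDetails
# structure described in this file.
copy_step = {
    "Type": "COPY",
    "CopyStepDetails": {
        "Name": "copy-to-archive",
        "DestinationFileLocation": {
            "S3FileLocation": {
                "Bucket": "DOC-EXAMPLE-BUCKET",
                # Trailing "/" means the key is a folder and the file keeps its name.
                "Key": "archive/${Transfer:UploadDate}/",
            }
        },
        "OverwriteExisting": "FALSE",              # the documented default
        "SourceFileLocation": "${previous.file}",  # the documented default input
    },
}
```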
" }, "OnExceptionSteps": { "shape": "WorkflowSteps", @@ -2317,7 +2317,7 @@ }, "SourceFileLocation": { "shape": "SourceFileLocation", - "documentation": "Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file}
to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file}
to use the originally-uploaded file location as input for this step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
To use the previous file as the input, enter ${previous.file}
. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
To use the originally uploaded file location as input for this step, enter ${original.file}
.
Each step type has its own StepDetails
structure.
The name of the step, used as an identifier.
" }, "Type": { - "shape": "EncryptionType" + "shape": "EncryptionType", + "documentation": "The type of encryption used. Currently, this value must be PGP
.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
To use the previous file as the input, enter ${previous.file}
. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
To use the originally uploaded file location as input for this step, enter ${original.file}
.
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE
.
Each step type has its own StepDetails
structure.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file}
to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file}
to use the originally-uploaded file location as input for this step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
To use the previous file as the input, enter ${previous.file}
. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
To use the originally uploaded file location as input for this step, enter ${original.file}
.
The name of the step, used to identify the delete step.
" @@ -3214,7 +3219,7 @@ }, "Protocols": { "shape": "Protocols", - "documentation": "Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be AWS_DIRECTORY_SERVICE
or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to SERVICE_MANAGED
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be either AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to any of the supported identity types: SERVICE_MANAGED
, AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In additon to a workflow to execute when a file is uploaded completely, WorkflowDeatails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In addition to a workflow to execute when a file is uploaded completely, WorkflowDetails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
Describes the properties of a file transfer protocol-enabled server that was specified.
" @@ -3356,7 +3361,7 @@ "documentation": "The pathname for the folder being used by a workflow.
" } }, - "documentation": "Reserved for future use.
" + "documentation": "
Specifies the details for the file location for the file that's being used in the workflow. Only applicable if you are using Amazon Elastic File System (Amazon EFS) for storage.
" }, "EfsFileSystemId": { "type": "string", @@ -3486,7 +3491,7 @@ "members": { "StepType": { "shape": "WorkflowStepType", - "documentation": "
One of the available step types.
COPY: Copy the file to another location.
CUSTOM: Perform a custom step with an Lambda function target.
DELETE: Delete the file.
TAG: Add a tag to the file.
One of the available step types.
COPY
- Copy the file to another location.
CUSTOM
- Perform a custom step with a Lambda function target.
DECRYPT
- Decrypt a file that was encrypted before it was uploaded.
DELETE
- Delete the file.
TAG
- Add a tag to the file.
The file that contains the certificate to import.
" + "documentation": "For the CLI, provide a file path for a certificate in URI format. For example, --certificate file://encryption-cert.pem
. Alternatively, you can provide the raw content.
For the SDK, specify the raw content of a certificate file. For example, --certificate \"`cat encryption-cert.pem`\"
.
The file that contains the private key for the certificate that's being imported.
" + "documentation": "For the CLI, provide a file path for a private key in URI format.For example, --private-key file://encryption-key.pem
. Alternatively, you can provide the raw content of the private key file.
For the SDK, specify the raw content of a private key file. For example, --private-key \"`cat encryption-key.pem`\"
The public key portion of an SSH key pair.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
" + "documentation": "The private key portion of an SSH key pair.
Transfer Family accepts RSA, ECDSA, and ED25519 keys.
" }, "Description": { "shape": "HostKeyDescription", @@ -3788,14 +3793,14 @@ "members": { "S3FileLocation": { "shape": "S3InputFileLocation", - "documentation": "Specifies the details for the S3 file being copied.
" + "documentation": "Specifies the details for the Amazon S3 file that's being copied or decrypted.
" }, "EfsFileLocation": { "shape": "EfsFileLocation", - "documentation": "Reserved for future use.
" + "documentation": "Specifies the details for the Amazon Elastic File System (Amazon EFS) file that's being decrypted.
" } }, - "documentation": "Specifies the location for the file being copied. Only applicable for the Copy type of workflow steps.
" + "documentation": "Specifies the location for the file that's being processed.
" }, "ListAccessesRequest": { "type": "structure", @@ -4805,7 +4810,7 @@ "documentation": "The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.
" } }, - "documentation": "Specifies the customer input S3 file location. If it is used inside copyStepDetails.DestinationFileLocation
, it should be the S3 copy destination.
You need to provide the bucket and key. The key can represent either a path or a file. This is determined by whether or not you end the key value with the forward slash (/) character. If the final character is \"/\", then your file is copied to the folder, and its name does not change. If, rather, the final character is alphanumeric, your uploaded file is renamed to the path value. In this case, if a file with that name already exists, it is overwritten.
For example, if your path is shared-files/bob/
, your uploaded files are copied to the shared-files/bob/
, folder. If your path is shared-files/today
, each uploaded file is copied to the shared-files
folder and named today
: each upload overwrites the previous version of the bob file.
Specifies the customer input Amazon S3 file location. If it is used inside copyStepDetails.DestinationFileLocation
, it should be the S3 copy destination.
You need to provide the bucket and key. The key can represent either a path or a file. This is determined by whether or not you end the key value with the forward slash (/) character. If the final character is \"/\", then your file is copied to the folder, and its name does not change. If, rather, the final character is alphanumeric, your uploaded file is renamed to the path value. In this case, if a file with that name already exists, it is overwritten.
For example, if your path is shared-files/bob/
, your uploaded files are copied to the shared-files/bob/
folder. If your path is shared-files/today
, each uploaded file is copied to the shared-files
folder and named today
: each upload overwrites the previous version of the today file.
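The trailing-slash rule described above amounts to the following; `resolve_copy_destination` is a hypothetical helper illustrating the documented behavior, not the service implementation.

```python
def resolve_copy_destination(key: str, uploaded_filename: str) -> str:
    """If the key ends with "/", treat it as a folder and keep the uploaded
    file's name; otherwise the file is renamed to the key value (and an
    existing file of that name is overwritten)."""
    if key.endswith("/"):
        return key + uploaded_filename
    return key

# A key ending in "/" is a folder; the file keeps its name.
print(resolve_copy_destination("shared-files/bob/", "report.csv"))   # shared-files/bob/report.csv
# A key with no trailing "/" becomes the file's new name.
print(resolve_copy_destination("shared-files/today", "report.csv"))  # shared-files/today
```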
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
Enter ${previous.file}
to use the previous file as the input. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
Enter ${original.file}
to use the originally-uploaded file location as input for this step.
Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.
To use the previous file as the input, enter ${previous.file}
. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.
To use the originally uploaded file location as input for this step, enter ${original.file}
.
Each step type has its own StepDetails
structure.
The key/value pairs used to tag a file during the execution of a workflow step.
" @@ -5542,7 +5547,7 @@ }, "HostKey": { "shape": "HostKey", - "documentation": "The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key
.
Use a minimum value of 2048 for the -b
option. You can create a stronger key by using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key
.
Valid values for the -b
option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N \"\" -f my-new-server-key
.
For all of these commands, you can replace my-new-server-key with a string of your choice.
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Update host keys for your SFTP-enabled server in the Transfer Family User Guide.
" + "documentation": "The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.
Use the following command to generate an RSA 2048 bit key with no passphrase:
ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key
.
Use a minimum value of 2048 for the -b
option. You can create a stronger key by using 3072 or 4096.
Use the following command to generate an ECDSA 256 bit key with no passphrase:
ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key
.
Valid values for the -b
option for ECDSA are 256, 384, and 521.
Use the following command to generate an ED25519 key with no passphrase:
ssh-keygen -t ed25519 -N \"\" -f my-new-server-key
.
For all of these commands, you can replace my-new-server-key with a string of your choice.
If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.
For more information, see Manage host keys for your SFTP-enabled server in the Transfer Family User Guide.
" }, "IdentityProviderDetails": { "shape": "IdentityProviderDetails", @@ -5562,7 +5567,7 @@ }, "Protocols": { "shape": "Protocols", - "documentation": "Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be AWS_DIRECTORY_SERVICE
or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to SERVICE_MANAGED
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:
SFTP
(Secure Shell (SSH) File Transfer Protocol): File transfer over SSH
FTPS
(File Transfer Protocol Secure): File transfer with TLS encryption
FTP
(File Transfer Protocol): Unencrypted file transfer
AS2
(Applicability Statement 2): used for transporting structured business-to-business data
If you select FTPS
, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.
If Protocol
includes either FTP
or FTPS
, then the EndpointType
must be VPC
and the IdentityProviderType
must be either AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes FTP
, then AddressAllocationIds
cannot be associated.
If Protocol
is set only to SFTP
, the EndpointType
can be set to PUBLIC
and the IdentityProviderType
can be set to any of the supported identity types: SERVICE_MANAGED
, AWS_DIRECTORY_SERVICE
, AWS_LAMBDA
, or API_GATEWAY
.
If Protocol
includes AS2
, then the EndpointType
must be VPC
, and domain must be Amazon S3.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In additon to a workflow to execute when a file is uploaded completely, WorkflowDeatails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
To remove an associated workflow from a server, you can provide an empty OnUpload
object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In addition to a workflow to execute when a file is uploaded completely, WorkflowDetails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
To remove an associated workflow from a server, you can provide an empty OnUpload
object, as in the following example.
aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'
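The `WorkflowDetails` shape that the CLI example above passes can be sketched as a dict; the workflow ID and role ARN below are placeholders.

```python
# Placeholder IDs: the shape of the --workflow-details payload described above.
workflow_details = {
    "OnUpload": [
        {
            "WorkflowId": "w-1234567890abcdef0",
            "ExecutionRole": "arn:aws:iam::111122223333:role/workflow-execution-role",
        }
    ],
    # A workflow to run on partial upload (a file still open when the
    # session disconnects); empty here.
    "OnPartialUpload": [],
}
```

Passing `{"OnUpload": []}` instead, as in the CLI example above, removes any workflow associated with the server.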
Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources.
" } }, - "documentation": "Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In additon to a workflow to execute when a file is uploaded completely, WorkflowDeatails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.
In addition to a workflow to execute when a file is uploaded completely, WorkflowDetails
can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when a file is open when the session disconnects.
Currently, the following step types are supported.
COPY: Copy the file to another location.
CUSTOM: Perform a custom step with an Lambda function target.
DELETE: Delete the file.
TAG: Add a tag to the file.
Currently, the following step types are supported.
COPY
- Copy the file to another location.
CUSTOM
- Perform a custom step with a Lambda function target.
DECRYPT
- Decrypt a file that was encrypted before it was uploaded.
DELETE
- Delete the file.
TAG
- Add a tag to the file.
Details for a step that performs a file copy.
Consists of the following values:
A description
An S3 location for the destination of the file copy.
A flag that indicates whether or not to overwrite an existing file of the same name. The default is FALSE
.
Details for a step that performs a file copy.
Consists of the following values:
A description
An Amazon S3 location for the destination of the file copy.
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE
.
Details for a step that invokes a lambda function.
Consists of the lambda function name, target, and timeout (in seconds).
" + "documentation": "Details for a step that invokes an Lambda function.
Consists of the Lambda function's name, target, and timeout (in seconds).
" }, "DeleteStepDetails": { "shape": "DeleteStepDetails", @@ -5762,10 +5767,11 @@ }, "TagStepDetails": { "shape": "TagStepDetails", - "documentation": "Details for a step that creates one or more tags.
You specify one or more tags: each tag contains a key/value pair.
" + "documentation": "Details for a step that creates one or more tags.
You specify one or more tags. Each tag contains a key-value pair.
" }, "DecryptStepDetails": { - "shape": "DecryptStepDetails" + "shape": "DecryptStepDetails", + "documentation": "Details for a step that decrypts an encrypted file.
Consists of the following values:
A descriptive name
An Amazon S3 or Amazon Elastic File System (Amazon EFS) location for the source file to decrypt.
An S3 or Amazon EFS location for the destination of the file decryption.
A flag that indicates whether to overwrite an existing file of the same name. The default is FALSE
.
The type of encryption that's used. Currently, only PGP encryption is supported.
The basic building block of a workflow.
"