
Updating


v3.0.3 -> v4.0.0

Updating from v3 to v4 is a bit involved, as a number of major internal changes were made as part of implementing the new features. A couple of Lambdas exist to aid in the migration, and you will be using them here.

Do be mindful of specifics in this migration guide, as each detail, no matter how small, is important.

Note: be sure to run each of these aws commands in the region you created the stack in. This may require passing --region REGION or --profile PROFILE if you don't have AWS configured to point to the right region by default. You will also need to ensure that region is selected in the console.
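
For example, you can check which region the CLI currently defaults to, or pass a region and profile explicitly on any command (us-west-2 and my-profile below are just placeholders):

    aws configure get region
    aws s3 ls --region us-west-2 --profile my-profile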

  1. First, if you haven't already, install Git, the AWS CLI, and the AWS SAM CLI.

  2. Now, open a terminal (or Command Prompt in Windows), and run the following commands to get the current version of the source code. SOURCE_DIRECTORY can be any directory you wish.

    rem Windows
    git clone ^
      --branch create-migrator ^
      --depth 1 ^
      https://github.com/awslabs/aws-api-gateway-developer-portal.git ^
      "%SOURCE_DIRECTORY%"
    cd "%SOURCE_DIRECTORY%"
    # Unix-like
    git clone \
      --branch create-migrator \
      --depth 1 \
      https://github.com/awslabs/aws-api-gateway-developer-portal.git \
      "${SOURCE_DIRECTORY}"
    cd "${SOURCE_DIRECTORY}"
  3. After the above completes successfully, run the following commands to back up your S3 bucket, where CUSTOM_PREFIX is as explained in the README and TEMP_BUCKET is a bucket name not currently in use.

    rem Windows
    aws s3 mb s3://%TEMP_BUCKET%
    aws s3 cp --recursive ^
      s3://%CUSTOM_PREFIX%-dev-portal-artifacts ^
      s3://%TEMP_BUCKET%
    # Unix-like
    aws s3 mb s3://${TEMP_BUCKET}
    aws s3 cp --recursive \
      s3://${CUSTOM_PREFIX}-dev-portal-artifacts \
      s3://${TEMP_BUCKET}
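
    Optionally, you can sanity-check the copy by listing the temporary bucket and comparing the object count and total size against the original bucket:

    rem Windows
    aws s3 ls --recursive --summarize s3://%TEMP_BUCKET%
    # Unix-like
    aws s3 ls --recursive --summarize s3://${TEMP_BUCKET}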


  4. Create a Lambda in the region your stack is deployed in, using the latest Node.js runtime and with administrator privileges. The name of this Lambda will be referred to as TEMP_LAMBDA. Copy the source code from here into the newly created Lambda and save it.
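
    If you prefer the CLI to the console for this, a minimal sketch follows. It assumes you have saved the migration source as index.js (exporting its handler as index.handler), zipped it into exporter.zip, and that ADMIN_ROLE_ARN is an existing IAM role that Lambda can assume and that has administrator access; use whatever the latest Node.js runtime is at the time (nodejs12.x is shown only as an example).

    rem Windows
    aws lambda create-function ^
      --function-name %TEMP_LAMBDA% ^
      --runtime nodejs12.x ^
      --handler index.handler ^
      --timeout 900 ^
      --role %ADMIN_ROLE_ARN% ^
      --zip-file fileb://exporter.zip
    # Unix-like
    aws lambda create-function \
      --function-name ${TEMP_LAMBDA} \
      --runtime nodejs12.x \
      --handler index.handler \
      --timeout 900 \
      --role ${ADMIN_ROLE_ARN} \
      --zip-file fileb://exporter.zip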

  5. Run the following, where STACK_NAME is the name of your existing stack, and verify the output JSON contains a status code between 200 and 299 inclusive.

    rem Windows
    aws lambda invoke ^
      --cli-binary-format raw-in-base64-out ^
      --function-name %TEMP_LAMBDA% ^
      --payload "{""StackName"":""%STACK_NAME%"",""Bucket"":""%TEMP_BUCKET%""}" ^
      response.json
    type response.json
    # Unix-like
    aws lambda invoke \
      --cli-binary-format raw-in-base64-out \
      --function-name ${TEMP_LAMBDA} \
      --payload '{"StackName":"'"${STACK_NAME}"'","Bucket":"'"${TEMP_BUCKET}"'"}' \
      /dev/stdout
  6. Go to CloudFormation, open the stack corresponding to your developer portal instance, write down the parameters currently used in the stack, and delete the stack. Wait for deletion to complete. If it fails, clear the buckets generated by the stack and/or take whatever other steps are needed so the deletion can complete, then reattempt until it succeeds.

    WARNING: Take particular care to write down the stack's parameters before deleting it, as you'll need them for the next step.

    Also, do NOT delete your deployment bucket or the TEMP_BUCKET bucket you created in step 3; those are essential.
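
    If you prefer the CLI for this step, a sketch along these lines (Unix-like shown; the Windows form uses %STACK_NAME% and ^ continuations as in the other steps) records the parameters to a file, deletes the stack, and waits for deletion to finish:

    # Unix-like
    aws cloudformation describe-stacks \
      --stack-name "${STACK_NAME}" \
      --query "Stacks[0].Parameters" > stack-parameters.json
    aws cloudformation delete-stack --stack-name "${STACK_NAME}"
    aws cloudformation wait stack-delete-complete --stack-name "${STACK_NAME}"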

  7. Reinstall the stack as explained here, using the variables you wrote down in the previous step, and wait for deployment to complete.

  8. Run the following command, where STACK_NAME is the name of your new stack (usually the same as in step 5). It will print out a quoted value, and the value without quotes is the IMPORTER_LAMBDA you'll need in the next step.

    rem Windows
    aws cloudformation describe-stack-resource ^
      --stack-name "%STACK_NAME%" ^
      --logical-resource-id UserGroupImporter ^
      --query StackResourceDetail.PhysicalResourceId
    # Unix-like
    aws cloudformation describe-stack-resource \
      --stack-name "${STACK_NAME}" \
      --logical-resource-id UserGroupImporter \
      --query StackResourceDetail.PhysicalResourceId
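
    As an optional convenience on Unix-like systems, adding --output text prints the value without the quotes, so you can capture it straight into a variable:

    # Unix-like
    IMPORTER_LAMBDA=$(aws cloudformation describe-stack-resource \
      --stack-name "${STACK_NAME}" \
      --logical-resource-id UserGroupImporter \
      --query StackResourceDetail.PhysicalResourceId \
      --output text)
    echo "${IMPORTER_LAMBDA}"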
  9. Run the following command, and verify the output JSON contains a status code between 200 and 299 inclusive.

    rem Windows
    aws lambda invoke ^
      --cli-binary-format raw-in-base64-out ^
      --function-name %IMPORTER_LAMBDA% ^
      --payload "{""Bucket"":""%TEMP_BUCKET%""}" ^
      response.json
    type response.json
    # Unix-like
    aws lambda invoke \
      --cli-binary-format raw-in-base64-out \
      --function-name ${IMPORTER_LAMBDA} \
      --payload '{"Bucket":"'"${TEMP_BUCKET}"'"}' \
      /dev/stdout

    Note: for AWS CLI v1, omit the --cli-binary-format raw-in-base64-out parameter.

    You can give it explicit customer and feedback table backup ARNs via the optional CustomersBackupArn and FeedbackBackupArn properties, if you want to restore specific backups instead of defaulting to the latest of each. In the example below, CUSTOMERS_BACKUP_ARN and FEEDBACK_BACKUP_ARN stand in for those backup ARNs.

    rem Windows
    aws lambda invoke ^
      --cli-binary-format raw-in-base64-out ^
      --function-name %IMPORTER_LAMBDA% ^
      --payload "{""Bucket"":""%TEMP_BUCKET%"",""CustomersBackupArn"":""%CUSTOMERS_BACKUP_ARN%"",""FeedbackBackupArn"":""%FEEDBACK_BACKUP_ARN%""}" ^
      response.json
    type response.json
    # Unix-like
    aws lambda invoke \
      --cli-binary-format raw-in-base64-out \
      --function-name ${IMPORTER_LAMBDA} \
      --payload '{"Bucket":"'"${TEMP_BUCKET}"'","CustomersBackupArn":"'"${CUSTOMERS_BACKUP_ARN}"'","FeedbackBackupArn":"'"${FEEDBACK_BACKUP_ARN}"'"}' \
      /dev/stdout
  10. Run the following commands, where TEMP_BUCKET and CUSTOM_PREFIX are as explained in step 3.

    rem Windows
    aws s3 cp --recursive ^
      --exclude dev-portal-migrate/* ^
      s3://%TEMP_BUCKET% ^
      s3://%CUSTOM_PREFIX%-dev-portal-artifacts
    aws s3 rb s3://%TEMP_BUCKET% --force
    # Unix-like
    aws s3 cp --recursive \
      --exclude 'dev-portal-migrate/*' \
      s3://${TEMP_BUCKET} \
      s3://${CUSTOM_PREFIX}-dev-portal-artifacts
    aws s3 rb s3://${TEMP_BUCKET} --force

    When removing the bucket, --force is used because the bucket is non-empty; the data in it is no longer needed and would be deleted regardless.


Resource requirements

Note: depending on the number of users in your pool and your existing account quotas, you may need to request a temporary quota increase from Cognito. You may also need to temporarily provision some DynamoDB capacity. Here's a rough estimate of the resources you'll need to have available, based on the number of users n you need to migrate.

  • Cognito:
    • n / 600 requests per second (rounded up) for AdminAddUserToGroup.
    • n / 600 requests per second (rounded up) for AdminListGroupsForUser.
    • n / 15000 requests per second (rounded up) for ListUsers.
  • DynamoDB:
    • n / 600 read capacity units (rounded up)
    • n / 600 write capacity units (rounded up)
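
For example, migrating n = 30,000 users works out to roughly 50 requests per second each for AdminAddUserToGroup and AdminListGroupsForUser, 2 requests per second for ListUsers, and about 50 read and 50 write capacity units on DynamoDB.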

You can request such a quota increase by reaching out to AWS Support.

If you need to request a quota increase, please be sure to include a link to this migration section as part of your reason for why.

v2.3.3 -> v3.0.0

Updating from v2 to v3 should be a minimally invasive process. To upgrade using SAM:

  • Download version 3.0.0 of the developer portal repository.

In the command below, replace your-lambda-artifacts-bucket-name with the name of a bucket that you manage and that already exists. Then, run:

  • sam package --template-file ./cloudformation/template.yaml --output-template-file ./cloudformation/packaged.yaml --s3-bucket your-lambda-artifacts-bucket-name

In the command below, replace your-lambda-artifacts-bucket-name with the name of a bucket that you manage and that already exists, and replace custom-prefix with a prefix that is globally unique, like your org name or username. Verify that your stack is named "dev-portal", or replace "dev-portal" with the name of your stack. Then, run:

  • sam deploy --template-file ./cloudformation/packaged.yaml --stack-name "dev-portal" --s3-bucket your-lambda-artifacts-bucket-name --capabilities CAPABILITY_NAMED_IAM --parameter-overrides StaticAssetRebuildToken="v3.0.0" DevPortalSiteS3BucketName="custom-prefix-dev-portal-static-assets" ArtifactsS3BucketName="custom-prefix-dev-portal-artifacts"
  • Then, navigate to the S3 console, find the bucket named "custom-prefix-dev-portal-artifacts" (where custom-prefix is the prefix from the above command), and download and delete all the files under /catalog (a CLI sketch for this is shown after this list). The v3.0.0 dev portal expects the swagger files in the catalog to be named in a particular format.
  • To repopulate the catalog, create an admin user and use the admin panel interface as documented in the README.
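
If you'd rather handle the catalog backup and cleanup from the CLI, a sketch like the following should work, where custom-prefix is your prefix and ./catalog-backup is just an example local directory:

    aws s3 cp --recursive s3://custom-prefix-dev-portal-artifacts/catalog ./catalog-backup
    aws s3 rm --recursive s3://custom-prefix-dev-portal-artifacts/catalog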