How can I ensure CloudFront has the correct version of an asset when performing a rolling deployment?

We are currently using Capifony and ec2-capify to perform rolling deployments of our code to a set of instances behind an ELB. We also use CloudFront to serve static assets, which we version with a query string (like ?v1 or ?v2).

We encountered a rare issue when updating the asset version. If the current version is v1 and we are rolling out v2 one server at a time, then for requests made during the v2 rollout the following can happen:

  • CloudFront is asked for the v2 asset and misses (it is not yet cached).
  • CloudFront goes to the ELB and requests the asset.
  • The ELB picks a server, and one of two things happens: the request lands on one of the newly deployed servers (serving v2), OR it lands on an old server (still serving v1).
  • Either way, CloudFront caches the response under the v2 URL. If the request hit a v1 server, the wrong content is now cached and served as v2.
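
A quick way to confirm what CloudFront has actually cached for a versioned URL is to inspect the response headers and the body it returns (a minimal sketch; the domain below is a placeholder, while x-cache and age are standard CloudFront response headers):

# Check whether the ?v2 URL is a cache hit and what content it currently serves.
# Compare the body hash against the v2 file in your build output.
curl -sD - "https://d111111abcdef8.cloudfront.net/css/asset.css?v2" -o /tmp/asset.css \
    | grep -iE '^(x-cache|age):'
md5sum /tmp/asset.css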

Our current workaround is to do another deployment that bumps the asset version again.

Is there a way to force CloudFront, via the ELB, to hit only our updated (v2) servers and ignore the v1 ones?

Or am I missing an alternative solution that would resolve the problem?



3 answers


The approach we took was to ditch our existing asset deployment pipeline entirely. In the "old" way, we used the asset.css?v=<version> model, with CloudFront pointed at an origin served by multiple instances.

The approach we moved to is a hash-named, S3-backed asset model. Instead of asset.css?v=<version> we have asset-<hash-of-contents>.css, synced to an S3 bucket. The bucket gradually accumulates newer and newer versions, but the old versions remain available if we decide to roll back, or if something still links to them, e.g. from an email (a common problem with images).

The S3 sync runs before we deploy to the web servers that serve the HTML referencing the assets, so CloudFront can always serve the correct asset.



Here's a sample script:

#!/usr/bin/env bash

set -e # Fail on any error
set -x

if [ "$#" -ne "1" ]
then
    echo "Usage: call with the name of the environment you're deploying to"
    exit 1
fi

CDN_ENVIRONMENT=$1

S3_BUCKET="s3://static-bucket-name-of-your-choice/${CDN_ENVIRONMENT}/"

echo "Generating assets

... do the asset generation here ...

echo "Copying to S3"

# Now do the actual copying of the web dir. We use size-only because otherwise all files are newer, and all get copied.
aws s3 sync --exclude "some-folder-to-exclude/*" --acl public-read --size-only ./web/ ${S3_BUCKET}

echo "Copy to S3 complete"

      



I think the correct deployment strategy in your case would be to first deploy instances that can serve both the v1 and v2 assets (while still referencing v1 in the HTML), and then do a second deployment that switches the references to v2.
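
A minimal sketch of what those two phases could look like (the config file path and variable name are hypothetical, not part of this answer): phase one ships both asset versions to every instance while the HTML still references v1; phase two only flips the reference once every instance has v2 on disk.

# Phase 1 deploy: each instance now has both asset trees on disk,
# but the templates still read v1, so all HTML keeps referencing v1 assets.
echo "ASSET_VERSION=v1" > /etc/myapp/asset_version.env

# Phase 2 deploy (after phase 1 has reached every instance): flip the reference.
# Whichever server the ELB picks, it already has the v2 assets available.
echo "ASSET_VERSION=v2" > /etc/myapp/asset_version.env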



ELB also supports "sticky sessions" ( http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_StickySessions.html ), but I don't see how they would help here: the per-user cookie would defeat the benefit of CloudFront caching.



When CloudFront receives a 404 from your origin (presumably because the origin hasn't received the new build yet), it caches that 404 for 5 minutes. You can change this behavior by creating a "Custom Error Response" for your distribution. The custom error response lets you set a very low error-caching TTL, so CloudFront will keep going back to your ELB until it finds the new file. The downside is that CloudFront will effectively no longer cache 404s, so your ELB has to handle that load (which is hopefully small!).

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HTTPStatusCodes.html#HTTPStatusCodes-no-custom-error-pages
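
For reference, a rough sketch of setting that low error-caching TTL with the AWS CLI and jq (the distribution ID is a placeholder, and note that this replaces any existing custom error responses rather than appending to them):

DIST_ID="E1EXAMPLE"

# Fetch the current distribution config and its ETag (required for the update call).
aws cloudfront get-distribution-config --id "$DIST_ID" > dist.json
ETAG=$(jq -r '.ETag' dist.json)

# Add a custom error response for 404 with a 10-second error-caching TTL.
jq '.DistributionConfig
    | .CustomErrorResponses = {Quantity: 1, Items: [{ErrorCode: 404, ErrorCachingMinTTL: 10}]}' \
    dist.json > new-config.json

aws cloudfront update-distribution --id "$DIST_ID" \
    --if-match "$ETAG" --distribution-config file://new-config.json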
