Can I scale AWS Spot instances with CloudWatch signals?
I need to do some server-side image manipulation. Typically these images are loaded or imported into our systems at a fairly modest rate, but sometimes a client is added with a very large volume of images that need to be processed right away. While these still need to be processed within a reasonable amount of time, keeping these larger jobs cheap is more important than getting them done quickly.
As such, I would like to use Spot instances to keep costs as low as possible, keeping the maximum bid price relatively fixed (changing it manually as needed) and moving the desired number of instances up and down as the number of messages in the queue fluctuates.
I'm very new to AWS, but here's what I've tried so far (all using the AWS Management Console):
- Create an SQS queue to hold messages about incoming image processing tasks
- Create two CloudWatch alarms
  - ScaleIn, which triggers when ApproximateNumberOfMessagesVisible <= 1 for 300 seconds
  - ScaleOut, which triggers when ApproximateNumberOfMessagesVisible > 1 for 300 seconds
- Create a launch configuration that bids some maximum price for Spot instances
- Create an Auto Scaling group that uses my launch configuration to scale between 0 and n instances
- Add two scaling policies to the Auto Scaling group
  - "Decrease group size", which removes 1 instance when the ScaleIn alarm fires
  - "Increase group size", which adds 1 instance when the ScaleOut alarm fires
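Taken together, the two alarms and the two step policies above amount to a simple decision rule. A minimal sketch of that logic (the function name, `max_size` default, and clamping behavior are illustrative, not an AWS API):

```python
def desired_capacity(messages_visible, current, min_size=0, max_size=10):
    """Mirror the alarm/policy pairing described above.

    ScaleOut: ApproximateNumberOfMessagesVisible > 1  -> add 1 instance.
    ScaleIn:  ApproximateNumberOfMessagesVisible <= 1 -> remove 1 instance.
    The result is clamped to the Auto Scaling group's [min_size, max_size].
    """
    if messages_visible > 1:
        current += 1
    else:
        current -= 1
    return max(min_size, min(max_size, current))


# A backlog of 5 messages scales the group from 0 up to 1 instance;
# an empty queue scales it back down, never going below min_size.
print(desired_capacity(5, 0))   # scale out
print(desired_capacity(0, 1))   # scale in
print(desired_capacity(0, 0))   # already at min_size, stays at 0
```

Note that each alarm breach only moves the group by one step; Auto Scaling re-evaluates on the next 300-second period rather than jumping straight to a queue-proportional size.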
Then I use the SQS Management Console to manually add a couple of test messages. The alarms appear to be triggered, but the following message appears on the Auto Scaling group's Scaling History tab:
Description: Placing Spot instance request. Status Reason: Max spot instance count exceeded. Placing Spot instance request failed.
Cause: At 2014-08-12T23:12:51Z a difference between desired and actual capacity changed the desired capacity, increasing the capacity from 0 to 1.
Can the number of Spot instances be managed in an Auto Scaling group this way? If I follow the same procedure but instead create a regular on-demand EC2 Auto Scaling group / launch configuration (not Spot instances), the number of instances in the group grows and shrinks as expected.
According to this AWS document, there is a maximum number of Spot instance requests you can have in one region:
Spot Request Limits
By default, you are limited to a total of 5 Spot instance requests per region. New AWS accounts may have lower limits. Instance types T2, I2, and HS1 are currently not available for Spot. Also, some instance types are not available in all regions. (For information about instance types, see Instance Types.)
It looks like you are running into this limit; you will need to fill out this form to request a limit increase.
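The effect of that limit on the scaling behavior can be sketched as follows (a pure-logic illustration; the function name is made up, and the default of 5 is just the documented per-region default, which new accounts may not even have):

```python
DEFAULT_SPOT_REQUEST_LIMIT = 5  # documented per-region default; new accounts may be lower


def grantable_spot_requests(desired, limit=DEFAULT_SPOT_REQUEST_LIMIT):
    """Spot requests beyond the regional limit are rejected with
    "Max spot instance count exceeded", as in the scaling history above,
    so the group can only ever reach min(desired, limit) instances."""
    return min(desired, limit)


# With the default limit of 5, a desired capacity of 3 is fine,
# but if the account's limit is 0, even scaling from 0 to 1 fails.
print(grantable_spot_requests(3))            # within the limit
print(grantable_spot_requests(1, limit=0))   # every request rejected
```

This also explains why the on-demand version of the same setup works: on-demand instances count against a separate (and typically higher) EC2 instance limit, not the Spot request limit.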