AWS: setting up auto-scale for EC2 instances

Posted by Elton Stoneman on Geeks with Blogs
Published on Wed, 16 Oct 2013 15:15:15 GMT

Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance.

This is a step-by-step guide that shows you how to create the auto-scaling configuration (which for EC2 you need to do with the command line), and then link your scaling policies to CloudWatch alarms in the Web Console.

I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.
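As a rough illustration of that alarm behaviour (not CloudWatch’s actual evaluation logic – the thresholds and sample period here are hypothetical), the high/low decision can be sketched in a few lines:

```python
# Sketch of the alarm logic driving the scaling policies.
# Thresholds are illustrative, not CloudWatch defaults.

HIGH_THRESHOLD = 10   # messages - "high" alarm fires above this
LOW_THRESHOLD = 2     # messages - "low" alarm fires below this

def evaluate_alarms(queue_depth_samples):
    """Return 'scale-up', 'scale-down' or None for a period of samples.

    Mirrors the behaviour described above: every sample in the
    period must breach the threshold before the alarm fires.
    """
    if all(depth > HIGH_THRESHOLD for depth in queue_depth_samples):
        return "scale-up"    # queue is getting too big
    if all(depth < LOW_THRESHOLD for depth in queue_depth_samples):
        return "scale-down"  # queue is draining down
    return None              # workload is steady - do nothing

print(evaluate_alarms([12, 15, 11]))  # scale-up
print(evaluate_alarms([1, 0, 1]))     # scale-down
print(evaluate_alarms([5, 6, 4]))     # None
```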

To start with, you need to manually set up your app in an EC2 VM; for a background worker, that would mean hosting your code in a Windows Service (I always use Topshelf).

If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code.
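A minimal sketch of that shared-core pattern, transposed to Python for illustration (the original would be a .NET library hosted by Topshelf or a Worker Role; the class and method names here are hypothetical):

```python
# Sketch of the shared-worker pattern: all processing logic lives in
# one host-agnostic class with start/stop entry points, so different
# hosts (Windows Service, Worker Role) can wrap the same code.
import threading

class QueueWorker:
    """Host-agnostic worker core exposing Start()/Stop()-style hooks."""

    def __init__(self):
        self._stop_event = threading.Event()
        self.running = False

    def start(self):
        # Both hosts call this on startup; real code would loop here,
        # pulling messages until stop() is called:
        #   while not self._stop_event.is_set(): process_next_message()
        self.running = True

    def stop(self):
        # Both hosts call this on shutdown
        self._stop_event.set()
        self.running = False
```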

When you have your instance set up with the Windows Service running automatically, and you’ve tested that it starts up and works properly after a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console:

[Screenshot: Create Image (EBS AMI) in the EC2 Web Console]

When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool.

Now we’re ready to go.

1. Create a launch configuration

This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

 as-create-launch-config 
  sc-xyz-launcher # name of the launch config 
  --image-id ami-7b9e9f12  # id of the AMI you extracted from your VM 
  --region eu-west-1 # which region the new instance gets created in 
  --instance-type t1.micro  # size of the instance to create 
  --group quicklaunch-1 # security group for the new instance

2. Create an auto-scaling group

The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:

 as-create-auto-scaling-group 
  sc-xyz-asg  # auto-scaling group name 
  --region eu-west-1  # region to create in 
  --launch-configuration sc-xyz-launcher  # name of the launch config to invoke for new instances 
  --min-size 1 # minimum number of nodes in the group 
  --max-size 5  # maximum number of nodes in the group 
  --default-cooldown 300 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required 
  --availability-zones eu-west-1a eu-west-1b eu-west-1c # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them
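The --default-cooldown behaviour described in the comment above amounts to a simple time check before each scaling event; a minimal sketch, using the 300-second value from the command:

```python
# Sketch of the cooldown check: after a scaling event, further
# events are suppressed until the cooldown period has elapsed.
DEFAULT_COOLDOWN = 300  # seconds, matching --default-cooldown above

def can_scale(now, last_scaling_event, cooldown=DEFAULT_COOLDOWN):
    """Return True if enough time has passed since the last event.

    Timestamps are plain seconds for illustration.
    """
    if last_scaling_event is None:
        return True  # no previous event - scaling is allowed
    return (now - last_scaling_event) >= cooldown

print(can_scale(1000, None))  # True - first event
print(can_scale(1000, 800))   # False - only 200s elapsed
print(can_scale(1200, 800))   # True - 400s >= 300s cooldown
```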

3. Create a scale-up policy

The policy dictates what happens when the “high” alarm is breached. It links to the auto-scaling group; this sample results in one additional node being spun up:

 as-put-scaling-policy 
  scale-up-policy # policy name 
  -g sc-xyz-asg # auto-scaling group the policy works with 
  --adjustment 1 # size of the adjustment 
  --region eu-west-1 # region 
  --type ChangeInCapacity # type of adjustment, this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%
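To make the two adjustment types concrete, here’s a sketch of how a new desired capacity might be computed, clamped to the min/max sizes from step 2 (the rounding for percentage adjustments is illustrative – AWS applies its own rounding rules):

```python
# Sketch of the two adjustment types, clamped to the group's
# min/max size (1 and 5, matching the group created in step 2).
import math

MIN_SIZE, MAX_SIZE = 1, 5

def apply_adjustment(current, adjustment, adj_type):
    """Compute a new desired capacity for a scaling policy."""
    if adj_type == "ChangeInCapacity":
        desired = current + adjustment  # fixed number of nodes
    elif adj_type == "PercentChangeInCapacity":
        # relative change, e.g. adjustment=50 means grow by 50%
        desired = current + math.ceil(current * adjustment / 100)
    else:
        raise ValueError(f"unknown adjustment type: {adj_type}")
    # never go outside the group's configured bounds
    return max(MIN_SIZE, min(MAX_SIZE, desired))

print(apply_adjustment(2, 1, "ChangeInCapacity"))          # 3
print(apply_adjustment(2, 50, "PercentChangeInCapacity"))  # 3
print(apply_adjustment(5, 1, "ChangeInCapacity"))          # 5 (capped)
```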

4. Create a scale-down policy

The policy dictates what happens when the “low” alarm is breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

 as-put-scaling-policy 
  scale-down-policy 
  -g sc-xyz-asg 
  "--adjustment=-1" # in Windows, use double-quotes to surround a negative adjustment value 
  --type ChangeInCapacity 
  --region eu-west-1

5. Create a “high” CloudWatch alarm

We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group, when the group is working too hard.

Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute:

[Screenshot: configuring the CloudWatch alarm metric and threshold]

Then link the alarm to the scale-up policy in your group:

[Screenshot: linking the alarm action to the scale-up policy]

6. Create a “low” CloudWatch alarm

The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.

And that’s it. You don’t need your original VM any more – the auto-scaling group keeps at least the minimum number of nodes running. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up the queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.

© Geeks with Blogs or respective owner
