In this blog post we are going to look at how to use the VMware Cloud Assembly auto-scale functionality when building AWS native blueprints.
Cloud Assembly integrates with AWS Auto Scaling, and you can set it up right from your blueprint editor. It lets you define an auto scaling group for machines, which automatically increases or decreases resources based on policies.
Configuring Auto-Scale in your Blueprint
Let's configure an example scale policy for a machine.
In your blueprint editor, under machine properties, you can find the autoScaleConfiguration machine property. This is where you can configure desiredCapacity, maxSize, metricScaleRules, minSize, policy and scheduledScaleRules.
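Based on those property names, the overall shape of the block is roughly the following. Note this is only a sketch; the exact nesting and value types should be verified against the schema hints your own blueprint editor shows:

```yaml
# Sketch of the autoScaleConfiguration machine property.
# Verify key nesting against your blueprint editor's schema hints.
autoScaleConfiguration:
  desiredCapacity: 1
  minSize: 1
  maxSize: 3
  metricScaleRules: []      # metric-driven rules go here
  scheduledScaleRules: []   # time-driven rules go here
```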
Instead of configuring all of the settings in the YAML code editor, let's do it from the Properties tab.
On the Properties tab in your editor, scroll until you find Rules of the METRIC policy and Rules of the SCHEDULED policy. As you can see, you can configure a policy driven by a metric, or a scheduled policy that runs on a given interval. Let's build a metric policy.
Here are the action settings of the scale operation you can configure:
• Action type – the scaling action that occurs when the scale rule fires.
• Adjustment value – the amount by which to scale, based on the specified adjustment type. Positive values add capacity, and negative values subtract it.
• Cooldown – the minimum time, in seconds, between scaling activities; after a scaling activity completes, this much time must pass before the next one can start.
Here are the trigger settings of the scale operation you can configure:
• Metric – the metric the rule monitors; it is the catalyst for a scaling action.
• Period – the period, in seconds, over which the specified statistic is applied. Valid values are 10, 30, and any multiple of 60.
• Operator – the comparison operator used when evaluating the collected metric data against the threshold.
• Statistic – how metrics from multiple instances are combined.
• Threshold – the value of the metric that, when reached, triggers the scaling action.
• Evaluation periods – the number of periods over which data is compared to the threshold. The total evaluation time cannot exceed one day, so this number multiplied by the period cannot exceed 86,400 seconds.
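These constraints are worth sanity-checking before you deploy. As a minimal illustration (this helper is my own, not part of Cloud Assembly), the rules above translate to:

```python
# Illustrative helper (not part of Cloud Assembly): checks the period
# and evaluation-period constraints described above.

def validate_metric_rule(period: int, evaluation_periods: int) -> None:
    # Valid periods are 10, 30, or any positive multiple of 60 seconds.
    if period <= 0 or (period not in (10, 30) and period % 60 != 0):
        raise ValueError(f"invalid period: {period}")
    # Total evaluation time must not exceed one day (86,400 seconds).
    if period * evaluation_periods > 86_400:
        raise ValueError("total evaluation time exceeds 86,400 seconds")

validate_metric_rule(60, 3)  # the example used later in this post: 180 seconds, OK
```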
Let's say that for this example we want to:
• Change the count property of the machine resource upon scale, thereby adding additional machine resources.
• Add 2 machines each time the policy is triggered.
• Wait 60 seconds before performing another scaling activity.
• Scale when the average CPUUtilization metric goes above 1.
• Measure the average CPU over 3 periods × 60 seconds = 180 seconds.
In a real environment some of these settings would be higher, but these values will do for the sake of the lab.
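Putting those example numbers together, the behaviour can be sketched in plain Python. This is a toy simulation of the decision logic, not the actual Cloud Assembly engine:

```python
# Toy simulation of the example metric policy (not Cloud Assembly itself):
# scale out by 2 when the average CPUUtilization over 3 x 60s periods
# exceeds 1, never growing past a maximum group size of 3.

THRESHOLD = 1      # CPUUtilization threshold
EVAL_PERIODS = 3   # periods compared against the threshold
SCALE_BY = 2       # machines added per trigger
MAX_SIZE = 3       # upper bound on the scaling group

def should_scale(samples, count):
    """Return the new machine count after evaluating recent CPU samples."""
    window = samples[-EVAL_PERIODS:]
    if len(window) == EVAL_PERIODS and sum(window) / EVAL_PERIODS > THRESHOLD:
        return min(count + SCALE_BY, MAX_SIZE)  # never exceed maxSize
    return count

# With stress driving the CPU, three 60s averages above the threshold
# trigger a scale-out, capped at maxSize.
print(should_scale([85.0, 90.0, 88.0], count=1))  # prints 3
```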
My policy looks like this:
The YAML representation of this policy in my code editor would be:
policy: Metric
action:
  type: ChangeCount
  value: 2
  cooldown: 60
metric: CPUUtilization
period: 60
operator: GreaterThan
statistic: Average
threshold: 1
evaluationPeriods: 3
maxSize: 3
minSize: 1
Note that I have also specified the minimum and maximum number of nodes in the scaling group. The whole blueprint example can be found in my GitLab repo at bit.ly/The-Gitlab
To stress the VM once it is deployed, I run this as a cloud-init script:
cloudConfig: |
  repo_update: true
  repo_upgrade: all
  packages:
    - git
  runcmd:
    - sudo -s
    - yum install -y epel-release
    - yum install -y stress
    - stress --cpu 4 --timeout 4m
Now let's go to AWS, under EC2, and select the Auto Scaling Groups tab.
Initially we see only one instance of the deployed VM.
After the policy is triggered by the CPU stress, we can see a scale-out taking place.
If all went well, go grab a beer.