
Adding a "cooldown" or "pause time" between terminating 2 or more instances while reducing desired capacity


(Apologies in advance, I'm new to AWS.)

I'm using a CloudFormation stack to manage my ECS cluster.

Say we have an ASG with a desired capacity of 5 EC2 instances (MinSize: 1, MaxSize: 7). When I manually change the desired capacity from 5 to 2, the change set applied through the cluster reduces the instance count, and all the surplus instances are shut down immediately. There is no time to reschedule their containers onto the remaining instances. So going from 5 instances to 2, all 3 surplus instances are terminated at once, and with bad luck, if every container of a given type was running on those 3 machines, those containers no longer exist and the service goes down.

Is it possible to have a "cooldown" between each termination? Using scaling policies apparently won't help, since we don't want to set up a metric: the available metrics aren't useful in my case.

Please find some logs below:

2021-01-15 15:45:52 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Rolling update initiated. Terminating 3 obsolete instance(s) in batches of 1,while keeping at least 1 instance(s) in service. Waiting on resource signals with a timeout of PT5M when new instances are added to the autoscaling group.
2021-01-15 15:45:52 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Temporarily setting autoscaling group MinSize and DesiredCapacity to 3.
2021-01-15 15:45:54 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Terminating instance(s) [i-X]; replacing with 1 new instance(s).
2021-01-15 15:47:40 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  New instance(s) added to autoscaling group - Waiting on 1 resource signal(s) with a timeout of PT5M.
2021-01-15 15:47:40 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Successfully terminated instance(s) [i-X] (Progress 33%).
2021-01-15 15:52:42 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Terminating instance(s) [i-X]; replacing with 1 new instance(s).
2021-01-15 15:53:59 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  New instance(s) added to autoscaling group - Waiting on 1 resource signal(s) with a timeout of PT5M.
2021-01-15 15:53:59 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Successfully terminated instance(s) [i-X] (Progress 67%).
2021-01-15 15:59:02 UTC+0100    dev-cluster UPDATE_ROLLBACK_IN_PROGRESS The following resource(s) Failed to update: [autoScalingGroup].
2021-01-15 15:59:17 UTC+0100    securityGroup   UPDATE_IN_PROGRESS  -
2021-01-15 15:59:32 UTC+0100    securityGroup   UPDATE_COMPLETE -
2021-01-15 15:59:33 UTC+0100    launchConfiguration UPDATE_COMPLETE -
2021-01-15 15:59:34 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  -
2021-01-15 15:59:37 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Rolling update initiated. Terminating 2 obsolete instance(s) in batches of 1,while keeping at least 1 instance(s) in service. Waiting on resource signals with a timeout of PT5M when new instances are added to the autoscaling group.
2021-01-15 15:59:37 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Temporarily setting autoscaling group MinSize and DesiredCapacity to 3.
2021-01-15 15:59:38 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Terminating instance(s) [i-X]; replacing with 1 new instance(s).
2021-01-15 16:01:25 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  New instance(s) added to autoscaling group - Waiting on 1 resource signal(s) with a timeout of PT5M.
2021-01-15 16:01:25 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Successfully terminated instance(s) [i-X] (Progress 50%).
2021-01-15 16:01:46 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Received SUCCESS signal with UniqueId i-X
2021-01-15 16:01:47 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Terminating instance(s) [i-X]; replacing with 1 new instance(s).
2021-01-15 16:03:34 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  New instance(s) added to autoscaling group - Waiting on 1 resource signal(s) with a timeout of PT5M.
2021-01-15 16:03:34 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Received SUCCESS signal with UniqueId i-X
2021-01-15 16:03:34 UTC+0100    autoScalingGroup    UPDATE_IN_PROGRESS  Successfully terminated instance(s) [i-X] (Progress 100%).
2021-01-15 16:03:37 UTC+0100    autoScalingGroup    UPDATE_COMPLETE -
2021-01-15 16:03:37 UTC+0100    dev-cluster UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS    -
2021-01-15 16:03:38 UTC+0100    launchConfiguration DELETE_IN_PROGRESS  -
2021-01-15 16:03:39 UTC+0100    dev-cluster UPDATE_ROLLBACK_COMPLETE    -
2021-01-15 16:03:39 UTC+0100    launchConfiguration DELETE_COMPLETE -

Thanks in advance for your help!

Solution

To answer your direct question: there is no built-in feature that forces an ASG to remove only X instances at a time when the desired capacity drops.

If you haven't already, you should have a lifecycle hook on the ASG that triggers a script telling ECS to drain the containers off the instance before it is terminated (I'm assuming ECS from your context). In that case you would still need to manually lower the desired capacity by 1 at a time. https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/
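As a minimal sketch, a termination lifecycle hook might look like this in CloudFormation (resource names like `autoScalingGroup`, `DrainSnsTopic`, and `LifecycleHookRole` are placeholders, not taken from your stack; the SNS topic would trigger the draining Lambda from the blog post above):

```yaml
DrainLifecycleHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref autoScalingGroup
    LifecycleTransition: autoscaling:EC2_INSTANCE_TERMINATING
    HeartbeatTimeout: 900                      # up to 15 minutes to drain tasks
    DefaultResult: CONTINUE                    # terminate anyway if no completion signal arrives
    NotificationTargetARN: !Ref DrainSnsTopic  # SNS topic that invokes the drain script/Lambda
    RoleARN: !GetAtt LifecycleHookRole.Arn     # role allowing ASG to publish to the topic
```

The hook pauses each instance in a `Terminating:Wait` state until the drain script calls `complete-lifecycle-action` (or the heartbeat times out), which effectively spaces out terminations.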

If you lower the desired capacity in CloudFormation, you can attach an UpdatePolicy to the group telling CFN to perform a RollingUpdate, replacing instances in batches of 1. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
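A sketch of that attribute on the ASG resource (the `PT5M` timeout matches what your logs already show; adjust to taste):

```yaml
autoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MaxBatchSize: 1              # touch only one instance per batch
      MinInstancesInService: 1     # never drop below one healthy instance
      PauseTime: PT5M              # pause (or wait for signals) between batches
      WaitOnResourceSignals: true  # wait for cfn-signal from each new instance
  Properties:
    MinSize: "1"
    MaxSize: "7"
    DesiredCapacity: "2"
    # ... launch configuration, subnets, etc.
```

Note that your logs show a rolling update is already in effect, but it only governs instance *replacement*; a pure capacity decrease still removes the surplus instances without batching, which is why the lifecycle hook above matters.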

If you're using ECS, it's usually a good idea to set up 2 target tracking scaling policies: 1 on CPUReservation and 1 on MemoryReservation. If you want to force the ASG to scale in by no more than 1 instance at a time, you could also manually create step scaling policies on those metrics, but creating the 4 CloudWatch alarms in CFN would be painful.
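For illustration, a target tracking policy on the cluster's CPUReservation metric could be sketched like this (the `EcsCluster` reference and the 75% target are assumptions; a matching policy on MemoryReservation would look the same with the metric name swapped):

```yaml
CpuReservationPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref autoScalingGroup
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      CustomizedMetricSpecification:
        Namespace: AWS/ECS
        MetricName: CPUReservation   # percentage of cluster CPU reserved by tasks
        Dimensions:
          - Name: ClusterName
            Value: !Ref EcsCluster
        Statistic: Average
      TargetValue: 75.0              # keep ~75% of cluster CPU reserved
```

Target tracking creates and manages its own CloudWatch alarms, which is why it is much less painful than wiring up step scaling alarms by hand.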

Another option is to use capacity providers in ECS, which enables termination protection on any instance that is running tasks.
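A minimal sketch of a capacity provider with managed termination protection (note this requires `NewInstancesProtectedFromScaleIn: true` on the ASG itself, and `AutoScalingGroupArn` here is assumed to reference your existing group):

```yaml
EcsCapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      AutoScalingGroupArn: !Ref autoScalingGroup
      ManagedScaling:
        Status: ENABLED
        TargetCapacity: 100                  # aim for 100% utilization of provisioned capacity
      ManagedTerminationProtection: ENABLED  # ECS protects instances with running tasks from scale-in
```

With this in place, ECS itself adjusts the ASG's desired capacity and refuses to terminate instances that still have non-daemon tasks on them, which addresses the "unlucky instance kill" scenario directly.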
