Support for Mixed Instances ASG in worker_groups_launch_template variable (#468)

* Create ASG tags via the `for` expression from Terraform 0.12

* Updated support for Mixed Instances ASGs in the worker_groups_launch_template variable

* Updated launch_template example to include Spot and Mixed Instances ASGs via the worker_groups_launch_template variable

* Removed old config

* Removed workers_launch_template_mixed.tf file; added mixed/Spot support to the worker_groups_launch_template variable

* Updated examples/spot_instances/main.tf with mixed Spot and on-demand instances

* Removed launch_template_mixed from relevant files

* Updated README.md file

* Removed workers_launch_template.tf.bkp

* Fixed the case of a null on_demand_allocation_strategy with Spot allocation

* Fixed workers_launch_template.tf, covering Spot instances via Launch Templates
This commit is contained in:
Sergiu Plotnicu
2019-09-13 17:50:59 +03:00
committed by Max Williams
parent a47f464221
commit 461cf5482e
12 changed files with 97 additions and 485 deletions


@@ -19,6 +19,9 @@ project adheres to [Semantic Versioning](http://semver.org/).
- Added support for initial lifecycle hooks for autoscaling groups (by @barryib)
- Added option to recreate ASG when LT or LC changes (by @barryib)
- Ability to specify workers role name (by @ivanich)
- Added support for Mixed Instance ASG using `worker_groups_launch_template` variable (by @sppwf)
- Changed ASG Tags generation using terraform 12 `for` utility (by @sppwf)
- Removed `worker_groups_launch_template_mixed` variable (by @sppwf)
### Changed


@@ -118,7 +118,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | `[]` | no |
| cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | bool | `"false"` | no |
| cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | bool | `"true"` | no |
| cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false. | string | `""` | no |
| cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage\_cluster\_iam\_resources is set to false. | string | `""` | no |
| cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | `""` | no |
| cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | number | `"90"` | no |
| cluster\_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | n/a | yes |
@@ -126,7 +126,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| cluster\_version | Kubernetes version to use for the EKS cluster. | string | `"1.14"` | no |
| config\_output\_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Should end in a forward slash `/` . | string | `"./"` | no |
| iam\_path | If provided, all IAM roles will be created on this path. | string | `"/"` | no |
| kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. \["-r", "MyEksRole"\]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_command | Command to use to fetch AWS EKS credentials. | string | `"aws-iam-authenticator"` | no |
| kubeconfig\_aws\_authenticator\_command\_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | list(string) | `[]` | no |
| kubeconfig\_aws\_authenticator\_env\_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"}. | map(string) | `{}` | no |
@@ -144,15 +144,14 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| tags | A map of tags to add to all resources. | map(string) | `{}` | no |
| vpc\_id | VPC where the cluster and workers will be deployed. | string | n/a | yes |
| worker\_additional\_security\_group\_ids | A list of additional security group ids to attach to worker instances | list(string) | `[]` | no |
| worker\_ami\_name\_filter | Additional name filter for AWS EKS worker AMI. Default behaviour will get latest for the cluster_version but could be set to a release from amazon-eks-ami, e.g. "v20190220" | string | `"v*"` | no |
| worker\_ami\_name\_filter | Additional name filter for AWS EKS worker AMI. Default behaviour will get latest for the cluster\_version but could be set to a release from amazon-eks-ami, e.g. "v20190220" | string | `"v*"` | no |
| worker\_create\_security\_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | bool | `"true"` | no |
| worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template\_mixed | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys. | any | `[]` | no |
| worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers\_group\_defaults for valid keys. | any | `[]` | no |
| worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | any | `[]` | no |
| worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `""` | no |
| worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | number | `"1025"` | no |
| workers\_additional\_policies | Additional policies to be added to workers | list(string) | `[]` | no |
| workers\_group\_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys. | any | `{}` | no |
| workers\_group\_defaults | Override default values for target groups. See workers\_group\_defaults\_defaults in local.tf for valid keys. | any | `{}` | no |
| write\_aws\_auth\_config | Whether to write the aws-auth configmap file. | bool | `"true"` | no |
| write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | bool | `"true"` | no |


@@ -35,21 +35,6 @@ EOS
data "aws_caller_identity" "current" {
}
data "template_file" "launch_template_mixed_worker_role_arns" {
count = local.worker_group_launch_template_mixed_count
template = file("${path.module}/templates/worker-role.tpl")
vars = {
worker_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${element(
coalescelist(
aws_iam_instance_profile.workers_launch_template_mixed.*.role,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name,
),
count.index,
)}"
}
}
data "template_file" "launch_template_worker_role_arns" {
count = local.worker_group_launch_template_count
template = file("${path.module}/templates/worker-role.tpl")
@@ -91,7 +76,6 @@ data "template_file" "config_map_aws_auth" {
concat(
data.template_file.launch_template_worker_role_arns.*.rendered,
data.template_file.worker_role_arns.*.rendered,
data.template_file.launch_template_mixed_worker_role_arns.*.rendered,
),
),
)

data.tf

@@ -147,37 +147,6 @@ data "template_file" "launch_template_userdata" {
}
}
data "template_file" "workers_launch_template_mixed" {
count = local.worker_group_launch_template_mixed_count
template = file("${path.module}/templates/userdata.sh.tpl")
vars = {
cluster_name = aws_eks_cluster.this.name
endpoint = aws_eks_cluster.this.endpoint
cluster_auth_base64 = aws_eks_cluster.this.certificate_authority[0].data
pre_userdata = lookup(
var.worker_groups_launch_template_mixed[count.index],
"pre_userdata",
local.workers_group_defaults["pre_userdata"],
)
additional_userdata = lookup(
var.worker_groups_launch_template_mixed[count.index],
"additional_userdata",
local.workers_group_defaults["additional_userdata"],
)
bootstrap_extra_args = lookup(
var.worker_groups_launch_template_mixed[count.index],
"bootstrap_extra_args",
local.workers_group_defaults["bootstrap_extra_args"],
)
kubelet_extra_args = lookup(
var.worker_groups_launch_template_mixed[count.index],
"kubelet_extra_args",
local.workers_group_defaults["kubelet_extra_args"],
)
}
}
data "aws_iam_role" "custom_cluster_iam_role" {
count = var.manage_cluster_iam_resources ? 0 : 1
name = var.cluster_iam_role_name
@@ -200,13 +169,3 @@ data "aws_iam_instance_profile" "custom_worker_group_launch_template_iam_instanc
local.workers_group_defaults["iam_instance_profile_name"],
)
}
data "aws_iam_instance_profile" "custom_worker_group_launch_template_mixed_iam_instance_profile" {
count = var.manage_worker_iam_resources ? 0 : local.worker_group_launch_template_mixed_count
name = lookup(
var.worker_groups_launch_template_mixed[count.index],
"iam_instance_profile_name",
local.workers_group_defaults["iam_instance_profile_name"],
)
}


@@ -0,0 +1 @@
yum update -y


@@ -56,7 +56,7 @@ module "eks" {
subnets = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
worker_groups_launch_template_mixed = [
worker_groups_launch_template = [
{
name = "spot-1"
override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
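
The hunk above shows only the variable rename plus the first two keys of the group. A fuller Spot-only group under the merged variable might look like the following sketch (the sizes, labels, and extra keys are illustrative assumptions, not part of this diff):

```terraform
worker_groups_launch_template = [
  {
    name                    = "spot-1"
    override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
    spot_instance_pools     = 4    # spread across 4 lowest-price Spot pools
    asg_max_size            = 5
    asg_desired_capacity    = 5
    # Label the nodes so workloads can target (or avoid) Spot capacity.
    kubelet_extra_args      = "--node-labels=kubernetes.io/lifecycle=spot"
  },
]
```

Because `override_instance_types` is set, the module renders this group as an ASG with a `mixed_instances_policy` rather than a plain launch template.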


@@ -1,5 +1,12 @@
locals {
asg_tags = null_resource.tags_as_list_of_maps.*.triggers
asg_tags = [
for item in keys(var.tags) :
map(
"key", item,
"value", element(values(var.tags), index(keys(var.tags), item)),
"propagate_at_launch", "true"
)
]
cluster_security_group_id = var.cluster_create_security_group ? aws_security_group.cluster[0].id : var.cluster_security_group_id
cluster_iam_role_name = var.manage_cluster_iam_resources ? aws_iam_role.cluster[0].name : var.cluster_iam_role_name
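
For reference, the new `for` expression converts the flat `var.tags` map into the list-of-maps shape that ASG `tags` arguments expect. A sketch of the behaviour, with an equivalent key/value formulation (the input map and the `locals` rewrite are illustrative, not part of the diff):

```terraform
# Given an illustrative input:
#   var.tags = { Environment = "prod" }
# the expression evaluates to:
#   [{ key = "Environment", value = "prod", propagate_at_launch = "true" }]
#
# An equivalent formulation iterates key/value pairs directly, avoiding the
# element()/index() round-trip over keys() and values():
locals {
  asg_tags = [
    for k, v in var.tags : {
      key                 = k
      value               = v
      propagate_at_launch = "true"
    }
  ]
}
```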
@@ -9,9 +16,8 @@ locals {
default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
kubeconfig_name = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name
worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)
worker_group_launch_template_mixed_count = length(var.worker_groups_launch_template_mixed)
worker_group_count = length(var.worker_groups)
worker_group_launch_template_count = length(var.worker_groups_launch_template)
workers_group_defaults_defaults = {
name = "count.index" # Name of the worker group. Literal count.index will never be used but if name is not set, the count.index interpolation will be used.
@@ -61,7 +67,7 @@ locals {
market_type = null
# Settings for launch templates with mixed instances policy
override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"] # A list of override instance types for mixed instances policy
on_demand_allocation_strategy = "prioritized" # Strategy to use when launching on-demand instances. Valid values: prioritized.
on_demand_allocation_strategy = null # Strategy to use when launching on-demand instances. Valid values: prioritized.
on_demand_base_capacity = "0" # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
on_demand_percentage_above_base_capacity = "0" # Percentage split between on-demand and Spot instances above the base on-demand capacity
spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
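
With `on_demand_allocation_strategy` now defaulting to `null`, a worker group opts into a mixed instances policy simply by setting one of the mixed-policy keys, and the remaining defaults above fill in the rest. For example, a group splitting capacity between on-demand and Spot might look like this sketch (all values illustrative):

```terraform
worker_groups_launch_template = [
  {
    name                                     = "mixed-1"
    override_instance_types                  = ["m5.large", "m5a.large"]
    on_demand_base_capacity                  = 1   # keep at least one on-demand node
    on_demand_percentage_above_base_capacity = 25  # 25% on-demand above the base
    spot_allocation_strategy                 = "lowest-price"
    asg_desired_capacity                     = 4
  },
]
```

Groups that set none of these keys keep the plain launch-template behaviour unchanged.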


@@ -63,7 +63,6 @@ output "workers_asg_arns" {
value = concat(
aws_autoscaling_group.workers.*.arn,
aws_autoscaling_group.workers_launch_template.*.arn,
aws_autoscaling_group.workers_launch_template_mixed.*.arn,
)
}
@@ -72,7 +71,6 @@ output "workers_asg_names" {
value = concat(
aws_autoscaling_group.workers.*.id,
aws_autoscaling_group.workers_launch_template.*.id,
aws_autoscaling_group.workers_launch_template_mixed.*.id,
)
}
@@ -125,7 +123,6 @@ output "worker_iam_role_name" {
aws_iam_role.workers.*.name,
data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_name,
data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_name,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_name,
[""]
)[0]
}
@@ -136,7 +133,6 @@ output "worker_iam_role_arn" {
aws_iam_role.workers.*.arn,
data.aws_iam_instance_profile.custom_worker_group_iam_instance_profile.*.role_arn,
data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile.*.role_arn,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.role_arn,
[""]
)[0]
}


@@ -114,12 +114,6 @@ variable "worker_groups_launch_template" {
default = []
}
variable "worker_groups_launch_template_mixed" {
description = "A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys."
type = any
default = []
}
variable "worker_security_group_id" {
description = "If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster."
type = string


@@ -359,16 +359,6 @@ resource "aws_iam_role_policy_attachment" "workers_additional_policies" {
policy_arn = var.workers_additional_policies[count.index]
}
resource "null_resource" "tags_as_list_of_maps" {
count = length(keys(var.tags))
triggers = {
key = keys(var.tags)[count.index]
value = values(var.tags)[count.index]
propagate_at_launch = "true"
}
}
resource "aws_iam_role_policy_attachment" "workers_autoscaling" {
count = var.manage_worker_iam_resources ? 1 : 0
policy_arn = aws_iam_policy.worker_autoscaling[0].arn


@@ -73,13 +73,81 @@ resource "aws_autoscaling_group" "workers_launch_template" {
local.workers_group_defaults["termination_policies"]
)
launch_template {
id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
dynamic "mixed_instances_policy" {
iterator = item
for_each = (lookup(var.worker_groups_launch_template[count.index], "override_instance_types", null) != null) || (lookup(var.worker_groups_launch_template[count.index], "on_demand_allocation_strategy", null) != null) ? list(var.worker_groups_launch_template[count.index]) : []
content {
instances_distribution {
on_demand_allocation_strategy = lookup(
item.value,
"on_demand_allocation_strategy",
"prioritized",
)
on_demand_base_capacity = lookup(
item.value,
"on_demand_base_capacity",
local.workers_group_defaults["on_demand_base_capacity"],
)
on_demand_percentage_above_base_capacity = lookup(
item.value,
"on_demand_percentage_above_base_capacity",
local.workers_group_defaults["on_demand_percentage_above_base_capacity"],
)
spot_allocation_strategy = lookup(
item.value,
"spot_allocation_strategy",
local.workers_group_defaults["spot_allocation_strategy"],
)
spot_instance_pools = lookup(
item.value,
"spot_instance_pools",
local.workers_group_defaults["spot_instance_pools"],
)
spot_max_price = lookup(
item.value,
"spot_max_price",
local.workers_group_defaults["spot_max_price"],
)
}
launch_template {
launch_template_specification {
launch_template_id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
}
dynamic "override" {
for_each = lookup(
var.worker_groups_launch_template[count.index],
"override_instance_types",
local.workers_group_defaults["override_instance_types"]
)
content {
instance_type = override.value
}
}
}
}
}
dynamic "launch_template" {
iterator = item
for_each = (lookup(var.worker_groups_launch_template[count.index], "override_instance_types", null) != null) || (lookup(var.worker_groups_launch_template[count.index], "on_demand_allocation_strategy", null) != null) ? [] : list(var.worker_groups_launch_template[count.index])
content {
id = aws_launch_template.workers_launch_template.*.id[count.index]
version = lookup(
var.worker_groups_launch_template[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
}
}
dynamic "initial_lifecycle_hook" {

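Taken together, the two dynamic blocks above are mutually exclusive: the same condition gates both, so each worker group renders either a `mixed_instances_policy` or a plain `launch_template`, never both. Pulled out as a named local for readability, the condition might read as follows (a sketch, not code from this PR):

```terraform
locals {
  # One boolean per worker group: true when the group asked for any
  # mixed-policy behaviour, i.e. when either mixed-policy key is set.
  use_mixed_instances_policy = [
    for group in var.worker_groups_launch_template :
    lookup(group, "override_instance_types", null) != null ||
    lookup(group, "on_demand_allocation_strategy", null) != null
  ]
}
```
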

@@ -1,388 +0,0 @@
# Worker Groups using Launch Templates with mixed instances policy
resource "aws_autoscaling_group" "workers_launch_template_mixed" {
count = local.worker_group_launch_template_mixed_count
name_prefix = join(
"-",
compact(
[
aws_eks_cluster.this.name,
lookup(var.worker_groups_launch_template_mixed[count.index], "name", count.index),
lookup(var.worker_groups_launch_template_mixed[count.index], "asg_recreate_on_change", local.workers_group_defaults["asg_recreate_on_change"]) ? random_pet.workers_launch_template_mixed[count.index].id : ""
]
)
)
desired_capacity = lookup(
var.worker_groups_launch_template_mixed[count.index],
"asg_desired_capacity",
local.workers_group_defaults["asg_desired_capacity"],
)
max_size = lookup(
var.worker_groups_launch_template_mixed[count.index],
"asg_max_size",
local.workers_group_defaults["asg_max_size"],
)
min_size = lookup(
var.worker_groups_launch_template_mixed[count.index],
"asg_min_size",
local.workers_group_defaults["asg_min_size"],
)
force_delete = lookup(
var.worker_groups_launch_template_mixed[count.index],
"asg_force_delete",
local.workers_group_defaults["asg_force_delete"],
)
target_group_arns = lookup(
var.worker_groups_launch_template_mixed[count.index],
"target_group_arns",
local.workers_group_defaults["target_group_arns"]
)
service_linked_role_arn = lookup(
var.worker_groups_launch_template_mixed[count.index],
"service_linked_role_arn",
local.workers_group_defaults["service_linked_role_arn"],
)
vpc_zone_identifier = lookup(
var.worker_groups_launch_template_mixed[count.index],
"subnets",
local.workers_group_defaults["subnets"]
)
protect_from_scale_in = lookup(
var.worker_groups_launch_template_mixed[count.index],
"protect_from_scale_in",
local.workers_group_defaults["protect_from_scale_in"],
)
suspended_processes = lookup(
var.worker_groups_launch_template_mixed[count.index],
"suspended_processes",
local.workers_group_defaults["suspended_processes"]
)
enabled_metrics = lookup(
var.worker_groups_launch_template_mixed[count.index],
"enabled_metrics",
local.workers_group_defaults["enabled_metrics"]
)
placement_group = lookup(
var.worker_groups_launch_template_mixed[count.index],
"placement_group",
local.workers_group_defaults["placement_group"],
)
termination_policies = lookup(
var.worker_groups_launch_template_mixed[count.index],
"termination_policies",
local.workers_group_defaults["termination_policies"]
)
mixed_instances_policy {
instances_distribution {
on_demand_allocation_strategy = lookup(
var.worker_groups_launch_template_mixed[count.index],
"on_demand_allocation_strategy",
local.workers_group_defaults["on_demand_allocation_strategy"],
)
on_demand_base_capacity = lookup(
var.worker_groups_launch_template_mixed[count.index],
"on_demand_base_capacity",
local.workers_group_defaults["on_demand_base_capacity"],
)
on_demand_percentage_above_base_capacity = lookup(
var.worker_groups_launch_template_mixed[count.index],
"on_demand_percentage_above_base_capacity",
local.workers_group_defaults["on_demand_percentage_above_base_capacity"],
)
spot_allocation_strategy = lookup(
var.worker_groups_launch_template_mixed[count.index],
"spot_allocation_strategy",
local.workers_group_defaults["spot_allocation_strategy"],
)
spot_instance_pools = lookup(
var.worker_groups_launch_template_mixed[count.index],
"spot_instance_pools",
local.workers_group_defaults["spot_instance_pools"],
)
spot_max_price = lookup(
var.worker_groups_launch_template_mixed[count.index],
"spot_max_price",
local.workers_group_defaults["spot_max_price"],
)
}
launch_template {
launch_template_specification {
launch_template_id = aws_launch_template.workers_launch_template_mixed.*.id[count.index]
version = lookup(
var.worker_groups_launch_template_mixed[count.index],
"launch_template_version",
local.workers_group_defaults["launch_template_version"],
)
}
dynamic "override" {
for_each = lookup(
var.worker_groups_launch_template_mixed[count.index],
"override_instance_types",
local.workers_group_defaults["override_instance_types"]
)
content {
instance_type = override.value
}
}
}
}
dynamic "initial_lifecycle_hook" {
for_each = lookup(var.worker_groups_launch_template_mixed[count.index], "asg_initial_lifecycle_hooks", local.workers_group_defaults["asg_initial_lifecycle_hooks"])
content {
name = lookup(initial_lifecycle_hook.value, "name", null)
default_result = lookup(initial_lifecycle_hook.value, "default_result", null)
heartbeat_timeout = lookup(initial_lifecycle_hook.value, "heartbeat_timeout", null)
lifecycle_transition = lookup(initial_lifecycle_hook.value, "lifecycle_transition", null)
notification_metadata = lookup(initial_lifecycle_hook.value, "notification_metadata", null)
notification_target_arn = lookup(initial_lifecycle_hook.value, "notification_target_arn", null)
role_arn = lookup(initial_lifecycle_hook.value, "role_arn", null)
}
}
tags = concat(
[
{
"key" = "Name"
"value" = "${aws_eks_cluster.this.name}-${lookup(
var.worker_groups_launch_template_mixed[count.index],
"name",
count.index,
)}-eks_asg"
"propagate_at_launch" = true
},
{
"key" = "kubernetes.io/cluster/${aws_eks_cluster.this.name}"
"value" = "owned"
"propagate_at_launch" = true
},
{
"key" = "k8s.io/cluster-autoscaler/${lookup(
var.worker_groups_launch_template_mixed[count.index],
"autoscaling_enabled",
local.workers_group_defaults["autoscaling_enabled"],
) ? "enabled" : "disabled"}"
"value" = "true"
"propagate_at_launch" = false
},
{
"key" = "k8s.io/cluster-autoscaler/${aws_eks_cluster.this.name}"
"value" = aws_eks_cluster.this.name
"propagate_at_launch" = false
},
{
"key" = "k8s.io/cluster-autoscaler/node-template/resources/ephemeral-storage"
"value" = "${lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_volume_size",
local.workers_group_defaults["root_volume_size"],
)}Gi"
"propagate_at_launch" = false
},
],
local.asg_tags,
lookup(
var.worker_groups_launch_template_mixed[count.index],
"tags",
local.workers_group_defaults["tags"]
)
)
lifecycle {
create_before_destroy = true
ignore_changes = [desired_capacity]
}
}
resource "aws_launch_template" "workers_launch_template_mixed" {
count = local.worker_group_launch_template_mixed_count
name_prefix = "${aws_eks_cluster.this.name}-${lookup(
var.worker_groups_launch_template_mixed[count.index],
"name",
count.index,
)}"
network_interfaces {
associate_public_ip_address = lookup(
var.worker_groups_launch_template_mixed[count.index],
"public_ip",
local.workers_group_defaults["public_ip"],
)
delete_on_termination = lookup(
var.worker_groups_launch_template_mixed[count.index],
"eni_delete",
local.workers_group_defaults["eni_delete"],
)
security_groups = flatten([
local.worker_security_group_id,
var.worker_additional_security_group_ids,
lookup(
var.worker_groups_launch_template_mixed[count.index],
"additional_security_group_ids",
local.workers_group_defaults["additional_security_group_ids"]
)
])
}
iam_instance_profile {
name = coalescelist(
aws_iam_instance_profile.workers_launch_template_mixed.*.name,
data.aws_iam_instance_profile.custom_worker_group_launch_template_mixed_iam_instance_profile.*.name,
)[count.index]
}
image_id = lookup(
var.worker_groups_launch_template_mixed[count.index],
"ami_id",
local.workers_group_defaults["ami_id"],
)
instance_type = lookup(
var.worker_groups_launch_template_mixed[count.index],
"instance_type",
local.workers_group_defaults["instance_type"],
)
key_name = lookup(
var.worker_groups_launch_template_mixed[count.index],
"key_name",
local.workers_group_defaults["key_name"],
)
user_data = base64encode(
data.template_file.workers_launch_template_mixed.*.rendered[count.index],
)
ebs_optimized = lookup(
var.worker_groups_launch_template_mixed[count.index],
"ebs_optimized",
lookup(
local.ebs_optimized,
lookup(
var.worker_groups_launch_template_mixed[count.index],
"instance_type",
local.workers_group_defaults["instance_type"],
),
false,
),
)
credit_specification {
cpu_credits = lookup(
var.worker_groups_launch_template_mixed[count.index],
"cpu_credits",
local.workers_group_defaults["cpu_credits"]
)
}
monitoring {
enabled = lookup(
var.worker_groups_launch_template_mixed[count.index],
"enable_monitoring",
local.workers_group_defaults["enable_monitoring"],
)
}
placement {
tenancy = lookup(
var.worker_groups_launch_template_mixed[count.index],
"launch_template_placement_tenancy",
local.workers_group_defaults["launch_template_placement_tenancy"],
)
group_name = lookup(
var.worker_groups_launch_template_mixed[count.index],
"launch_template_placement_group",
local.workers_group_defaults["launch_template_placement_group"],
)
}
block_device_mappings {
device_name = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_block_device_name",
local.workers_group_defaults["root_block_device_name"],
)
ebs {
volume_size = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_volume_size",
local.workers_group_defaults["root_volume_size"],
)
volume_type = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_volume_type",
local.workers_group_defaults["root_volume_type"],
)
iops = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_iops",
local.workers_group_defaults["root_iops"],
)
encrypted = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_encrypted",
local.workers_group_defaults["root_encrypted"],
)
kms_key_id = lookup(
var.worker_groups_launch_template_mixed[count.index],
"root_kms_key_id",
local.workers_group_defaults["root_kms_key_id"],
)
delete_on_termination = true
}
}
tag_specifications {
resource_type = "volume"
tags = merge(
{
"Name" = "${aws_eks_cluster.this.name}-${lookup(
var.worker_groups_launch_template_mixed[count.index],
"name",
count.index,
)}-eks_asg"
},
var.tags,
)
}
tags = var.tags
lifecycle {
create_before_destroy = true
}
}
resource "random_pet" "workers_launch_template_mixed" {
count = local.worker_group_launch_template_mixed_count
separator = "-"
length = 2
keepers = {
lt_name = join(
"-",
compact(
[
aws_launch_template.workers_launch_template_mixed[count.index].name,
aws_launch_template.workers_launch_template_mixed[count.index].latest_version
]
)
)
}
}
resource "aws_iam_instance_profile" "workers_launch_template_mixed" {
count = var.manage_worker_iam_resources ? local.worker_group_launch_template_mixed_count : 0
name_prefix = aws_eks_cluster.this.name
role = lookup(
var.worker_groups_launch_template_mixed[count.index],
"iam_role_id",
local.default_iam_role_id,
)
path = var.iam_path
}