Node groups submodule (#650)

* WIP Move node_groups to a submodule

* Split the old node_groups file up

* Start moving locals

* Simplify IAM creation logic

* depends_on from the TF docs

* Wire in the variables

* Call module from parent

* Allow customizing the role name, as per workers

* aws_auth ConfigMap for node_groups

* Get the managed_node_groups example to plan

* Get the basic example to plan too

* create_eks = false works

"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Not the most helpful message. Apparently set(string) and set() are
considered compatible, but everything else needs more complicated handling.
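
Terraform raises this type check when the two arms of a conditional are objects with different attribute sets, which is easy to hit when switching a `for_each` map off with `var.create_eks ? ... : {}`. The submodule ends up avoiding the conditional entirely by filtering inside the `for` expression; a minimal sketch, abridged from the submodule's locals shown further down:

```hcl
locals {
  # Empty map when create_eks = false, without a ?: whose two arms
  # would have to share exactly the same object type.
  node_groups_expanded = {
    for k, v in var.node_groups : k => merge(var.node_groups_defaults, v)
    if var.create_eks
  }
}
```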

* Update Changelog

* Update README

* Wire in node_groups_defaults

* Remove node_groups from workers_defaults_defaults

* Synchronize random and node_group defaults

* Error: "name_prefix" cannot be longer than 32

* Update READMEs again

* Fix double destroy

Was producing index errors when running destroy on an empty state.
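
The index errors on an empty state come from indexing resources whose list is empty, e.g. `aws_iam_role.workers[0]` after everything has been destroyed. The guard pattern that appears throughout the parent module in this diff is to pad the splat so `[0]` always resolves:

```hcl
locals {
  # aws_iam_role.workers has count = 0 on an empty state or when
  # create_eks = false; padding with "" keeps the [0] index valid.
  default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
}
```

`coalescelist(..., [""])[0]` is used the same way when wiring up the node_groups module call later in the diff.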

* Remove duplicate iam_role in node_group

I think this logic works. Needs some testing with an externally created
role.

* Fix index fail if node group manually deleted

* Keep aws_auth template in top module

Downside: count causes issues as usual. distinct() can't be used in the
child module, so there is a template render for every node_group even if
only one role is really in use. Hopefully this is just output noise
rather than a technical issue.
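
For reference, the parent's aws-auth wiring reduces to one `template_file` per entry of the submodule's `aws_auth_roles` output (see the aws_auth.tf hunk below), which is where the duplicate renders come from:

```hcl
data "template_file" "node_group_arns" {
  count    = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
  template = file("${path.module}/templates/worker-role.tpl")
  vars     = module.node_groups.aws_auth_roles[count.index]
}
```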

* Hack to have node_groups depend on aws_auth etc

The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes
can join the cluster. This breaks the kubernetes resource, which cannot
do a force create. Remove the race condition with an explicit dependency.

Can't pull the IAM role out of the node_group any more.
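
The mechanism, abridged from the rewritten parent node_groups.tf below, is a `null_data_source` whose inputs reference the aws-auth ConfigMap and the workers' IAM policy attachments; the module call reads `cluster_name` through that data source, so the node groups implicitly wait for those resources:

```hcl
data "null_data_source" "node_groups" {
  count = var.create_eks ? 1 : 0

  inputs = {
    cluster_name = var.cluster_name

    # Referencing these resources "locks" the data source until they exist.
    aws_auth        = coalescelist(kubernetes_config_map.aws_auth[*].id, [""])[0]
    role_NodePolicy = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[*].id, [""])[0]
  }
}

module "node_groups" {
  source       = "./modules/node_groups"
  create_eks   = var.create_eks
  cluster_name = coalescelist(data.null_data_source.node_groups[*].outputs["cluster_name"], [""])[0]
  # ... remaining inputs elided
}
```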

* Pull variables via the random_pet to cut logic

No point having the same logic in two different places

* Pass all ForceNew variables through the pet
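
Abridged from the submodule's random_pet resource shown in full below: the arguments of `aws_eks_node_group` that force a new resource are routed through `keepers`, so changing any of them rotates the pet name and, with it, the node group name:

```hcl
resource "random_pet" "node_groups" {
  for_each = local.node_groups_expanded

  separator = "-"
  length    = 2

  keepers = {
    # ForceNew arguments of aws_eks_node_group; changing any of these
    # rotates the pet and therefore the node_group_name.
    ami_type      = lookup(each.value, "ami_type", null)
    disk_size     = lookup(each.value, "disk_size", null)
    instance_type = each.value["instance_type"]
    iam_role_arn  = each.value["iam_role_arn"]
    subnet_ids    = join("|", each.value["subnets"])

    node_group_name = join("-", [var.cluster_name, each.key])
  }
}
```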

* Do a deep merge of NG labels and tags
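
The merge happens key by key inside the `aws_eks_node_group` resource (shown in full in the submodule below), so a group-level `k8s_labels` adds to the defaults instead of replacing them, and tags additionally pick up the module-wide `var.tags`:

```hcl
  labels = merge(
    lookup(var.node_groups_defaults, "k8s_labels", {}),
    lookup(var.node_groups[each.key], "k8s_labels", {})
  )

  tags = merge(
    var.tags,
    lookup(var.node_groups_defaults, "additional_tags", {}),
    lookup(var.node_groups[each.key], "additional_tags", {}),
  )
```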

* Update README.. again

* Additional managed node outputs #644

Add change by @TBeijen from PR #644

* Remove unused local

* Use more for_each

* Remove the change when create_eks = false

* Make documentation less confusing

* node_group version user configurable

* Pass through raw output from aws_eks_node_groups

* Merge workers defaults in the locals

This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.
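
The resulting local (the submodule's locals.tf in the diff below) folds the parent's `workers_group_defaults`, then `node_groups_defaults`, then the per-group map into a single `node_groups_expanded`, so `random_pet` and `aws_eks_node_group` can index plain values instead of repeating lookup chains:

```hcl
locals {
  node_groups_expanded = { for k, v in var.node_groups : k => merge(
    {
      desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
      iam_role_arn     = var.default_iam_role_arn
      instance_type    = var.workers_group_defaults["instance_type"]
      key_name         = var.workers_group_defaults["key_name"]
      max_capacity     = var.workers_group_defaults["asg_max_size"]
      min_capacity     = var.workers_group_defaults["asg_min_size"]
      subnets          = var.workers_group_defaults["subnets"]
    },
    var.node_groups_defaults,
    v,
  ) if var.create_eks }
}
```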

* Fix typo

Co-authored-by: Max Williams <max.williams@deliveryhero.com>
Authored by Daniel Piddock on 2020-01-09 12:53:08 +01:00; committed by Max Williams
parent d79c8ab6f2
commit 11147e9af3
15 changed files with 253 additions and 149 deletions


@@ -26,6 +26,8 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Adding node group iam role arns to outputs. (by @mukgupta)
 - Added the OIDC Provider ARN to outputs. (by @eytanhanig)
 - **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group either provide one or create new one. See Important notes below for upgrade notes (by @ryanooi)
+- Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
+- Add complex output `node_groups` (by @TBeijen)

 #### Important notes


@@ -181,7 +181,8 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no |
 | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
 | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
-| node\_groups | A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys. | any | `[]` | no |
+| node\_groups | Map of map of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no |
+| node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentaton for more details | any | `{}` | no |
 | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no |
 | subnets | A list of subnets to place the EKS cluster and workers within. | list(string) | n/a | yes |
 | tags | A map of tags to add to all resources. | map(string) | `{}` | no |
@@ -218,7 +219,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. |
 | kubeconfig | kubectl config file contents for this EKS cluster. |
 | kubeconfig\_filename | The filename of the generated kubectl config. |
-| node\_groups\_iam\_role\_arns | IAM role ARNs for EKS node groups |
+| node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys |
 | oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. |
 | worker\_autoscaling\_policy\_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |
 | worker\_autoscaling\_policy\_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |


@@ -43,13 +43,10 @@ data "template_file" "worker_role_arns" {
 }

 data "template_file" "node_group_arns" {
-  count    = var.create_eks ? local.worker_group_managed_node_group_count : 0
+  count    = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
   template = file("${path.module}/templates/worker-role.tpl")
-  vars = {
-    worker_role_arn = lookup(var.node_groups[count.index], "iam_role_arn", aws_iam_role.node_groups[0].arn)
-    platform        = "linux" # Hardcoded because the EKS API currently only supports linux for managed node groups
-  }
+  vars     = module.node_groups.aws_auth_roles[count.index]
 }

 resource "kubernetes_config_map" "aws_auth" {


@@ -92,27 +92,29 @@ module "eks" {
   vpc_id = module.vpc.vpc_id

-  node_groups = [
-    {
-      name = "example"
-
-      node_group_desired_capacity = 1
-      node_group_max_capacity     = 10
-      node_group_min_capacity     = 1
-
-      instance_type = "m5.large"
-      node_group_k8s_labels = {
-        Environment = "test"
-        GithubRepo  = "terraform-aws-eks"
-        GithubOrg   = "terraform-aws-modules"
-      }
-      node_group_additional_tags = {
-        Environment = "test"
-        GithubRepo  = "terraform-aws-eks"
-        GithubOrg   = "terraform-aws-modules"
-      }
-    }
-  ]
+  node_groups_defaults = {
+    ami_type  = "AL2_x86_64"
+    disk_size = 50
+  }
+
+  node_groups = {
+    example = {
+      desired_capacity = 1
+      max_capacity     = 10
+      min_capacity     = 1
+
+      instance_type = "m5.large"
+      k8s_labels = {
+        Environment = "test"
+        GithubRepo  = "terraform-aws-eks"
+        GithubOrg   = "terraform-aws-modules"
+      }
+      additional_tags = {
+        ExtraTag = "example"
+      }
+    }
+    defaults = {}
+  }

   map_roles = var.map_roles
   map_users = var.map_users


@@ -23,3 +23,7 @@ output "region" {
   value = var.region
 }
+
+output "node_groups" {
+  description = "Outputs from node groups"
+  value       = module.eks.node_groups
+}


@@ -16,9 +16,8 @@ locals {
   default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
   kubeconfig_name     = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name

   worker_group_count                 = length(var.worker_groups)
   worker_group_launch_template_count = length(var.worker_groups_launch_template)
-  worker_group_managed_node_group_count = length(var.node_groups)

   default_ami_id_linux   = data.aws_ami.eks_worker.id
   default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
     spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
     spot_instance_pools      = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
     spot_max_price           = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
-    ami_type                     = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
-    ami_release_version          = ""           # AMI Release Version of the Managed Node Groups
-    source_security_group_id     = []           # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENNED TO THE WORLD
-    node_group_k8s_labels        = {}           # Kubernetes Labels to apply to the nodes within the Managed Node Group
-    node_group_desired_capacity  = 1            # Desired capacity of the Node Group
-    node_group_min_capacity      = 1            # Min capacity of the Node Group (Minimum value allowed is 1)
-    node_group_max_capacity      = 3            # Max capacity of the Node Group
-    node_group_iam_role_arn      = ""           # IAM role to use for Managed Node Groups instead of default one created by the automation
-    node_group_additional_tags   = {}           # Additional tags to be applied to the Node Groups
   }

   workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
     "t2.small",
     "t2.xlarge"
   ]
-
-  node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }
 }


@@ -0,0 +1,55 @@
# eks `node_groups` submodule
Helper submodule to create and manage resources related to `eks_node_groups`.
## Assumptions
* Designed for use by the parent module and not directly by end users
## Node Groups' IAM Role
The role ARN specified in `var.default_iam_role_arn` will be used by default. In a simple configuration this will be the worker role created by the parent module.
`iam_role_arn` must be specified in either `var.node_groups_defaults` or `var.node_groups` if the default parent IAM role is not being created for whatever reason, for example if `manage_worker_iam_resources` is set to false in the parent.
## `node_groups` and `node_groups_defaults` keys
`node_groups_defaults` is a map that can take the below keys. Values will be used if not specified in individual node groups.
`node_groups` is a map of maps. Key of first level will be used as unique value for `for_each` resources and in the `aws_eks_node_group` name. Inner map can take the below values.
| Name | Description | Type | If unset |
|------|-------------|:----:|:-----:|
| additional\_tags | Additional tags to apply to node group | map(string) | Only `var.tags` applied |
| ami\_release\_version | AMI version of workers | string | Provider default behavior |
| ami\_type | AMI Type. See Terraform or AWS docs | string | Provider default behavior |
| desired\_capacity | Desired number of workers | number | `var.workers_group_defaults[asg_desired_capacity]` |
| disk\_size | Workers' disk size | number | Provider default behavior |
| iam\_role\_arn | IAM role ARN for workers | string | `var.default_iam_role_arn` |
| instance\_type | Workers' instance type | string | `var.workers_group_defaults[instance_type]` |
| k8s\_labels | Kubernetes labels | map(string) | No labels applied |
| key\_name | Key name for workers. Set to empty string to disable remote access | string | `var.workers_group_defaults[key_name]` |
| max\_capacity | Max number of workers | number | `var.workers_group_defaults[asg_max_size]` |
| min\_capacity | Min number of workers | number | `var.workers_group_defaults[asg_min_size]` |
| source\_security\_group\_ids | Source security groups for remote access to workers | list(string) | If key\_name is specified: THE REMOTE ACCESS WILL BE OPENED TO THE WORLD |
| subnets | Subnets to contain workers | list(string) | `var.workers_group_defaults[subnets]` |
| version | Kubernetes version | string | Provider default behavior |
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs
| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| cluster\_name | Name of parent cluster | string | n/a | yes |
| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no |
| default\_iam\_role\_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes |
| node\_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no |
| node\_groups\_defaults | map of maps of node groups to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes |
| tags | A map of tags to add to all resources | map(string) | n/a | yes |
| workers\_group\_defaults | Workers group defaults from parent | any | n/a | yes |
## Outputs
| Name | Description |
|------|-------------|
| aws\_auth\_roles | Roles for use in aws-auth ConfigMap |
| node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->


@@ -0,0 +1,16 @@
locals {
  # Merge defaults and per-group values to make code cleaner
  node_groups_expanded = { for k, v in var.node_groups : k => merge(
    {
      desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
      iam_role_arn     = var.default_iam_role_arn
      instance_type    = var.workers_group_defaults["instance_type"]
      key_name         = var.workers_group_defaults["key_name"]
      max_capacity     = var.workers_group_defaults["asg_max_size"]
      min_capacity     = var.workers_group_defaults["asg_min_size"]
      subnets          = var.workers_group_defaults["subnets"]
    },
    var.node_groups_defaults,
    v,
  ) if var.create_eks }
}


@@ -0,0 +1,49 @@
resource "aws_eks_node_group" "workers" {
  for_each = local.node_groups_expanded

  node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])

  cluster_name  = var.cluster_name
  node_role_arn = each.value["iam_role_arn"]
  subnet_ids    = each.value["subnets"]

  scaling_config {
    desired_size = each.value["desired_capacity"]
    max_size     = each.value["max_capacity"]
    min_size     = each.value["min_capacity"]
  }

  ami_type        = lookup(each.value, "ami_type", null)
  disk_size       = lookup(each.value, "disk_size", null)
  instance_types  = [each.value["instance_type"]]
  release_version = lookup(each.value, "ami_release_version", null)

  dynamic "remote_access" {
    for_each = each.value["key_name"] != "" ? [{
      ec2_ssh_key               = each.value["key_name"]
      source_security_group_ids = lookup(each.value, "source_security_group_ids", [])
    }] : []

    content {
      ec2_ssh_key               = remote_access.value["ec2_ssh_key"]
      source_security_group_ids = remote_access.value["source_security_group_ids"]
    }
  }

  version = lookup(each.value, "version", null)

  labels = merge(
    lookup(var.node_groups_defaults, "k8s_labels", {}),
    lookup(var.node_groups[each.key], "k8s_labels", {})
  )

  tags = merge(
    var.tags,
    lookup(var.node_groups_defaults, "additional_tags", {}),
    lookup(var.node_groups[each.key], "additional_tags", {}),
  )

  lifecycle {
    create_before_destroy = true
  }
}


@@ -0,0 +1,14 @@
output "node_groups" {
  description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values"
  value       = aws_eks_node_group.workers
}

output "aws_auth_roles" {
  description = "Roles for use in aws-auth ConfigMap"
  value = [
    for k, v in local.node_groups_expanded : {
      worker_role_arn = lookup(v, "iam_role_arn", var.default_iam_role_arn)
      platform        = "linux"
    }
  ]
}


@@ -0,0 +1,21 @@
resource "random_pet" "node_groups" {
  for_each = local.node_groups_expanded

  separator = "-"
  length    = 2

  keepers = {
    ami_type  = lookup(each.value, "ami_type", null)
    disk_size = lookup(each.value, "disk_size", null)

    instance_type = each.value["instance_type"]
    iam_role_arn  = each.value["iam_role_arn"]

    key_name = each.value["key_name"]
    source_security_group_ids = join("|", compact(
      lookup(each.value, "source_security_group_ids", [])
    ))

    subnet_ids = join("|", each.value["subnets"])

    node_group_name = join("-", [var.cluster_name, each.key])
  }
}


@@ -0,0 +1,36 @@
variable "create_eks" {
  description = "Controls if EKS resources should be created (it affects almost all resources)"
  type        = bool
  default     = true
}

variable "cluster_name" {
  description = "Name of parent cluster"
  type        = string
}

variable "default_iam_role_arn" {
  description = "ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults`"
  type        = string
}

variable "workers_group_defaults" {
  description = "Workers group defaults from parent"
  type        = any
}

variable "tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
}

variable "node_groups_defaults" {
  description = "map of maps of node groups to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
  type        = any
}

variable "node_groups" {
  description = "Map of maps of `eks_node_groups` to create. See \"`node_groups` and `node_groups_defaults` keys\" section in README.md for more details"
  type        = any
  default     = {}
}


@@ -1,112 +1,29 @@
-resource "aws_iam_role" "node_groups" {
-  count                 = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  name                  = "${var.workers_role_name != "" ? var.workers_role_name : aws_eks_cluster.this[0].name}-managed-node-groups"
-  assume_role_policy    = data.aws_iam_policy_document.workers_assume_role_policy.json
-  permissions_boundary  = var.permissions_boundary
-  path                  = var.iam_path
-  force_detach_policies = true
-  tags                  = var.tags
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEKSWorkerNodePolicy" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEKS_CNI_Policy" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_AmazonEC2ContainerRegistryReadOnly" {
-  count      = var.create_eks && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_additional_policies" {
-  for_each   = var.create_eks && local.worker_group_managed_node_group_count > 0 ? toset(var.workers_additional_policies) : []
-  role       = aws_iam_role.node_groups[0].name
-  policy_arn = each.key
-}
-
-resource "aws_iam_role_policy_attachment" "node_groups_autoscaling" {
-  count      = var.create_eks && var.manage_worker_autoscaling_policy && var.attach_worker_autoscaling_policy && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  policy_arn = aws_iam_policy.node_groups_autoscaling[0].arn
-  role       = aws_iam_role.node_groups[0].name
-}
-
-resource "aws_iam_policy" "node_groups_autoscaling" {
-  count       = var.create_eks && var.manage_worker_autoscaling_policy && local.worker_group_managed_node_group_count > 0 ? 1 : 0
-  name_prefix = "eks-worker-autoscaling-${aws_eks_cluster.this[0].name}"
-  description = "EKS worker node autoscaling policy for cluster ${aws_eks_cluster.this[0].name}"
-  policy      = data.aws_iam_policy_document.worker_autoscaling[0].json
-  path        = var.iam_path
-}
-
-resource "random_pet" "node_groups" {
-  for_each = var.create_eks ? local.node_groups : {}
-
-  separator = "-"
-  length    = 2
-
-  keepers = {
-    instance_type = lookup(each.value, "instance_type", local.workers_group_defaults["instance_type"])
-    ec2_ssh_key   = lookup(each.value, "key_name", local.workers_group_defaults["key_name"])
-
-    source_security_group_ids = join("-", compact(
-      lookup(each.value, "source_security_group_ids", local.workers_group_defaults["source_security_group_id"]
-    )))
-
-    node_group_name = join("-", [var.cluster_name, each.value["name"]])
-  }
-}
-
-resource "aws_eks_node_group" "workers" {
-  for_each = var.create_eks ? local.node_groups : {}
-
-  node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])
-
-  cluster_name  = var.cluster_name
-  node_role_arn = lookup(each.value, "iam_role_arn", aws_iam_role.node_groups[0].arn)
-  subnet_ids    = lookup(each.value, "subnets", local.workers_group_defaults["subnets"])
-
-  scaling_config {
-    desired_size = lookup(each.value, "node_group_desired_capacity", local.workers_group_defaults["asg_desired_capacity"])
-    max_size     = lookup(each.value, "node_group_max_capacity", local.workers_group_defaults["asg_max_size"])
-    min_size     = lookup(each.value, "node_group_min_capacity", local.workers_group_defaults["asg_min_size"])
-  }
-
-  ami_type        = lookup(each.value, "ami_type", null)
-  disk_size       = lookup(each.value, "root_volume_size", null)
-  instance_types  = [lookup(each.value, "instance_type", null)]
-  labels          = lookup(each.value, "node_group_k8s_labels", null)
-  release_version = lookup(each.value, "ami_release_version", null)
-
-  dynamic "remote_access" {
-    for_each = [
-      for node_group in [each.value] : {
-        ec2_ssh_key               = node_group["key_name"]
-        source_security_group_ids = lookup(node_group, "source_security_group_ids", [])
-      }
-      if lookup(node_group, "key_name", "") != ""
-    ]
-
-    content {
-      ec2_ssh_key               = remote_access.value["ec2_ssh_key"]
-      source_security_group_ids = remote_access.value["source_security_group_ids"]
-    }
-  }
-
-  version = aws_eks_cluster.this[0].version
-
-  tags = lookup(each.value, "node_group_additional_tags", null)
-
-  lifecycle {
-    create_before_destroy = true
-  }
-}
+# Hack to ensure ordering of resource creation. Do not create node_groups
+# before other resources are ready. Removes race conditions
+data "null_data_source" "node_groups" {
+  count = var.create_eks ? 1 : 0
+
+  inputs = {
+    cluster_name = var.cluster_name
+
+    # Ensure these resources are created before "unlocking" the data source.
+    # `depends_on` causes a refresh on every run so is useless here.
+    # [Re]creating or removing these resources will trigger recreation of Node Group resources
+    aws_auth         = coalescelist(kubernetes_config_map.aws_auth[*].id, [""])[0]
+    role_NodePolicy  = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy[*].id, [""])[0]
+    role_CNI_Policy  = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy[*].id, [""])[0]
+    role_Container   = coalescelist(aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly[*].id, [""])[0]
+    role_autoscaling = coalescelist(aws_iam_role_policy_attachment.workers_autoscaling[*].id, [""])[0]
+  }
+}
+
+module "node_groups" {
+  source                 = "./modules/node_groups"
+  create_eks             = var.create_eks
+  cluster_name           = coalescelist(data.null_data_source.node_groups[*].outputs["cluster_name"], [""])[0]
+  default_iam_role_arn   = coalescelist(aws_iam_role.workers[*].arn, [""])[0]
+  workers_group_defaults = local.workers_group_defaults
+  tags                   = var.tags
+  node_groups_defaults   = var.node_groups_defaults
+  node_groups            = var.node_groups
+}


@@ -163,10 +163,7 @@ output "worker_autoscaling_policy_arn" {
   value = concat(aws_iam_policy.worker_autoscaling[*].arn, [""])[0]
 }

-output "node_groups_iam_role_arns" {
-  description = "IAM role ARNs for EKS node groups"
-  value = {
-    for node_group in aws_eks_node_group.workers :
-    node_group.node_group_name => node_group.node_role_arn
-  }
+output "node_groups" {
+  description = "Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys"
+  value       = module.node_groups.node_groups
 }


@@ -282,10 +282,16 @@ variable "create_eks" {
   default     = true
 }

-variable "node_groups" {
-  description = "A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys."
+variable "node_groups_defaults" {
+  description = "Map of values to be applied to all node groups. See `node_groups` module's documentaton for more details"
   type        = any
-  default     = []
+  default     = {}
+}
+
+variable "node_groups" {
+  description = "Map of map of node groups to create. See `node_groups` module's documentation for more details"
+  type        = any
+  default     = {}
 }

 variable "enable_irsa" {