Node groups submodule (#650)

* WIP Move node_groups to a submodule

* Split the old node_groups file up

* Start moving locals

* Simplify IAM creation logic

* depends_on from the TF docs
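Presumably the pattern from the aws_eks_node_group documentation, where the
node group waits on its IAM policy attachments (resource names here are
illustrative):

    resource "aws_eks_node_group" "workers" {
      # cluster_name, node_role_arn, subnet_ids, scaling_config, ...

      # Ensure that IAM permissions are created before and deleted after
      # the node group, so EKS can manage instances for its whole lifetime.
      depends_on = [
        aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy,
        aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy,
        aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly,
      ]
    }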

* Wire in the variables

* Call module from parent

* Allow customizing the role name, as per workers

* aws_auth ConfigMap for node_groups

* Get the managed_node_groups example to plan

* Get the basic example to plan too

* create_eks = false works

"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.
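A minimal repro (not code from this PR) of what does and doesn't unify:

    locals {
      # Errors: two objects with different attribute sets are distinct types.
      broken = var.create_eks ? { a = "x" } : { a = "x", b = "y" }

      # Fine: set(string) and an empty set unify.
      ok = var.create_eks ? toset(["a"]) : toset([])
    }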

* Update Changelog

* Update README

* Wire in node_groups_defaults

* Remove node_groups from workers_defaults_defaults

* Synchronize random and node_group defaults

* Error: "name_prefix" cannot be longer than 32 characters
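The usual remedy (a sketch with hypothetical names, not necessarily the
exact change made here) is trimming the generated prefix:

    resource "aws_iam_role" "node_groups" {
      # Stay within the provider's length limit for name_prefix.
      name_prefix = substr("${var.cluster_name}-node-groups", 0, 32)
    }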

* Update READMEs again

* Fix double destroy

Was producing index errors when running destroy on an empty state.
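The guard is the same pattern as default_iam_role_id in the diff below:
pad the list before indexing so element 0 always exists.

    locals {
      # Safe even when no role was created, e.g. destroy on an empty state.
      default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
    }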

* Remove duplicate iam_role in node_group

I think this logic works. Needs some testing with an externally created
role.

* Fix index fail if node group manually deleted

* Keep aws_auth template in top module

Downside: count causes issues as usual. distinct() can't be used in the
child module, so there's a template render for every node_group even if
only one role is really in use. Hopefully that's just output noise
rather than a technical issue.
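Roughly the shape of the limitation (resource and output names here are
hypothetical): count can't be computed from values only known after
apply, so distinct() over the child module's role ARNs can't feed it,
and one render happens per node group instead:

    data "template_file" "node_group_arns" {
      count    = var.create_eks ? length(var.node_groups) : 0
      template = file("${path.module}/templates/worker-role.tpl")

      vars = {
        # Duplicates render whenever several groups share one IAM role.
        worker_role_arn = module.node_groups.iam_role_arns[count.index]
      }
    }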

* Hack to have node_groups depend on aws_auth etc

The AWS Managed Node Groups create or edit the aws-auth ConfigMap so
that nodes can join the cluster. This breaks the kubernetes ConfigMap
resource, which cannot do a force create. Remove the race condition
with an explicit dependency.

The IAM role can no longer be pulled out of the node_group.
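Terraform 0.12 modules don't take depends_on, so the hack is an implicit
dependency: route a value the node groups already need through an
expression that also references the ConfigMap. A hypothetical sketch:

    module "node_groups" {
      source = "./modules/node_groups"

      # element(concat(...), 0) still yields the cluster name, but the
      # reference to the ConfigMap forces aws-auth to be created first.
      cluster_name = element(
        concat([aws_eks_cluster.this[0].name], kubernetes_config_map.aws_auth[*].id),
        0,
      )
    }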

* Pull variables via the random_pet to cut logic

No point having the same logic in two different places

* Pass all ForceNew variables through the pet
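A sketch of the pattern (keeper fields illustrative): every ForceNew
attribute lives in the pet's keepers, the pet's id goes into the node
group name, and the node group reads the values back out of the keepers
so the two resources can never disagree.

    resource "random_pet" "node_groups" {
      for_each = local.node_groups

      separator = "-"

      keepers = {
        # Changing any of these regenerates the pet, which renames and
        # thereby replaces the node group exactly once.
        instance_type = each.value["instance_type"]
        iam_role_arn  = each.value["iam_role_arn"]
      }
    }

    resource "aws_eks_node_group" "workers" {
      for_each = local.node_groups

      node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])
      instance_types  = [random_pet.node_groups[each.key].keepers.instance_type]
      # ...
    }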

* Do a deep merge of NG labels and tags
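Map-by-map merges (a sketch), so per-group labels and tags extend the
defaults instead of replacing them wholesale:

    k8s_labels = merge(
      lookup(var.node_groups_defaults, "k8s_labels", {}),
      lookup(each.value, "k8s_labels", {}),
    )

    tags = merge(
      lookup(var.node_groups_defaults, "additional_tags", {}),
      lookup(each.value, "additional_tags", {}),
    )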

* Update README.. again

* Additional managed node outputs #644

Add the change from @TBeijin's PR #644.

* Remove unused local

* Use more for_each
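for_each keys everything by the names in var.node_groups, so adding or
removing one group no longer renumbers the rest the way count indices
do. A sketch of the keying (the map construction is the one removed
from the top-level locals in the diff below):

    locals {
      node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }
    }

    resource "aws_eks_node_group" "workers" {
      for_each = local.node_groups

      cluster_name    = var.cluster_name
      node_group_name = each.key
      node_role_arn   = each.value["iam_role_arn"]
      subnet_ids      = each.value["subnets"]

      scaling_config {
        desired_size = each.value["desired_capacity"]
        max_size     = each.value["max_capacity"]
        min_size     = each.value["min_capacity"]
      }
    }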

* Remove the change when create_eks = false

* Make documentation less confusing

* Make node_group version user configurable

* Pass through raw output from aws_eks_node_groups
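With for_each, the whole resource map can be exposed as-is and indexed
by the var.node_groups keys, instead of hand-picking attributes:

    output "node_groups" {
      description = "Outputs from the aws_eks_node_group resources, keyed by node group name"
      value       = aws_eks_node_group.workers
    }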

* Merge workers defaults in the locals

This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.
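Along these lines (local name illustrative), so both resources read one
pre-merged map:

    locals {
      node_groups_expanded = {
        for name, group in var.node_groups : name => merge(
          local.workers_group_defaults,
          var.node_groups_defaults,
          group,
        )
      }
    }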

* Fix typo

Co-authored-by: Max Williams <max.williams@deliveryhero.com>

Author:       Daniel Piddock
Date:         2020-01-09 12:53:08 +01:00
Committed by: Max Williams
Parent:       d79c8ab6f2
Commit:       11147e9af3

15 changed files with 253 additions and 149 deletions


@@ -16,9 +16,8 @@ locals {
   default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
   kubeconfig_name     = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name
-  worker_group_count                    = length(var.worker_groups)
-  worker_group_launch_template_count    = length(var.worker_groups_launch_template)
-  worker_group_managed_node_group_count = length(var.node_groups)
+  worker_group_count                 = length(var.worker_groups)
+  worker_group_launch_template_count = length(var.worker_groups_launch_template)
 
   default_ami_id_linux   = data.aws_ami.eks_worker.id
   default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
     spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
     spot_instance_pools      = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
     spot_max_price           = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
-    ami_type                    = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
-    ami_release_version         = "" # AMI Release Version of the Managed Node Groups
-    source_security_group_id    = [] # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENNED TO THE WORLD
-    node_group_k8s_labels       = {} # Kubernetes Labels to apply to the nodes within the Managed Node Group
-    node_group_desired_capacity = 1 # Desired capacity of the Node Group
-    node_group_min_capacity     = 1 # Min capacity of the Node Group (Minimum value allowed is 1)
-    node_group_max_capacity     = 3 # Max capacity of the Node Group
-    node_group_iam_role_arn     = "" # IAM role to use for Managed Node Groups instead of default one created by the automation
-    node_group_additional_tags  = {} # Additional tags to be applied to the Node Groups
   }
 
   workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
     "t2.small",
     "t2.xlarge"
   ]
-
-  node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }
-
 }