NOTES: Tags that are passed into `var.worker_groups_launch_template` or `var.worker_groups` now override tags passed in via `var.tags` for Autoscaling Groups only. This allows ASG tags to be overridden, so that `propagate_at_launch` can be tweaked for a particular key.
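As an illustration, the sketch below assumes a module instance named `eks` and an arbitrary `Environment` tag; the worker-group entry overrides the ASG tag that `var.tags` would otherwise create, so `propagate_at_launch` can be switched off for that key. All names and values are placeholders.

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster_name, subnets, vpc_id and other required arguments ...

  # Applied to every resource, and by default propagated to ASG instances at launch.
  tags = {
    Environment = "test"
  }

  worker_groups_launch_template = [
    {
      name          = "workers"
      instance_type = "t3.medium"

      # Same key as in var.tags: this entry now wins for the ASG, so the tag
      # stays on the group but is no longer propagated to launched instances.
      tags = [
        {
          key                 = "Environment"
          value               = "test"
          propagate_at_launch = false
        },
      ]
    },
  ]
}
```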
NOTES: The output `cloudwatch_log_group_name` was incorrectly returning the log group name as a list of strings. As a workaround, people were using `module.eks_cluster.cloudwatch_log_group_name[0]`, but that was inconsistent with the output name. Those users can now use `module.eks_cluster.cloudwatch_log_group_name` directly.
NOTES: Keep in mind that changing the order of worker groups is a destructive operation: all worker groups are destroyed and recreated. If you want to do this safely, you should move them in state with `terraform state mv` until we manage worker groups as maps.
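For example, if a worker group moved from index 1 to index 0 of the list, the state could be updated along these lines. The resource addresses are illustrative and depend on how the module is instantiated; check `terraform state list` for the real ones.

```sh
# Hypothetical addresses for a module instance named "eks".
terraform state mv 'module.eks.aws_autoscaling_group.workers[1]' 'module.eks.aws_autoscaling_group.workers[0]'
terraform state mv 'module.eks.aws_launch_configuration.workers[1]' 'module.eks.aws_launch_configuration.workers[0]'
```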
NOTE: Using a customer managed policy, rather than an inline policy, for the `cluster_elb_sl_role_creation` policy accommodates "enterprise" AWS users who disallow inline policies with an SCP rule for auditing-related reasons; it accomplishes the same thing.
Additional support for Terraform v0.13 and AWS provider v3!
- The update to the vpc module in the examples was, strictly speaking, unnecessary, but it adds the terraform block with supported versions.
- The update to the iam module in the examples was necessary to support the new provider versions.
- Workaround for "Provider produced inconsistent final plan" when creating ASGs at the same time as the cluster. See https://github.com/terraform-providers/terraform-provider-aws/issues/14085 for full details.
- Blacklist Terraform 0.13.0, as it was too strict about dropped attributes when migrating from AWS provider v2 to v3.
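As a rough sketch, a `required_version` constraint along these lines skips the problematic release while still allowing later 0.13 patch versions; the exact version numbers the module and examples pin to may differ.

```hcl
terraform {
  # Illustrative constraint: allow 0.12.x and 0.13.x but exclude 0.13.0.
  required_version = ">= 0.12.9, != 0.13.0"

  required_providers {
    aws = ">= 3.3.0" # illustrative lower bound for the v3 provider
  }
}
```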
BREAKING CHANGES: The default for `cluster_endpoint_private_access_cidrs` is now `null` instead of `["0.0.0.0/0"]`. This makes the variable required when `cluster_create_endpoint_private_access_sg_rule` is set to `true`, forcing everyone who wants private access to explicitly set their allowed CIDR blocks, for the sake of the principle of least access by default.
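A minimal sketch of the configuration now required; the CIDR block is a placeholder for your own VPC or office ranges.

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other required arguments ...

  cluster_endpoint_private_access                = true
  cluster_create_endpoint_private_access_sg_rule = true

  # No longer defaults to ["0.0.0.0/0"]; it must be set explicitly.
  cluster_endpoint_private_access_cidrs = ["10.0.0.0/16"]
}
```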
* Add example for launch config with mixed lifecycles (a rough sketch follows below)
* Explain what an on-demand instance is
* Tweak wording
Co-authored-by: Thomas O'Neill <toneill@new-innov.com>
Co-authored-by: Daniel Piddock <daniel.piddock@teamcmp.com>
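As a rough illustration of what "mixed lifecycles" means here, not the exact example added above: one on-demand worker group alongside a spot worker group, both backed by launch configurations. Names, sizes and the `spot_price` value are placeholders.

```hcl
worker_groups = [
  {
    # On-demand lifecycle: standard pricing, instances are never reclaimed by AWS.
    name                 = "on-demand"
    instance_type        = "t3.medium"
    asg_desired_capacity = 1
  },
  {
    # Spot lifecycle: cheaper spare capacity that AWS may reclaim at short notice.
    name                 = "spot"
    instance_type        = "t3.medium"
    spot_price           = "0.02"
    asg_desired_capacity = 1
    kubelet_extra_args   = "--node-labels=node.kubernetes.io/lifecycle=spot"
  },
]
```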
Fix for:
Error: Provider produced inconsistent final plan
When expanding the plan for module.eks.random_pet.workers_launch_template[0]
to include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/random" changed the planned action from
CreateThenDelete to DeleteThenCreate.
NOTES: Starting in v12.1.0 the `cluster_id` output depends on the
`wait_for_cluster` null resource. This means that, if the module is set
to manage the aws_auth ConfigMap and the user followed the typical Usage
Example, initialisation of the kubernetes provider will be blocked until
the cluster is really ready. Kubernetes resources in the same plan do not
need to depend on anything explicitly.
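A minimal sketch of that typical usage, assuming the module instance is named `eks` (the `load_config_file` argument applies to pre-2.0 versions of the kubernetes provider):

```hcl
data "aws_eks_cluster" "cluster" {
  # Reading through cluster_id makes both data sources wait for wait_for_cluster.
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
```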