NOTES: Tags that are passed into `var.worker_groups_launch_template` or `var.worker_groups` now override tags passed in via `var.tags`, for Autoscaling Groups only. This allows ASG tags to be overridden so that `propagate_at_launch` can be tweaked for a particular key.
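A sketch of the new behaviour (other module inputs omitted; tag values are illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  # Applied to all taggable resources, including ASGs, unless overridden below.
  tags = {
    Environment = "prod"
  }

  worker_groups = [
    {
      name = "workers"
      # Wins over var.tags for this group's ASG only, so that
      # propagate_at_launch can be tuned for this particular key.
      tags = [
        {
          key                 = "Environment"
          value               = "prod"
          propagate_at_launch = false
        },
      ]
    },
  ]
}
```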
NOTES: The output `cloudwatch_log_group_name` was incorrectly returning the log group name as a list of strings. As a workaround, people were using `module.eks_cluster.cloudwatch_log_group_name[0]`, but that was inconsistent with the output name. Those users can now use `module.eks_cluster.cloudwatch_log_group_name` directly.
Additional support for Terraform v0.13 and AWS provider v3!
- The update to the vpc module in the examples was, strictly speaking, unnecessary, but it adds a terraform block with the supported versions.
- The update to the iam module in the examples was necessary to support the new provider versions.
- Workaround for "Provider produced inconsistent final plan" when creating ASGs at the same time as the cluster. See https://github.com/terraform-providers/terraform-provider-aws/issues/14085 for full details.
- Blacklist Terraform 0.13.0, as it was too strict about dropped attributes when migrating from aws provider v2 to v3.
BREAKING CHANGES: The default for `cluster_endpoint_private_access_cidrs` is now `null` instead of `["0.0.0.0/0"]`. This makes the variable required when `cluster_create_endpoint_private_access_sg_rule` is set to `true`, forcing everyone who wants private access to explicitly set their allowed CIDR blocks, for the sake of the principle of least access by default.
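Anyone enabling the private access rule now has to pass their own CIDRs explicitly, along these lines (CIDR value illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  cluster_create_endpoint_private_access_sg_rule = true

  # Previously defaulted to ["0.0.0.0/0"]; now required, and should list
  # only the networks that genuinely need private API access.
  cluster_endpoint_private_access_cidrs = ["10.0.0.0/8"]
}
```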
NOTES: Starting in v12.1.0 the `cluster_id` output depends on the `wait_for_cluster` null resource. This means that initialisation of the kubernetes provider will be blocked until the cluster is really ready, if the module is set to manage the aws_auth ConfigMap and the user followed the typical Usage Example. Kubernetes resources in the same plan do not need to depend on anything explicitly.
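The typical Usage Example referred to above configures the provider roughly like this, so the data sources (and therefore the provider) wait on `cluster_id`:

```hcl
data "aws_eks_cluster" "cluster" {
  # cluster_id depends on wait_for_cluster, so this blocks until ready.
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
```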
NOTES: Addition of the IMDSv2 metadata configuration block to Launch Templates will cause a diff to be generated for existing Launch Templates on the first Terraform apply. The defaults match the existing behaviour.
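As a sketch, the metadata settings can then be tightened per worker group (key names assumed to follow the module's `workers_group_defaults` convention; the defaults keep the previous behaviour, while `"required"` enforces IMDSv2):

```hcl
worker_groups_launch_template = [
  {
    name = "imdsv2-workers"

    # "required" enforces token-based (IMDSv2) access to the
    # instance metadata service; the defaults match old behaviour.
    metadata_http_endpoint               = "enabled"
    metadata_http_tokens                 = "required"
    metadata_http_put_response_hop_limit = 2
  },
]
```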
NOTES: New variable `worker_create_cluster_primary_security_group_rules` to allow communication between pods on workers and pods using the primary cluster security group (Managed Node Groups or Fargate). It defaults to `false` to avoid potential conflicts with existing security group rules users may have implemented.
BREAKING CHANGES: The default `cluster_version` is now 1.16. Kubernetes 1.16 includes a number of deprecated API removals, and you need to ensure your applications and add-ons are updated, or workloads could fail after the upgrade is complete. For more information on the API removals, see the [Kubernetes blog post](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/). For action you may need to take before upgrading, see the steps in the [EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html). Please explicitly set your `cluster_version` to an older EKS version until your workloads are ready for Kubernetes 1.16.
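For example, to stay on the previous default until your workloads are ready:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  # Pin to a pre-1.16 version until workloads tolerate the API removals.
  cluster_version = "1.15"
}
```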
* Create kubeconfig with non-executable permissions
The kubeconfig does not really need to be executable, so let's not create it with the executable bit set.
* Bump tf version
* Remove template_file for generating kubeconfig
Push logic from terraform down to the template. Makes the formatting
slightly easier to follow
* Remove template_file for generating userdata
Updates to the eks_cluster no longer trigger recreation of launch
configurations
* Remove template_file for LT userdata
* Remove template dependency
* Add support for EC2 principal in assume worker role policy for China AWS
* Remove local partition according to requested change
Co-authored-by: Valeri GOLUBEV <vgolubev@kyriba.com>
BREAKING CHANGE: The terraform-aws-eks module now requires at least kubernetes provider `1.11.1`. This may cause terraform to fail to init if users have pinned `version = "1.10"` as we had in the examples.
* Configurable local exec command for waiting until cluster is healthy
* readme
* line feeds
* format
* fix readme
* fix readme
* changelog (#3)
* simplify wait_for_cluster command
* readme
* no op for manage auth false
* formatting
* docs? not sure
* linter
* specify dependency to wait for cluster more accurately
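The configurable wait command described above can be overridden when, say, `curl` is not available on the machine running Terraform (`$ENDPOINT` is substituted by the module; the `wget` variant here is illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  # The default polls $ENDPOINT/healthz with curl; any shell command works.
  wait_for_cluster_cmd = "until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done"
}
```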
* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow customizing the role name, as per workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works
"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.
* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Fix Error: "name_prefix" cannot be longer than 32 characters
* Update READMEs again
* Fix double destroy
Was producing index errors when running destroy on an empty state.
* Remove duplicate iam_role in node_group
I think this logic works. Needs some testing with an externally created
role.
* Fix index fail if node group manually deleted
* Keep aws_auth template in top module
Downside: count causes issues as usual: can't use distinct() in the
child module, so there's a template render for every node_group even if
only one role is really in use. Hopefully this is just output noise
rather than a technical issue.
* Hack to have node_groups depend on aws_auth etc
The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes
can join the cluster. This breaks the kubernetes resource, which cannot
do a force create. Remove the race condition with an explicit depends_on.
The IAM role can no longer be pulled out of the node_group.
* Pull variables via the random_pet to cut logic
No point having the same logic in two different places
* Pass all ForceNew variables through the pet
* Do a deep merge of NG labels and tags
* Update README.. again
* Additional managed node outputs #644
Add change from @TBeijin from PR #644
* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals
This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.
* Fix typo
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
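Put together, the new submodule is driven from the parent module roughly like this (values illustrative; unset keys fall back through `node_groups_defaults` to the worker defaults):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  node_groups_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 50
  }

  node_groups = {
    example = {
      desired_capacity = 1
      min_capacity     = 1
      max_capacity     = 3
      instance_type    = "m5.large"

      k8s_labels = {
        Environment = "test"
      }
    }
  }
}
```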