BREAKING CHANGES: We have decided to remove the `random_pet` resources from Managed Node Groups (MNG). They were used to recreate an MNG when something changed and to simulate the newly added `node_group_name_prefix` argument, but they were causing a lot of trouble. To upgrade the module without recreating your MNGs, you will need to explicitly reuse their previous names and set them in your MNG `name` argument. Please see the [upgrade docs](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/upgrades.md#upgrade-module-to-v1700-for-managed-node-groups) for more details.
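A minimal sketch of the upgrade step, assuming a node group keyed `example`; the name value is a placeholder for whatever the removed `random_pet` previously generated (look it up with `terraform state show` or in the AWS console before upgrading):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other required arguments elided ...

  node_groups = {
    example = {
      # Reuse the exact name the random_pet suffix produced previously so
      # Terraform does not plan a replacement. The value below is a placeholder.
      name = "my-cluster-example-sensible-hedgehog"
    }
  }
}
```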
NOTES: An ASG's launch template version is now automatically set to an explicit version. This ensures that an instance refresh is triggered whenever the launch template changes. The default `launch_template_version` is now used to determine the latest or default version of the created launch template for self-managed worker groups.
Signed-off-by: Benjamin Ash <bash@intelerad.com>
Co-authored-by: Thierno IB. BARRY <ibrahima.br@gmail.com>
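A hedged sketch of pinning a self-managed worker group's launch template version; the map key names follow the note above but may differ between module versions, so check `workers_group_defaults` for the exact supported keys:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other required arguments elided ...

  worker_groups_launch_template = [
    {
      name          = "workers"
      instance_type = "m5.large"

      # "$Latest" or "$Default"; the module resolves this to an explicit
      # version number so launch template changes trigger an instance refresh.
      launch_template_version = "$Latest"
    },
  ]
}
```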
BREAKING CHANGES: We removed the dependency on the deprecated `hashicorp/template` provider and now use Terraform's built-in `templatefile` function. This will break some workflows that previously passed in the raw contents of a template file for processing: the `templatefile` function requires a template file that exists before running a plan.
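For illustration, the new pattern looks roughly like this (the file path and variables are placeholders); the template file must exist on disk before `terraform plan`:

```hcl
locals {
  # Render an on-disk template with the built-in templatefile() function
  # instead of a data "template_file" resource from hashicorp/template.
  userdata = templatefile("${path.module}/templates/userdata.sh.tpl", {
    cluster_name = "my-cluster"
    endpoint     = "https://example.eks.amazonaws.com"
  })
}
```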
BREAKING CHANGES: To add SPOT support for MNG, `instance_type` is now a list and has been renamed to `instance_types`. This will probably rebuild existing Managed Node Groups.
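A sketch of the renamed argument, assuming a node group keyed `spot`; `capacity_type` mirrors the `aws_eks_node_group` argument and may not be the exact key your module version expects:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other required arguments elided ...

  node_groups = {
    spot = {
      # instance_types replaces the old singular instance_type and is a list.
      instance_types = ["m5.large", "m5a.large", "m4.large"]
      capacity_type  = "SPOT"
    }
  }
}
```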
NOTES: The EKS cluster can be provisioned with both private and public subnets, but Fargate only accepts private ones. This new variable allows overriding the subnets to explicitly pass only the private subnets to Fargate and work around that issue.
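A sketch of the override, under the assumption that the new variable is called `fargate_subnets` (the note above does not name it, so treat the name as illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... other required arguments elided ...

  # The cluster itself can span public and private subnets.
  subnets = concat(module.vpc.private_subnets, module.vpc.public_subnets)

  # Hypothetical variable name: pass only private subnets to Fargate profiles.
  fargate_subnets = module.vpc.private_subnets
}
```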
Additional support for Terraform v0.13 and aws v3!
- The update to the vpc module in the examples was, strictly speaking, unnecessary, but it adds the terraform block with supported versions.
- The update to the iam module in the example was necessary to support the new provider versions.
- Workaround for "Provider produced inconsistent final plan" when creating ASGs at the same time as the cluster. See https://github.com/terraform-providers/terraform-provider-aws/issues/14085 for full details.
- Blacklist 0.13.0, as it was too strict about dropped attributes when migrating from aws v2 to v3.
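An illustrative version-constraint block matching the notes above; the exact provider bounds depend on the module release:

```hcl
terraform {
  # Allow Terraform 0.13.x but skip the overly strict 0.13.0 release.
  required_version = ">= 0.12.9, != 0.13.0"

  required_providers {
    aws = ">= 3.0, < 4.0" # illustrative bounds for the aws v3 upgrade
  }
}
```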
BREAKING CHANGE: The terraform-aws-eks module now requires at least kubernetes provider `1.11.1`. This may cause `terraform init` to fail if users have pinned version = "1.10", as we had in the examples.
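A minimal example of the corrected provider pin:

```hcl
terraform {
  required_providers {
    # An exact "1.10" pin will now fail init; require at least 1.11.1 instead.
    kubernetes = ">= 1.11.1"
  }
}
```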
* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow customizing the role name, as for workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works
"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.
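A small illustration of the type rule behind that error: both arms of a conditional must have consistent types, so two differently shaped objects fail while `set(string)` and an empty set unify fine.

```hcl
variable "create_eks" {
  type    = bool
  default = true
}

locals {
  # Fails to plan: the two object arms have different attribute sets.
  # bad = var.create_eks ? { name = "x" } : { id = "y" }

  # Works: toset([]) is type-consistent with a set of strings.
  worker_roles = var.create_eks ? toset(["node-role"]) : toset([])
}
```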
* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Error: "name_prefix" cannot be longer than 32
* Update READMEs again
* Fix double destroy
Was producing index errors when running destroy on an empty state.
* Remove duplicate iam_role in node_group
I think this logic works. Needs some testing with an externally created
role.
* Fix index fail if node group manually deleted
* Keep aws_auth template in top module
Downside: count causes issues as usual: distinct() can't be used in the child module, so there is a template render for every node_group even if only one role is really in use. Hopefully this is just output noise rather than a technical issue.
* Hack to have node_groups depend on aws_auth etc
The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes can join the cluster. This breaks the kubernetes resource, which cannot do a force create. Remove the race condition with an explicit dependency. We can't pull the IAM role out of the node_group any more.
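A hedged sketch of the ordering fix, not the module's exact wiring; the cluster, IAM role, subnets, and aws-auth ConfigMap referenced below are assumed to exist elsewhere in the configuration:

```hcl
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.workers.arn
  subnet_ids      = var.subnets

  scaling_config {
    desired_size = 1
    max_size     = 3
    min_size     = 1
  }

  # Create the node group only after the aws-auth ConfigMap exists, so EKS
  # does not create/edit the ConfigMap first and break the kubernetes
  # resource, which cannot force-create it.
  depends_on = [kubernetes_config_map.aws_auth]
}
```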
* Pull variables via the random_pet to cut logic
No point having the same logic in two different places
* Pass all ForceNew variables through the pet
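A minimal sketch of the keepers pattern, with placeholder attribute names: changing any keeper forces a new pet name, and embedding that name in the node group name then forces the node group to be recreated.

```hcl
variable "instance_type" {
  default = "m5.large"
}

variable "disk_size" {
  default = 50
}

variable "iam_role_arn" {
  default = ""
}

resource "random_pet" "node_group" {
  length    = 2
  separator = "-"

  # Any change to a keeper value rotates the pet name.
  keepers = {
    instance_type = var.instance_type
    disk_size     = var.disk_size
    iam_role_arn  = var.iam_role_arn
  }
}
```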
* Do a deep merge of NG labels and tags
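Illustrative only, with placeholder variable names: the "deep" part is that the nested label/tag maps are merged rather than replaced, with the per-group map winning on conflicting keys.

```hcl
variable "default_labels" {
  default = { env = "prod" }
}

variable "group_labels" {
  default = { workload = "batch" }
}

locals {
  # merge() gives precedence to later arguments, so per-group labels override
  # the shared defaults key by key.
  node_group_labels = merge(var.default_labels, var.group_labels)
}
```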
* Update README.. again
* Additional managed node outputs #644
Add change from @TBeijin from PR #644
* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals
This simplifies the random_pet and aws_eks_node_group logic, which was causing much consternation on the PR.
* Fix typo
Co-authored-by: Max Williams <max.williams@deliveryhero.com>