* feat: Add support for EKS hybrid nodes
* feat: Add support for EKS Auto Mode
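  A minimal sketch of enabling Auto Mode through the module, assuming the `cluster_compute_config` input added for this feature (names and versions are illustrative):
  ```hcl
  module "eks" {
    source  = "terraform-aws-modules/eks/aws"
    version = "~> 20.31"

    cluster_name    = "example"
    cluster_version = "1.31"

    # EKS Auto Mode: AWS manages compute via built-in node pools
    cluster_compute_config = {
      enabled    = true
      node_pools = ["general-purpose", "system"]
    }
  }
  ```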
* chore: Update test directory names
* chore: Clean up examples and tests
* fix: Clean up and last-minute changes for GA
* chore: Formatting
* chore: Bump min required version for new features
* fix: Corrections from testing/validation on existing clusters
* feat: Add policy for custom tags on EKS Auto Mode, validate examples
* chore: Expand on `CAM` acronym
* chore: Update README to match examples
* feat: Add new output values for OIDC issuer URL and provider that support IPv4/IPv6 dualstack
* chore: Revert addition of `dualstack_oidc_provider`
* fix: Add check for `aws` partition since this is the only partition currently supported
* fix: Revert partition conditional logic
* fix: Ensure the correct service CIDR and IP family are used in the rendered user data
* chore: Updates from testing and validating
* chore: Fix example destroy instructions
* fix: Only require `cluster_service_cidr` when `create = true`
* chore: Clean up commented out code and add note on check length
* feat: Replace `resolve_conflicts` with `resolve_conflicts_on_create`/`resolve_conflicts_on_delete`; raise MSV of the AWS provider to `v5.0` to support them
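  For illustration, a sketch of the new arguments as they would appear in the module's `cluster_addons` map (addon name and values are examples):
  ```hcl
  module "eks" {
    source = "terraform-aws-modules/eks/aws"

    # ...

    cluster_addons = {
      coredns = {
        # Replaces the deprecated `resolve_conflicts` argument;
        # requires AWS provider v5.0+
        resolve_conflicts_on_create = "OVERWRITE"
        resolve_conflicts_on_delete = "PRESERVE"
      }
    }
  }
  ```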
* fix: Replace dynamic DNS suffix for `sts:AssumeRole` API calls with a static suffix
* feat: Add module tag
* feat: Align Karpenter permissions with Karpenter v1beta1/v0.32 permissions from upstream
* refactor: Move `aws-auth` ConfigMap functionality to its own sub-module
* chore: Update examples
* feat: Add state `moved` block for Karpenter Pod Identity role re-name
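  A `moved` block lets existing state follow a resource re-name without a destroy/recreate cycle. A sketch of the pattern — the addresses here are hypothetical, not the module's actual ones:
  ```hcl
  # Maps state from the old resource address to the new one on the next plan
  moved {
    from = aws_iam_role.karpenter_irsa[0]
    to   = aws_iam_role.karpenter_controller[0]
  }
  ```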
* fix: Correct variable `create` description
* feat: Add support for cluster access entries
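  A minimal sketch of cluster access entries via the module's `access_entries` input (the principal ARN and map keys are illustrative):
  ```hcl
  module "eks" {
    # ...

    access_entries = {
      admin = {
        principal_arn = "arn:aws:iam::111122223333:role/platform-admin"

        policy_associations = {
          admin = {
            policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
            access_scope = {
              type = "cluster"
            }
          }
        }
      }
    }
  }
  ```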
* chore: Bump MSV of Terraform to `1.3`
* fix: Replace defunct kubectl provider with an updated forked equivalent
* chore: Update and validate examples for access entry; clean up provider usage
* docs: Correct duplicated variable descriptions
* feat: Add support for Cloudwatch log group class argument
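  A sketch of the new argument, assuming the module exposes it as `cloudwatch_log_group_class`, mirroring the provider argument of the same name:
  ```hcl
  module "eks" {
    # ...

    create_cloudwatch_log_group = true
    # "STANDARD" (default) or "INFREQUENT_ACCESS"
    cloudwatch_log_group_class = "INFREQUENT_ACCESS"
  }
  ```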
* fix: Update usage tag placement, fix Karpenter event spelling, add upcoming changes section to upgrade guide
* feat: Update Karpenter module to generalize naming used and align policy with the upstream Karpenter policy
* feat: Add native support for Windows based managed nodegroups similar to AL2 and Bottlerocket
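  A sketch of a Windows managed node group, assuming the standard `ami_type` values from the EKS API (group name and sizes are illustrative):
  ```hcl
  module "eks" {
    # ...

    eks_managed_node_groups = {
      windows = {
        ami_type       = "WINDOWS_CORE_2022_x86_64"
        instance_types = ["m5.large"]

        min_size = 1
        max_size = 3
      }
    }
  }
  ```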
* feat: Update self-managed nodegroup module to use latest features of ASG
* docs: Update and simplify docs
* fix: Correct variable description for AMI types
* fix: Update upgrade guide with changes; rename Karpenter controller resource names to support migrating for users
* docs: Complete upgrade guide docs for migration and changes applied
* Update examples/karpenter/README.md
Co-authored-by: Anton Babenko <anton@antonbabenko.com>
* Update examples/outposts/README.md
Co-authored-by: Anton Babenko <anton@antonbabenko.com>
* Update modules/karpenter/README.md
Co-authored-by: Anton Babenko <anton@antonbabenko.com>
---------
Co-authored-by: Anton Babenko <anton@antonbabenko.com>
BREAKING CHANGES: We removed the dependency on the deprecated `hashicorp/template` provider and now use the Terraform built-in `templatefile` function. This breaks workflows that previously passed in the raw contents of a template file for processing; the `templatefile` function requires a template file that exists on disk before running a plan. See the sketch below.
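The migration, roughly (the template path and variables are illustrative):
```hcl
# Before: deprecated hashicorp/template provider
data "template_file" "userdata" {
  template = file("${path.module}/templates/userdata.sh.tpl")
  vars = {
    cluster_name = var.cluster_name
  }
}

# After: built-in function; the .tpl file must exist at plan time
locals {
  userdata = templatefile("${path.module}/templates/userdata.sh.tpl", {
    cluster_name = var.cluster_name
  })
}
```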
NOTES: Using the [terraform-aws-modules/http](https://registry.terraform.io/providers/terraform-aws-modules/http/latest) provider is a more platform-agnostic way to wait for cluster availability than using a local-exec. With this change we're able to provision EKS clusters and manage the `aws-auth` ConfigMap while still using the `hashicorp/tfc-agent` docker image.
NOTES: The output `cloudwatch_log_group_name` was incorrectly returning the log group name as a list of strings. As a workaround, people were using `module.eks_cluster.cloudwatch_log_group_name[0]`, which was inconsistent with the output name. Those users can now use `module.eks_cluster.cloudwatch_log_group_name` directly, as sketched below.
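```hcl
# Before: output was (incorrectly) a list of strings
#   value = module.eks_cluster.cloudwatch_log_group_name[0]

# After: output is a plain string
output "log_group_name" {
  value = module.eks_cluster.cloudwatch_log_group_name
}
```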
NOTES: Starting in v12.1.0 the `cluster_id` output depends on the `wait_for_cluster` null resource. This means that initialisation of the kubernetes provider will be blocked until the cluster is actually ready, provided the module is set to manage the `aws-auth` ConfigMap and the user followed the typical usage example (sketched below). Kubernetes resources in the same plan do not need to depend on anything explicitly.
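A sketch of the typical usage example of that era — referencing `cluster_id` is what delays kubernetes provider initialisation until the cluster is reachable:
```hcl
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```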
NOTES: New variable `worker_create_cluster_primary_security_group_rules` to allow communication between pods on workers and pods using the primary cluster security group (Managed Node Groups or Fargate). It defaults to `false` to avoid potential conflicts with existing security group rules users may have implemented.
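Opting in, sketched:
```hcl
module "eks" {
  # ...

  # Allow communication between pods on workers and pods using the
  # primary cluster security group (Managed Node Groups or Fargate)
  worker_create_cluster_primary_security_group_rules = true
}
```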
* Remove template_file for generating kubeconfig
Push logic from Terraform down to the template, which makes the formatting slightly easier to follow
* Remove template_file for generating userdata
Updates to the eks_cluster now do not trigger recreation of launch
configurations
* Remove template_file for LT userdata
* Remove template dependency
* Don't fail on destroy, when provider resource was removed
* Update Changelog
* Node groups submodule (#650)
* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow customizing the role name, as with workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works
"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.
* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Error: "name_prefix" cannot be longer than 32
* Update READMEs again
* Fix double destroy
Was producing index errors when running destroy on an empty state.
* Remove duplicate iam_role in node_group
I think this logic works. Needs some testing with an externally created
role.
* Fix index fail if node group manually deleted
* Keep aws_auth template in top module
Downside: count causes issues as usual: can't use distinct() in the
child module, so there's a template render for every node_group even if
only one role is actually in use. Hopefully this is just output noise
rather than a technical issue.
* Hack to have node_groups depend on aws_auth etc
The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes
can join the cluster. This breaks the kubernetes resource, which cannot
force-create it. Remove the race condition with an explicit dependency
(sketched below). Can't pull the IAM role out of the node_group any more.
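A sketch of the idea — the module threads the dependency through internally; this shows the equivalent resource-level expression:
```hcl
resource "aws_eks_node_group" "workers" {
  # ...

  # Wait until aws-auth exists so the node group doesn't auto-create
  # the ConfigMap and race Terraform's kubernetes resource
  depends_on = [kubernetes_config_map.aws_auth]
}
```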
* Pull variables via the random_pet to cut logic
No point having the same logic in two different places
* Pass all ForceNew variables through the pet
* Do a deep merge of NG labels and tags
* Update README.. again
* Additional managed node outputs #644
Add change from @TBeijin from PR #644
* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals
This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.
* Fix typo
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
* Update Changelog
* Add public access endpoint CIDRs option (terraform-aws-eks#647) (#673)
* Add public access endpoint CIDRs option (terraform-aws-eks#647)
* Update required provider version to 2.44.0
* Fix formatting in docs
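  Sketch of the option, assuming the input name `cluster_endpoint_public_access_cidrs` (CIDR is illustrative):
  ```hcl
  module "eks" {
    # ...

    cluster_endpoint_public_access       = true
    cluster_endpoint_public_access_cidrs = ["198.51.100.0/24"]
  }
  ```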
* Re-generate docs with terraform-docs 0.7.0 and bump pre-commit-terraform version (#668)
* re-generate docs with terraform-docs 0.7.0
* bump pre-commit-terraform version
* Release 8.0.0 (#662)
* Release 8.0.0
* Update changelog
* remove 'defaults' node group
* Make curl silent
* Update Changelog
Co-authored-by: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
Co-authored-by: Siddarth Prakash <1428486+sidprak@users.noreply.github.com>
Co-authored-by: Thierno IB. BARRY <ibrahima.br@gmail.com>
* Add destroy-time flag
* Update changelog
Fix cluster count
* Fix cluster count
* Fix docs
* Fix outputs
* Fix unsupported attribute on cluster_certificate_authority_data output
Co-Authored-By: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
* Remove unnecessary flatten from cluster_endpoint output
Co-Authored-By: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
* Improve description of var.enabled
* Fix errors manifesting when used on an existing cluster
* Update README.md
* Renamed destroy-time flag
* Revert removal of changelog addition entry
* Update flag name in readme
* Update flag variable name
* Update cluster referencing for consistency
* Update flag name to `create_eks`
* Fixed incorrect count-based reference to aws_eks_cluster.this (there's only one)
* Replaced all incorrect aws_eks_cluster.this[count.index] references (there will be just one, so using '[0]').
* Changelog update, explicitly mentioning flag
* Fixed interpolation deprecation warning
* Fixed outputs to support conditional cluster
* Applied create_eks to aws_auth.tf
* Removed unused variable. Updated Changelog. Formatting.
* Fixed references to aws_eks_cluster.this[0] that would raise errors when setting create_eks to false whilst having launch templates or launch configurations configured.
* Readme and example updates.
* Revert "Readme and example updates."
This reverts commit 18a0746355e136010ad54858a1b518406f6a3638.
* Updated README section on conditional creation with a provider example (sketched below)
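  Conditional creation, sketched — with `create_eks = false` the module provisions nothing, so downstream configuration must not assume cluster outputs exist:
  ```hcl
  module "eks" {
    source = "terraform-aws-modules/eks/aws"

    cluster_name = "example"
    create_eks   = false

    # ...
  }
  ```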
* Added conditions to node_groups.
* Fixed reversed map_roles check
* Update aws_auth.tf
Revert this due to https://github.com/terraform-aws-modules/terraform-aws-eks/pull/611
This commit changes the way aws-auth is managed. Previously a local file
was used to generate the template and a null resource to apply it; this
is now switched to the Terraform kubernetes provider (sketched below).
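The shape of the change, sketched (the role ARN and data values are illustrative):
```hcl
# Previously: rendered to a local file + null_resource running kubectl apply.
# Now: managed directly by the kubernetes provider.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/eks-worker"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ])
  }
}
```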