* defaulting to data lookup if worker_group_defaults has no ami_id entry
* using coalesce instead of lookup and also using local instead of var
* adding defaults support for specifying Windows-based AMIs
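The AMI fallback described above can be sketched in HCL (a sketch only, assuming a `data.aws_ami.eks_worker` data source and a `workers_group_defaults` local; names are illustrative, not the module's exact code):

```hcl
# Prefer an explicitly configured ami_id; otherwise fall back to the
# AMI found by the data source. coalesce returns the first non-null value.
locals {
  default_ami_id = coalesce(
    lookup(local.workers_group_defaults, "ami_id", null),
    data.aws_ami.eks_worker.id,
  )
}
```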
* Switch Validate github action to use env vars
* update changelog after release
* Update CHANGELOG.md
Co-authored-by: Thierno IB. BARRY <ibrahima.br@gmail.com>
* Configurable local exec command for waiting until cluster is healthy
* readme
* line feeds
* format
* fix readme
* fix readme
* Configurable local exec command for waiting until cluster is healthy (#1)
* change log
* Configurable local exec wait for cluster op (#2)
* changelog (#3)
* Changelog (#4)
* simplify wait_for_cluster command
* readme
* no op for manage auth false
* formatting
* docs
* linter
* specify dependency to wait for cluster more accurately
* Don't fail on destroy, when provider resource was removed
* Update Changelog
* Node groups submodule (#650)
* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow customizing the role name, as per workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works
"The true and false result expressions must have consistent types. The
given expressions are object and object, respectively."
Well, that's useful. But apparently set(string) and set() are ok. So
everything else is more complicated. Thanks.
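The quoted error is Terraform 0.12's rule that both branches of a conditional expression must produce the same type. A minimal sketch of the shape that works (illustrative names only, not the module's actual locals):

```hcl
# Fails: the two branches are differently-shaped objects.
#   var.create_eks ? aws_eks_node_group.workers[0] : {}
#
# Works: both branches collapse to a common type. As noted above,
# set(string) and an empty set unify fine.
locals {
  node_group_names = var.create_eks ? toset(keys(var.node_groups)) : toset([])
}
```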
* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Error: "name_prefix" cannot be longer than 32
* Update READMEs again
* Fix double destroy
Was producing index errors when running destroy on an empty state.
* Remove duplicate iam_role in node_group
I think this logic works. Needs some testing with an externally created
role.
* Fix index fail if node group manually deleted
* Keep aws_auth template in top module
Downside: count causes issues as usual. We can't use distinct() in the
child module, so there's a template render for every node_group even if
only one role is really in use. Hopefully just output noise rather than
a technical issue.
* Hack to have node_groups depend on aws_auth etc
The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes
can join the cluster. This breaks the kubernetes resource which cannot
do a force create. Remove the race condition with explicit depend.
Can't pull the IAM role out of the node_group any more.
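The explicit dependency can be sketched roughly like this (hypothetical resource names; in practice the dependency has to be passed into the submodule rather than referenced directly):

```hcl
# Ensure the aws-auth ConfigMap exists before EKS creates managed node
# groups, which would otherwise write the ConfigMap themselves and
# conflict with the kubernetes_config_map resource.
resource "aws_eks_node_group" "workers" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...
  depends_on = [kubernetes_config_map.aws_auth]
}
```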
* Pull variables via the random_pet to cut logic
No point having the same logic in two different places
* Pass all ForceNew variables through the pet
* Do a deep merge of NG labels and tags
* Update README.. again
* Additional managed node outputs #644
Add change from @TBeijin from PR #644
* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals
This simplifies the random_pet and aws_eks_node_group logic, which was
causing much consternation on the PR.
* Fix typo
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
* Update Changelog
* Add public access endpoint CIDRs option (terraform-aws-eks#647) (#673)
* Add public access endpoint CIDRs option (terraform-aws-eks#647)
* Update required provider version to 2.44.0
* Fix formatting in docs
* Re-generate docs with terraform-docs 0.7.0 and bump pre-commit-terraform version (#668)
* re-generate docs with terraform-docs 0.7.0
* bump pre-commit-terraform version
* Release 8.0.0 (#662)
* Release 8.0.0
* Update changelog
* remove 'defauls' node group
* Make curl silent
* Update Changelog
Co-authored-by: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
Co-authored-by: Max Williams <max.williams@deliveryhero.com>
Co-authored-by: Siddarth Prakash <1428486+sidprak@users.noreply.github.com>
Co-authored-by: Thierno IB. BARRY <ibrahima.br@gmail.com>
* wait for cluster to respond before creating auth config map
* adds changelog entry
* fixup tf format
* fixup kubernetes required version
* fixup missing local for kubeconfig_filename
* combine wait for cluster into provisioner on cluster; change status check to /healthz on endpoint
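The combined provisioner can be sketched like this (an assumed shape, not the module's exact code; `wait_for_cluster_cmd` is the configurable command from the earlier commits):

```hcl
# Poll the cluster endpoint until /healthz answers, so the aws-auth
# ConfigMap is only applied once the API server is reachable.
resource "aws_eks_cluster" "this" {
  # ... name, role_arn, vpc_config ...
  provisioner "local-exec" {
    command = var.wait_for_cluster_cmd
    environment = {
      ENDPOINT = self.endpoint
    }
  }
}

variable "wait_for_cluster_cmd" {
  description = "Local command to run until the cluster is healthy"
  default     = "until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"
}
```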
* fix: make kubernetes provider version more permissive
* Fix aws-auth config map for managed node groups
This change adds the IAM role used for each managed node group to the
aws-auth config map. This fixes an issue where managed nodes could not
access the EKS kubernetes API server.
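The fix amounts to merging a mapRoles entry for each managed node group's IAM role into the aws-auth ConfigMap. A sketch with illustrative names (the username template and group names are the standard EKS node bootstrap values):

```hcl
# One mapRoles entry per distinct node group role, so managed nodes
# are authorized to join the cluster.
locals {
  node_group_auth_roles = [
    for arn in distinct(local.node_group_role_arns) : {
      rolearn  = arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }
  ]
}
```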
* update changelog
* fix format
* add comment
Co-authored-by: Max Williams <max.williams@deliveryhero.com>