Mirror of https://github.com/ysoftdevs/terraform-aws-eks.git, synced 2026-03-11 21:11:32 +01:00
Fix issue where ConfigMap isn't applied to new cluster (#235)
If you are trying to recover a cluster that was deleted, the current code will not re-apply the ConfigMap: the template is already rendered, so the kubectl command never gets triggered. This change adds the cluster endpoint (which will be different when a new cluster is spun up, even with the same name) to the triggers, so we force a re-render and cause the kubectl command to run.
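The fix leans on how Terraform's null_resource triggers behave: when any value in the triggers map differs from what is recorded in state, the resource is replaced and its provisioners run again. Below is a minimal sketch of that pattern, reconstructed from the diff further down in the module's Terraform 0.11 syntax; the local-exec command and the manifest filename are hypothetical placeholders, not the module's actual kubectl retry loop.

resource "null_resource" "update_config_map_aws_auth" {
  count = "${var.manage_aws_auth ? 1 : 0}"

  # Replace (and therefore re-provision) this resource whenever the rendered
  # ConfigMap or the cluster endpoint changes. A recreated cluster gets a new
  # endpoint even if it keeps the same name, so the ConfigMap is applied again.
  triggers {
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
    endpoint            = "${aws_eks_cluster.this.endpoint}"
  }

  # Hypothetical stand-in; the real module wraps kubectl apply in a retry loop.
  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/config_map_aws_auth.yaml"
  }
}

With the endpoint in the triggers map, terraform plan marks the null_resource for replacement as soon as aws_eks_cluster.this.endpoint changes, which re-runs the provisioner against the new cluster.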
committed by Max Williams
parent 91eb56f4aa
commit 03c223131f
@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 
 - Write your awesome addition here (by @you)
 
 ### Changed
 
+- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
 - Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
 - Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
 - Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)
@@ -14,6 +14,7 @@ resource "null_resource" "update_config_map_aws_auth" {
 
   triggers {
     config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    endpoint            = "${aws_eks_cluster.this.endpoint}"
   }
 
   count = "${var.manage_aws_auth ? 1 : 0}"