Fix issue where ConfigMap isn't applied to new cluster (#235)

If you are trying to recover a cluster that was deleted, the current
code will not re-apply the ConfigMap: the template is already rendered,
so the kubectl command is never triggered.

This change adds the cluster endpoint as a trigger (the endpoint should
be different when spinning up a new cluster, even one with the same
name), so we force a re-render and cause the kubectl command to run.
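The fix relies on Terraform's `null_resource` triggers: when any value in the `triggers` map changes between runs, the resource is replaced and its provisioner re-executes. A minimal sketch of the pattern in this module's 0.11-era HCL syntax (the `local-exec` command shown here is illustrative, not the module's exact retry loop):

```hcl
resource "null_resource" "update_config_map_aws_auth" {
  # If any value in `triggers` changes, the null_resource is replaced
  # and the provisioner below runs again.
  triggers {
    # Re-run when the rendered ConfigMap changes...
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"

    # ...and also when the cluster endpoint changes. A recreated cluster
    # gets a new endpoint even if it keeps the same name, so this forces
    # the apply against the fresh cluster.
    endpoint = "${aws_eks_cluster.this.endpoint}"
  }

  provisioner "local-exec" {
    # Illustrative command; the real module applies the rendered
    # ConfigMap with kubectl and retries on failure.
    command = "kubectl apply -f ${path.module}/config-map-aws-auth.yaml"
  }

  count = "${var.manage_aws_auth ? 1 : 0}"
}
```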
Authored by Erik Lattimore on 2019-01-15 13:14:52 +02:00
Committed by Max Williams
parent 91eb56f4aa
commit 03c223131f
2 changed files with 2 additions and 1 deletion


@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Write your awesome addition here (by @you)
 ### Changed
+- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
 - Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
 - Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
 - Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)


@@ -14,6 +14,7 @@ resource "null_resource" "update_config_map_aws_auth" {
   triggers {
     config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    endpoint            = "${aws_eks_cluster.this.endpoint}"
   }

   count = "${var.manage_aws_auth ? 1 : 0}"