Fix issue where ConfigMap isn't applied to new cluster (#235)

If you try to recover a cluster that was deleted, the current code will
not re-apply the ConfigMap: the template is already rendered, so the
kubectl command is never triggered.

This change adds the cluster endpoint (which should differ when spinning
up a new cluster, even one with the same name) as a trigger, forcing a
re-render and causing the kubectl command to run.
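The mechanism relies on Terraform's `null_resource`: its provisioners re-run whenever any value in the `triggers` map changes. A minimal sketch of the pattern, using the resource names from this module's diff (the `local-exec` command shown is illustrative, not the module's actual provisioner):

```hcl
resource "null_resource" "update_config_map_aws_auth" {
  # Re-run the provisioner when either the rendered ConfigMap or the
  # cluster endpoint changes. A recreated cluster gets a new endpoint,
  # so the kubectl command is forced to run even when the rendered
  # ConfigMap text is byte-for-byte identical to the previous one.
  triggers {
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
    endpoint            = "${aws_eks_cluster.this.endpoint}"
  }

  count = "${var.manage_aws_auth ? 1 : 0}"

  # Hypothetical provisioner for illustration only.
  provisioner "local-exec" {
    command = "kubectl apply -f config-map-aws-auth.yaml"
  }
}
```

Without the `endpoint` trigger, recreating a cluster under the same name leaves `config_map_rendered` unchanged, so Terraform sees nothing to do and skips the provisioner.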
Erik Lattimore
2019-01-15 13:14:52 +02:00
committed by Max Williams
parent 91eb56f4aa
commit 03c223131f
2 changed files with 2 additions and 1 deletion


@@ -14,6 +14,7 @@ resource "null_resource" "update_config_map_aws_auth" {
   triggers {
     config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    endpoint = "${aws_eks_cluster.this.endpoint}"
   }
   count = "${var.manage_aws_auth ? 1 : 0}"