From 03c223131fc0a7cf56917120ba8bbdb5c38622b0 Mon Sep 17 00:00:00 2001
From: Erik Lattimore
Date: Tue, 15 Jan 2019 13:14:52 +0200
Subject: [PATCH] Fix issue where ConfigMap isn't applied to new cluster (#235)

If you are trying to recover a cluster that was deleted, the current code
will not re-apply the ConfigMap: the rendered template is unchanged, so the
kubectl command won't get triggered. This change adds the cluster endpoint
(which should be different when spinning up a new cluster, even with the
same name) as a trigger, so the null_resource re-runs and the kubectl
command is executed against the new cluster.
---
 CHANGELOG.md | 2 +-
 aws_auth.tf  | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index bcaad79..efa5220 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Write your awesome addition here (by @you)
 
 ### Changed
-
+- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
 - Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
 - Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
 - Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)

diff --git a/aws_auth.tf b/aws_auth.tf
index e7af046..34d14c8 100644
--- a/aws_auth.tf
+++ b/aws_auth.tf
@@ -14,6 +14,7 @@ resource "null_resource" "update_config_map_aws_auth" {
 
   triggers {
     config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
+    endpoint            = "${aws_eks_cluster.this.endpoint}"
   }
 
   count = "${var.manage_aws_auth ? 1 : 0}"
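
For reference, a minimal sketch of how the null_resource reads after this
change (Terraform 0.11 syntax; the local-exec command below is an
illustrative placeholder, not the module's actual apply/retry script):

resource "null_resource" "update_config_map_aws_auth" {
  # Placeholder command: the real module renders a kubeconfig and the
  # aws-auth ConfigMap to files and applies them with a retry loop.
  provisioner "local-exec" {
    command = "kubectl apply -f aws-auth-cm.yaml"
  }

  triggers {
    # Existing trigger: re-run when the rendered ConfigMap content changes.
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"

    # New trigger: a recreated cluster gets a new endpoint even if it keeps
    # the same name, so the resource is replaced and kubectl runs again.
    endpoint = "${aws_eks_cluster.this.endpoint}"
  }

  count = "${var.manage_aws_auth ? 1 : 0}"
}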