mirror of
https://github.com/ysoftdevs/terraform-aws-eks.git
synced 2026-04-21 08:11:17 +02:00
Fix issue where ConfigMap isn't applied to new cluster (#235)
If you are trying to recover a cluster that was deleted, the current code will not re-apply the ConfigMap: the ConfigMap template is already rendered, so the kubectl command is never triggered. This change adds the cluster endpoint (which should differ when a new cluster is spun up, even one with the same name) to the triggers, forcing a re-render and causing the kubectl command to run.
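The mechanism described above can be sketched in Terraform roughly as follows. This is a hypothetical illustration, not the module's exact code: the resource, data source, and attribute names (`null_resource.update_config_map_aws_auth`, `data.template_file.config_map_aws_auth`, `aws_eks_cluster.this`) are assumptions.

```hcl
# Sketch only: names below are assumptions, not the module's exact code.
resource "null_resource" "update_config_map_aws_auth" {
  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/config-map-aws-auth.yaml --kubeconfig ${path.module}/kubeconfig"
  }

  triggers = {
    # Previously the only trigger was the rendered ConfigMap, so recreating
    # a cluster with identical config did not re-run the provisioner.
    config_map_rendered = data.template_file.config_map_aws_auth.rendered

    # Adding the cluster endpoint forces a re-run whenever the cluster is
    # recreated, since a new cluster gets a new endpoint even with the
    # same name.
    endpoint = aws_eks_cluster.this.endpoint
  }
}
```

Because `triggers` is hashed into the resource's state, any change to the endpoint taints the `null_resource` and re-executes the `local-exec` provisioner on the next apply.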
committed by Max Williams
parent 91eb56f4aa
commit 03c223131f
@@ -12,7 +12,7 @@ project adheres to [Semantic Versioning](http://semver.org/).
- Write your awesome addition here (by @you)

### Changed

- Updated the `update_config_map_aws_auth` resource to trigger when the EKS cluster endpoint changes. This likely means that a new cluster was spun up so our ConfigMap won't exist (fixes #234) (by @elatt)
- Removed invalid action from worker_autoscaling iam policy (by @marcelloromani)
- Fixed zsh-specific syntax in retry loop for aws auth config map (by @marcelloromani)
- Fix: fail deployment if applying the aws auth config map still fails after 10 attempts (by @marcelloromani)