* Added support for updating the aws-auth ConfigMap when `manage_aws_auth` is set to false,
and a `write_aws_auth_config` variable to skip creating the aws_auth files
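A rough sketch of how those two flags can gate the aws-auth pieces, in the Terraform 0.11 style the module used at the time; the resource names, defaults, and output path here are illustrative, not the module's exact code:

```hcl
variable "manage_aws_auth" {
  description = "Whether the module should apply the aws-auth ConfigMap via kubectl"
  default     = true
}

variable "write_aws_auth_config" {
  description = "Whether to write the rendered aws-auth config files to disk"
  default     = true
}

variable "config_output_path" {
  description = "Where to write the kubeconfig and aws-auth files (illustrative)"
  default     = "./"
}

# Hypothetical resources: only the count-gating pattern is the point.
resource "local_file" "config_map_aws_auth" {
  count    = "${var.write_aws_auth_config ? 1 : 0}"
  content  = "${data.template_file.config_map_aws_auth.rendered}"
  filename = "${var.config_output_path}config-map-aws-auth.yaml"
}

resource "null_resource" "update_config_map_aws_auth" {
  count = "${var.manage_aws_auth ? 1 : 0}"

  provisioner "local-exec" {
    command = "kubectl apply -f ${var.config_output_path}config-map-aws-auth.yaml --kubeconfig ${var.config_output_path}kubeconfig"
  }
}
```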
* Add CHANGELOG
* Changed writing config file process for Windows compatibility.
* Apply terraform-docs and terraform fmt
* Fixed zsh-specific syntax
* Fixed CHANGELOG.md
If you are trying to recover a cluster that was deleted, the current
code will not re-apply the ConfigMap because it is already rendered, so
the kubectl command won't get triggered.
This change adds the cluster endpoint (which should be different when
spinning up a new cluster, even with the same name) so that a re-render
is forced and the kubectl command runs.
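Sketched, the fix amounts to including the endpoint in the null_resource triggers; resource and data source names here are illustrative:

```hcl
resource "null_resource" "update_config_map_aws_auth" {
  # Including the endpoint means a recreated cluster (new endpoint, same name)
  # still changes the triggers, so kubectl runs again.
  triggers {
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"
    endpoint            = "${aws_eks_cluster.this.endpoint}"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f config-map-aws-auth.yaml --kubeconfig kubeconfig"
  }
}
```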
* Added map_roles_count and user_roles_count (#1)
* Update readme for new vars
* updated tests to include count
* fix syntax error
* updated changelog
* Added map_accounts_count variable for consistency
* Fix counts in example and use latest terraform-docs to generate README
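Since Terraform 0.11 cannot derive `count` from a computed list length, callers pass the sizes alongside the lists. A hypothetical module invocation (the source and all values are placeholders):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "example"
  subnets      = "${module.vpc.private_subnets}"
  vpc_id       = "${module.vpc.vpc_id}"

  # The *_count variables mirror the lengths of the lists they accompany.
  map_roles          = "${var.map_roles}"
  map_roles_count    = 1
  map_accounts       = "${var.map_accounts}"
  map_accounts_count = 1
}
```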
* Add wait_nodes_max_tries to wait for nodes to be available before applying the Kubernetes configuration
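A minimal sketch of how such a variable might be declared and fed into the wait-nodes-ready.tpl template; the default value and template path are assumptions:

```hcl
variable "wait_nodes_max_tries" {
  description = "Maximum number of attempts when waiting for worker nodes to become Ready before applying the Kubernetes configuration"
  default     = 10
}

data "template_file" "wait_nodes_ready" {
  template = "${file("${path.module}/templates/wait-nodes-ready.tpl")}"

  vars {
    wait_nodes_max_tries = "${var.wait_nodes_max_tries}"
  }
}
```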
* Format variables.tf and aws_auth.tf
* Fix template expansion for wait-nodes-ready.tpl
* Ensure that the kubeconfig is created before it is used
* Cleanup wait-nodes-ready script
* Simplify logic to retry application of kubernetes config if failed
* Revert file permission change
* allow creating an IAM role for each worker group
* moved change from 'changed' to 'added'
* create multiple roles, not just profiles
* fix config_map_aws_auth generation
* don't duplicate worker-role templating
* specify ARNs for worker groups individually
TODO: fix aws_auth ConfigMap
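The per-worker-group role pattern described above, sketched in Terraform 0.11 style; `worker_group_count`, `cluster_name`, and the resource names are illustrative:

```hcl
variable "cluster_name" {}

variable "worker_group_count" {
  default = 1
}

data "aws_iam_policy_document" "workers_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# One IAM role and one instance profile per worker group, instead of a single shared pair.
resource "aws_iam_role" "workers" {
  count              = "${var.worker_group_count}"
  name_prefix        = "${var.cluster_name}"
  assume_role_policy = "${data.aws_iam_policy_document.workers_assume_role_policy.json}"
}

resource "aws_iam_instance_profile" "workers" {
  count       = "${var.worker_group_count}"
  name_prefix = "${var.cluster_name}"
  role        = "${element(aws_iam_role.workers.*.name, count.index)}"
}
```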
* fixed AWS auth
* fix aws_iam_instance_profile.workers name
fix iam_instance_profile fallback
* fix outputs
* fix iam_instance_profile calculation
* hopefully fix aws auth configmap generation
* manually fill out the remainder of the ARN
* remove depends_on in worker_role_arns template file
this was causing resources to be recreated every time
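The gist of the fix, sketched with the illustrative names from the sketch above: rely on the implicit dependency created by interpolating the role ARN instead of an explicit `depends_on`, which made the data source (and everything built from it) re-render on every run:

```hcl
data "template_file" "worker_role_arns" {
  count    = "${var.worker_group_count}"
  template = "${file("${path.module}/templates/worker-role.tpl")}"

  # No depends_on: interpolating the ARN below already orders this after the
  # IAM roles, without forcing a re-render (and resource recreation) on every apply.
  vars {
    worker_role_arn = "${element(aws_iam_role.workers.*.arn, count.index)}"
  }
}
```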
* fmt
* fix typo, move iam_role_id default to defaults map