* Add destroy-time flag
* Update changelog
* Fix cluster count
* Fix docs
* Fix outputs
* Fix unsupported attribute on cluster_certificate_authority_data output
Co-Authored-By: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
* Remove unnecessary flatten from cluster_endpoint output
Co-Authored-By: Daniel Piddock <33028589+dpiddockcmp@users.noreply.github.com>
* Improve description of var.enabled
* Fix errors manifesting when used on an existing cluster
* Update README.md
* Renamed destroy-time flag
* Revert removal of changelog addition entry
* Update flag name in readme
* Update flag variable name
* Update cluster referencing for consistency
* Update flag name to `create_eks`
* Fixed incorrect count-based reference to aws_eks_cluster.this (there's only one)
* Replaced all incorrect `aws_eks_cluster.this[count.index]` references (there will only ever be one cluster, so `[0]` is used).
* Changelog update, explicitly mentioning flag
* Fixed interpolation deprecation warning
* Fixed outputs to support conditional cluster
* Applied create_eks to aws_auth.tf
* Removed unused variable. Updated Changelog. Formatting.
* Fixed references to aws_eks_cluster.this[0] that would raise errors when setting create_eks to false whilst having launch templates or launch configurations configured.
* Readme and example updates.
* Revert "Readme and example updates."
This reverts commit 18a0746355e136010ad54858a1b518406f6a3638.
* Updated readme section of conditionally creation with provider example.
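The conditional-creation work in the bullets above boils down to a `count` guard on every resource plus `[0]` indexing wherever the cluster is referenced. A minimal sketch, with simplified attributes (the `aws_iam_role.cluster` reference and the output guard are illustrative, not the module's exact code):

```hcl
variable "create_eks" {
  description = "Controls if EKS resources should be created"
  type        = bool
  default     = true
}

resource "aws_eks_cluster" "this" {
  # Terraform 0.12 conditional creation: zero or one cluster.
  count = var.create_eks ? 1 : 0

  name     = var.cluster_name
  role_arn = aws_iam_role.cluster[0].arn

  vpc_config {
    subnet_ids = var.subnets
  }
}

# With count on the resource, references must index a single instance:
# aws_eks_cluster.this[0], never aws_eks_cluster.this[count.index].
# Outputs need a fallback so they don't error when create_eks = false.
output "cluster_endpoint" {
  value = element(concat(aws_eks_cluster.this[*].endpoint, [""]), 0)
}
```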
* Added conditions to node_groups.
* Fixed reversed map_roles check
* Update aws_auth.tf
Revert this due to https://github.com/terraform-aws-modules/terraform-aws-eks/pull/611
* Remove empty `[]` appended to the mapRoles object in aws-auth
Simply having `${yamlencode(var.map_roles)}` in mapRoles for aws-auth
creates an empty `[]` at the end after adding the default roles.
Changed it so the user-supplied roles are only added when the list is not empty.
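The fix can be sketched as guarding the `yamlencode` call so nothing is appended when the variable is empty (the local name here is illustrative, not the module's):

```hcl
locals {
  # An unconditional yamlencode(var.map_roles) renders a stray "[]"
  # after the default worker role entries in mapRoles; only render
  # the user-supplied roles when there are any.
  extra_map_roles = length(var.map_roles) > 0 ? yamlencode(var.map_roles) : ""
}
```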
* Update aws_auth.tf
This commit changes the way aws-auth is managed. Previously, a local file
was used to generate the template and a null resource to apply it. This
is now switched to the Terraform Kubernetes provider.
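Managing aws-auth through the Kubernetes provider looks roughly like this (a minimal sketch: it assumes an `aws_eks_cluster_auth` data source named `this`, and the real module renders richer role mappings than shown):

```hcl
provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

# The ConfigMap is now a first-class Terraform resource instead of a
# rendered file applied via a null_resource running kubectl.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles    = yamlencode(var.map_roles)
    mapUsers    = yamlencode(var.map_users)
    mapAccounts = yamlencode(var.map_accounts)
  }
}
```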
* Add Windows support
* Assign eks:kube-proxy-windows group to worker nodes
* Add Instructions for adding Windows Workers at FAQ.md
* Remove unnecessary variables from userdata_windows.tpl
* Update CHANGELOG.md
* Create ASG tags via `for` expression, a utility from Terraform 0.12
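The Terraform 0.12 `for` expression replaces hand-written per-tag blocks with a loop over a map. A sketch with illustrative variable names (the real resource carries many more arguments):

```hcl
resource "aws_autoscaling_group" "workers" {
  # ... min_size, max_size, launch configuration, etc. omitted ...

  # Build one propagated tag block per map entry instead of
  # hard-coding a fixed number of tag blocks.
  dynamic "tag" {
    for_each = [
      for k, v in var.tags : {
        key                 = k
        value               = v
        propagate_at_launch = true
      }
    ]

    content {
      key                 = tag.value.key
      value               = tag.value.value
      propagate_at_launch = tag.value.propagate_at_launch
    }
  }
}
```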
* Updated support for mixed ASG in worker_groups_launch_template variable
* Updated launch_template example to include spot and mixed ASG with worker_groups_launch_template variable
* Removed old config
* Removed workers_launch_template_mixed.tf file, added support for mixed/spot in workers_launch_template variable
* Updated examples/spot_instances/main.tf with Mixed Spot and ondemand instances
* Removed launch_template_mixed from relevant files
* Updated README.md file
* Removed workers_launch_template.tf.bkp
* Fixed case with null on_demand_allocation_strategy and Spot allocation
* Fixed workers_launch_template.tf, covered spot instances via Launch Template
* Support map users and roles to multiple groups
* Simplify code by renaming `user_arn` to `userarn` and `role_arn` to `rolearn`
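After the rename, the variable keys match what the aws-auth ConfigMap itself expects (`rolearn`/`userarn`), and each entry carries a list of groups instead of a single group:

```hcl
variable "map_roles" {
  description = "Additional IAM roles to add to the aws-auth configmap."
  type = list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))
  default = []
}

variable "map_users" {
  description = "Additional IAM users to add to the aws-auth configmap."
  type = list(object({
    userarn  = string
    username = string
    groups   = list(string)
  }))
  default = []
}
```

Because the keys match the ConfigMap schema, the values can be passed straight through `yamlencode` without renaming fields first.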
* Next version should be 6.x because this PR is a breaking change.
* Update example variables.tf
* Change indent to 2
* Fix map-aws-auth.yaml possibly being invalid YAML.
* run terraform upgrade tool
* fix post upgrade TODOs
* use strict typing for variables
* upgrade examples, point them at VPC module tf 0.12 PR
* remove unnecessary `coalesce()` calls
coalesce(lookup(map, key, ""), default) -> lookup(map, key, default)
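In before/after form (a generic sketch; the attribute and local names are illustrative):

```hcl
# Before: empty-string sentinel, then coalesce() to fall back to the default
instance_type = coalesce(lookup(var.worker_group, "instance_type", ""), local.defaults["instance_type"])

# After: lookup() already takes a default, so coalesce() is redundant
instance_type = lookup(var.worker_group, "instance_type", local.defaults["instance_type"])
```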
* Fix autoscaling_enabled broken (#1)
* always set a value for tags, fix coalescelist calls
* always set a value for these tags
* fix tag value
* fix tag value
* default element available
* added default value
* added a general default
Without this default, Terraform throws an error when running a destroy.
* Fix CI
* Change vpc module back to `terraform-aws-modules/vpc/aws` in example
* Update CHANGELOG.md
* Change type of variable `cluster_log_retention_in_days` to number
* Remove `xx_count` variables
* Actual lists instead of strings with commas
* Remove `xx_count` variable from docs
* Replace element with list indexing
* Change variable `worker_group_tags` to an attribute of worker_group
* Fix workers_launch_template_mixed tags
* Change override_instance_type_x variables to list.
* Update CHANGELOG.md
* adding 3 examples
* removing old example
* updating PR template
* fix this typo
* update after renaming default example
* add missing launch_template_mixed stuff to aws_auth
* fix 2 examples with public subnets
* update changelog for new minor release
* Adding new mixed type of worker group with instance overrides and mixed instances policy
* moving all count and lifecycle rule parameters to top/bottom
* adding custom IAM parts
* updating doc with new options
* fixes for spot instances
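A mixed/spot worker group defined through a launch template hinges on the ASG's `mixed_instances_policy` block. A sketch with illustrative values (the module derives these from the worker-group map rather than hard-coding them):

```hcl
resource "aws_autoscaling_group" "workers_launch_template" {
  # ... min_size, max_size, subnets, tags, etc. omitted ...

  mixed_instances_policy {
    instances_distribution {
      # All capacity from Spot, spread across 4 cheapest pools.
      on_demand_base_capacity                  = 0
      on_demand_percentage_above_base_capacity = 0
      spot_allocation_strategy                 = "lowest-price"
      spot_instance_pools                      = 4
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.workers.id
        version            = "$Latest"
      }

      # Instance-type overrides let the ASG mix types.
      override {
        instance_type = "m5.large"
      }
      override {
        instance_type = "m5a.large"
      }
    }
  }
}
```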
* Added support for updating the aws-auth ConfigMap when `manage_aws_auth` is set to false,
and a `write_aws_auth_config` variable to optionally skip creating the aws_auth files
* Add CHANGELOG
* Changed writing config file process for Windows compatibility.
* Apply terraform-docs and terraform fmt
* Fixed zsh-specific syntax
* Fixed CHANGELOG.md
If you are trying to recover a cluster that was deleted, the current
code will not re-apply the ConfigMap because the template is already
rendered, so the kubectl command won't be triggered.
This change adds the cluster endpoint (which should be different when
spinning up a new cluster, even with the same name) so we force a
re-render and cause the kubectl command to run.
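The fix can be sketched as a `null_resource` whose triggers include the cluster endpoint; a recreated cluster gets a new endpoint even with the same name, so the provisioner re-runs (the `template_file` data source name and the `local.*` paths are illustrative):

```hcl
resource "null_resource" "update_config_map_aws_auth" {
  triggers = {
    # Re-run when the rendered ConfigMap changes...
    config_map_rendered = data.template_file.config_map_aws_auth.rendered
    # ...and also when the cluster itself is recreated: a new cluster
    # has a new endpoint even if it reuses the old name.
    endpoint = aws_eks_cluster.this.endpoint
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${local.config_map_path} --kubeconfig ${local.kubeconfig_path}"
  }
}
```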
* Added map_roles_count and user_roles_count (#1)
* Update readme for new vars
* updated tests to include count
* fix syntax error
* updated changelog
* Added map_accounts_count variable for consistency
* Fix counts in example and use latest terraform-docs to generate readme
* Add wait_nodes_max_tries to wait for nodes to be available before applying the kubernetes configurations
* Format variables.tf and aws_auth.tf
* Fix template expansion for wait-nodes-ready.tpl
* Ensuring that kubeconfig is created before its use
* Cleanup wait-nodes-ready script
* Simplify logic to retry application of kubernetes config if failed
* Revert file permission change
* allow creating an IAM role for each worker group
* moved change from 'changed' to 'added'
* create multiple roles not just profiles
* fix config_map_aws_auth generation
* don't duplicate worker-role templating
* specify ARNs for worker groups individually
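Creating one IAM role and instance profile per worker group (rather than a single shared profile) can be sketched as follows, in current splat syntax; the naming scheme and the assume-role policy document reference are illustrative:

```hcl
# One role per worker group, all trusting the EC2 service.
resource "aws_iam_role" "workers" {
  count              = length(var.worker_groups)
  name_prefix        = "${var.cluster_name}-worker-${count.index}"
  assume_role_policy = data.aws_iam_policy_document.workers_assume_role_policy.json
}

resource "aws_iam_instance_profile" "workers" {
  count       = length(var.worker_groups)
  name_prefix = aws_iam_role.workers[count.index].name
  role        = aws_iam_role.workers[count.index].name
}

# The aws-auth ConfigMap must then map every role ARN,
# not just the first one.
locals {
  worker_role_arns = aws_iam_role.workers[*].arn
}
```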
TODO: fix aws_auth configmap
* fixed AWS auth
* fix aws_iam_instance_profile.workers name
fix iam_instance_profile fallback
* fix outputs
* fix iam_instance_profile calculation
* hopefully fix aws auth configmap generation
* manually fill out remainder of arn
* remove depends_on in worker_role_arns template file
this was causing resources to be recreated every time
* fmt
* fix typo, move iam_role_id default to defaults map