* Adding a minimum communication rule
The docs at https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html specify that port 10250 is needed at a minimum for communication between the control plane and the worker nodes. If you set `worker_sg_ingress_from_port` to something like `30000`, this minimum communication is never established (see the sketch after this list).
* Adding description to CHANGELOG.md
* Adjusting the naming of the resources
* Ensuring creation is conditional on the value of `worker_sg_ingress_from_port`
* Fix a mistake: the threshold should be greater than port 10250
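A minimal sketch of the conditional rule described above, assuming the module's worker and cluster security groups are named `workers` and `cluster` (names illustrative, not necessarily the module's actual identifiers):
```hcl
resource "aws_security_group_rule" "workers_ingress_cluster_kubelet" {
  # Hypothetical condition: only create this rule when the user-supplied
  # ingress range starts above 10250 and would otherwise cut off control
  # plane -> kubelet traffic.
  count = "${var.worker_sg_ingress_from_port > 10250 ? 1 : 0}"

  description              = "Allow cluster control plane to reach kubelet on workers"
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 10250
  to_port                  = 10250
  security_group_id        = "${aws_security_group.workers.id}"
  source_security_group_id = "${aws_security_group.cluster.id}"
}
```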
Example usage: we want our nodes to be able to update Route53 records
so that we can use external-dns.
```hcl
data "template_file" "eks_worker_additional_route53_policy" {
template = "${file("iam/route53_policy.json.tpl")}"
}
resource "aws_iam_policy" "eks_worker_additional_route53_policy" {
description = "Allow nodes to update our zone"
name = "${module.k8s_cluster01_label.id}-additional-route53-policy"
policy = "${data.template_file.eks_worker_additional_route53_policy.rendered}"
}
```
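The contents of `iam/route53_policy.json.tpl` are not shown in the original; for illustration, the upstream external-dns docs suggest a policy along these lines (adjust the hosted zone resource to your setup):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```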
which defines the policy; then, in the EKS module:
```hcl
module "cluster01" {
cluster_name = "cluster01"
<snip>
workers_addtional_policies = [
"${aws_iam_policy.eks_worker_additional_route53_policy.arn}"
]
workers_addtional_policies_count = 1
<snip>
```
This enables attaching additional policies to the cluster, e.g. for
using encrypted volumes.
Signed-off-by: Steffen Pingel <steffen.pingel@tasktop.com>
* Added support for updating the aws-auth ConfigMap when `manage_aws_auth` is set to `false`,
and a `write_aws_auth_config` variable to optionally skip creating the aws_auth files (see the sketch after this list)
* Add CHANGELOG
* Changed the config file writing process for Windows compatibility.
* Apply terraform-docs and terraform fmt
* Fixed zsh-specific syntax
* Fixed CHANGELOG.md
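A hedged sketch of how these two options might be combined from the caller's side; `manage_aws_auth` and `write_aws_auth_config` are the variables named above, and the values are illustrative:
```hcl
module "cluster01" {
  cluster_name = "cluster01"

  # <snip>

  # Do not apply the aws-auth ConfigMap from within the module...
  manage_aws_auth = false

  # ...and also skip writing the aws_auth config files to disk.
  write_aws_auth_config = false
}
```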
* Allow per worker group ASG tags to be set
* Format
* Set correct defaults
* Implement a hack that falls back to the first item in the list when no matching item exists for the worker group
* Use a map from worker group name to tags, to get around the issue that list indexing does not work with a list of lists (see the sketch after this list)
* Format
* Cleanup
* Fix sample
* README
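For illustration, the caller-side shape such a map might take; the exact variable name (`worker_group_tags` here) and the `default` fallback key are assumptions based on the bullets above:
```hcl
module "cluster01" {
  # <snip>

  worker_groups = [
    {
      name          = "frontend"
      instance_type = "m4.large"
    },
  ]

  # Hypothetical shape: map from worker group name to that group's extra ASG
  # tags; the "default" entry covers groups without a matching key.
  worker_group_tags = {
    default = []

    frontend = [
      {
        key                 = "team"
        value               = "frontend"
        propagate_at_launch = true
      },
    ]
  }
}
```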
If you are trying to recover a cluster that was deleted, the current
code will not re-apply the ConfigMap, because it has already been
rendered and so the kubectl command won't be triggered.
This change adds the cluster endpoint (which should be different when
spinning up a new cluster, even one with the same name) so we force a
re-render and cause the kubectl command to run.
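Conceptually the fix looks something like the following sketch; the resource and data source names are illustrative, not necessarily the module's actual identifiers:
```hcl
resource "null_resource" "update_config_map_aws_auth" {
  provisioner "local-exec" {
    command = "kubectl apply -f config-map-aws-auth_${var.cluster_name}.yaml --kubeconfig kubeconfig_${var.cluster_name}"
  }

  triggers {
    config_map_rendered = "${data.template_file.config_map_aws_auth.rendered}"

    # A recreated cluster gets a new endpoint even when it reuses the old
    # name, so this trigger forces the kubectl command to run again after
    # a recovery.
    endpoint = "${aws_eks_cluster.this.endpoint}"
  }
}
```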