Merge pull request #32 from terraform-aws-modules/release/1.1.0

releasing 1.1.0
This commit is contained in:
Brandon J. O'Connor
2018-06-25 01:26:49 -07:00
committed by GitHub
5 changed files with 55 additions and 27 deletions

View File

@@ -5,11 +5,21 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/) and this
project adheres to [Semantic Versioning](http://semver.org/).
## [[v1.1.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v1.0.0...v1.1.0)] - 2018-06-25
### Added
- new variable `worker_sg_ingress_from_port` allows changing the minimum port number on which pods will accept communication (Thanks, @ilyasotkov 👏).
- expanded on worker example to show how multiple worker autoscaling groups can be created.
- IPv4 is used explicitly to resolve testing from IPv6 networks (thanks, @tsub 🙏).
- Configurable public IP attachment and ssh keys for worker groups. Defaults defined in `worker_group_defaults`. Nice, @hatemosphere 🌂
- `worker_iam_role_name` now an output. Sweet, @artursmet 🕶️
### Changed
- IAM test role repaired by @lcharkiewicz 💅
- `kube-proxy` restart no longer needed in userdata. Good catch, @hatemosphere 🔥
- worker ASG reattachment wasn't possible when using `name`. Moved to `name_prefix` to allow recreation of resources. Kudos again, @hatemosphere 🐧
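The `name` → `name_prefix` change follows a common Terraform pattern: with a fixed `name`, a replacement launch configuration cannot be created before the old one is destroyed, so the resource cannot be recreated in place. A minimal sketch of the pattern (hypothetical values, not the module's actual code):

```hcl
resource "aws_launch_configuration" "workers" {
  # name_prefix lets Terraform create the replacement with a new unique
  # name before destroying the old launch configuration.
  name_prefix   = "my-cluster-workers-" # hypothetical prefix
  image_id      = "ami-12345678"        # hypothetical AMI
  instance_type = "t2.small"

  lifecycle {
    create_before_destroy = true
  }
}
```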
## [[v1.0.0](https://github.com/terraform-aws-modules/terraform-aws-eks/compare/v0.2.0...v1.0.0)] - 2018-06-11

View File

@@ -15,6 +15,7 @@ Read the [AWS docs on EKS to get connected to the k8s dashboard](https://docs.aw
- You want to create an EKS cluster and an autoscaling group of workers for the cluster.
- You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
- You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources.
- If using the default variable value (`true`) for `configure_kubectl_session`, it's required that both [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) (>=1.10) and [`heptio-authenticator-aws`](https://github.com/heptio/authenticator#4-set-up-kubectl-to-use-heptio-authenticator-for-aws-tokens) are installed and on your shell's PATH.
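A quick sanity check (a sketch, not part of the module) to confirm both required tools are installed and on PATH before relying on `configure_kubectl_session`:

```shell
# Report whether each tool required by configure_kubectl_session is on PATH.
for tool in kubectl heptio-authenticator-aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND (install it before running terraform apply)"
  fi
done
```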
## Usage example
@@ -31,11 +32,11 @@ module "eks" {
}
```
## Release schedule
Generally the maintainers will try to release the module once every 2 weeks to
keep up with PR additions. If particularly pressing changes are added or maintainers
come up with the spare time (hah!), releases may happen more often.
## Testing
@@ -92,20 +93,20 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
## Inputs

| Name | Description | Type | Default | Required |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :----: | :------: | :------: |
| cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | - | yes |
| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
| cluster_version | Kubernetes version to use for the EKS cluster. | string | `1.10` | no |
| config_output_path | Determines where config files are placed when using configure_kubectl_session, if you want them outside the current working directory. | string | `./` | no |
| configure_kubectl_session | Configure the current session's kubectl to use the instantiated EKS cluster. | string | `true` | no |
| subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
| tags | A map of tags to add to all resources. | string | `<map>` | no |
| vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | `<list>` | no |
| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | string | `` | no |
| worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | string | `1025` | no |
| workers_group_defaults | Default values for target groups as defined by the list of maps. | map | `<map>` | no |
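The new `worker_sg_ingress_from_port` input is passed like any other module variable; a minimal sketch with hypothetical subnet and VPC IDs (not a complete configuration):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "1.1.0"

  cluster_name = "example"
  subnets      = ["subnet-aaaa1111", "subnet-bbbb2222"] # hypothetical IDs
  vpc_id       = "vpc-cccc3333"                         # hypothetical ID

  # Allow pods to accept traffic below the default minimum of 1025,
  # e.g. when a pod exposes port 443 directly.
  worker_sg_ingress_from_port = "443"
}
```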
## Outputs
@@ -118,6 +119,6 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
| cluster_version | The Kubernetes server version for the EKS cluster. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubeconfig | kubectl config file contents for this EKS cluster. |
| worker_iam_role_name | IAM role name attached to EKS workers. |
| worker_security_group_id | Security group ID attached to the EKS workers. |
| workers_asg_arns | ARNs of the autoscaling groups containing workers. |

View File

@@ -16,12 +16,28 @@ data "aws_availability_zones" "available" {}
locals {
  cluster_name = "test-eks-${random_string.suffix.result}"
# the commented out worker group list below shows an example of how to define
# multiple worker groups of differing configurations
# worker_groups = "${list(
# map("asg_desired_capacity", "2",
# "asg_max_size", "10",
# "asg_min_size", "2",
# "instance_type", "m4.xlarge",
# "name", "worker_group_a",
# ),
# map("asg_desired_capacity", "1",
# "asg_max_size", "5",
# "asg_min_size", "1",
# "instance_type", "m4.2xlarge",
# "name", "worker_group_b",
# ),
# )}"
  worker_groups = "${list(
    map("instance_type","t2.small",
      "additional_userdata","echo foo bar"
    ),
  )}"
  tags = "${map("Environment", "test",
    "GithubRepo", "terraform-aws-eks",
    "GithubOrg", "terraform-aws-modules",

View File

@@ -16,6 +16,7 @@
** You want to create an EKS cluster and an autoscaling group of workers for the cluster.
** You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
** You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources.
** If using the default variable value (`true`) for `configure_kubectl_session`, it's required that both [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) (>=1.10) and [`heptio-authenticator-aws`](https://github.com/heptio/authenticator#4-set-up-kubectl-to-use-heptio-authenticator-for-aws-tokens) are installed and on your shell's PATH.
* ## Usage example
@@ -32,11 +33,11 @@
* }
* ```
* ## Release schedule
* Generally the maintainers will try to release the module once every 2 weeks to
* keep up with PR additions. If particularly pressing changes are added or maintainers
* come up with the spare time (hah!), releases may happen more often.
* ## Testing

View File

@@ -1 +1 @@
v1.1.0