mirror of
https://github.com/ysoftdevs/terraform-aws-eks.git
synced 2026-03-22 01:19:05 +01:00
docs: Re-organize documentation for easier navigation and support for references in issues/PRs (#1981)
12
docs/README.md
Normal file
@@ -0,0 +1,12 @@
# Documentation

## Table of Contents

- [Frequently Asked Questions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md)
- [Compute Resources](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md)
- [IRSA Integration](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/irsa-integration.md)
- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md)
- [Network Connectivity](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md)
- Upgrade Guides
  - [Upgrade to v17.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-17.0.md)
  - [Upgrade to v18.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-18.0.md)
65
docs/UPGRADE-17.0.md
Normal file
@@ -0,0 +1,65 @@
# How to handle the terraform-aws-eks module upgrade

## Upgrade module to v17.0.0 for Managed Node Groups

In this release, we decided to remove the `random_pet` resources in Managed Node Groups (MNG). They were used to recreate MNG if something changed, but they were causing a lot of issues. To upgrade the module without recreating your MNG, you will need to explicitly reuse their previous name and set it in your MNG `name` argument.

1. Run `terraform apply` with the module version v16.2.0
2. Get your worker group names

```shell
~ terraform state show 'module.eks.module.node_groups.aws_eks_node_group.workers["example"]' | grep node_group_name
node_group_name = "test-eks-mwIwsvui-example-sincere-squid"
```

3. Upgrade your module and configure your node groups to use existing names

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.0.0"

  cluster_name    = "test-eks-mwIwsvui"
  cluster_version = "1.20"
  # ...

  node_groups = {
    example = {
      name = "test-eks-mwIwsvui-example-sincere-squid"

      # ...
    }
  }
  # ...
}
```

4. Run `terraform plan`; you should see that only the `random_pet` resources will be destroyed

```shell
Terraform will perform the following actions:

  # module.eks.module.node_groups.random_pet.node_groups["example"] will be destroyed
  - resource "random_pet" "node_groups" {
      - id        = "sincere-squid" -> null
      - keepers   = {
          - "ami_type"                  = "AL2_x86_64"
          - "capacity_type"             = "SPOT"
          - "disk_size"                 = "50"
          - "iam_role_arn"              = "arn:aws:iam::123456789123:role/test-eks-mwIwsvui20210527220853611600000009"
          - "instance_types"            = "t3.large"
          - "key_name"                  = ""
          - "node_group_name"           = "test-eks-mwIwsvui-example"
          - "source_security_group_ids" = ""
          - "subnet_ids"                = "subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx|subnet-xxxxxxxxxxxx"
        } -> null
      - length    = 2 -> null
      - separator = "-" -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.
```

5. If everything looks good to you, run `terraform apply`

After the first apply, we recommend you create a new node group and let the module use the `node_group_name_prefix` (by removing the `name` argument) to generate names and avoid collisions during node group re-creation if needed, because the lifecycle is `create_before_destroy = true`.
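
As a minimal sketch of that recommendation (the group key and attributes here are illustrative assumptions, not taken from the module docs):

```hcl
node_groups = {
  example_v2 = {
    # `name` is intentionally omitted so the module generates one from
    # its name prefix, avoiding collisions when the group is re-created
    instance_types = ["t3.large"]
    # ...
  }
}
```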
588
docs/UPGRADE-18.0.md
Normal file
@@ -0,0 +1,588 @@
# Upgrade from v17.x to v18.x

Please consult the `examples` directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.

Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744 where users have shared the steps/information for their individual configurations. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for everyone, and this issue has been very helpful for users sharing how they were able to upgrade.

## List of backwards incompatible changes

- Launch configuration support has been removed; only launch templates are supported going forward. AWS is no longer adding new features to launch configurations, and their docs state: [`We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.`](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html)
- Support for managing the aws-auth configmap has been removed. This change also removes the dependency on the Kubernetes Terraform provider, the local dependency on aws-iam-authenticator for users, as well as the reliance on the forked HTTP provider to wait and poll on cluster creation. To aid users in this change, an output `aws_auth_configmap_yaml` has been provided which renders the aws-auth configmap necessary to support at least the IAM roles used by the module (additional mapRoles/mapUsers definitions are to be provided by users)
- Support for managing kubeconfig and its associated `local_file` resources has been removed; users can run the awscli-provided `aws eks update-kubeconfig --name <cluster_name>` to update their local kubeconfig as necessary
- The terminology used in the module has been modified to reflect that used by the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html):
  - [AWS EKS Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), `eks_managed_node_groups`, was previously referred to as simply node group, `node_groups`
  - [Self Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/worker.html), `self_managed_node_groups`, was previously referred to as worker group, `worker_groups`
  - [AWS Fargate Profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html), `fargate_profiles`, remains unchanged in terms of naming and terminology
- The three different node group types supported by AWS and the module have been refactored into standalone sub-modules that are both used by the root `eks` module and available for individual, standalone consumption if desired:
  - The previous `node_groups` sub-module is now named `eks-managed-node-group` and provisions a single AWS EKS Managed Node Group per sub-module definition (the previous version used `for_each` to create 0 or more node groups)
    - Additional changes for the `eks-managed-node-group` sub-module over the previous `node_groups` module include:
      - Variable name changes, defined in the `Variable and output changes` section below
      - Support for nearly full control of the IAM role created, or providing the ARN of an existing IAM role
      - Support for nearly full control of the security group created, or providing the ID of an existing security group
      - User data has been revamped and all user data logic moved to the `_user_data` internal sub-module; the local `userdata.sh.tpl` has been removed entirely
  - The previous `fargate` sub-module is now named `fargate-profile` and provisions a single AWS EKS Fargate Profile per sub-module definition (the previous version used `for_each` to create 0 or more profiles)
    - Additional changes for the `fargate-profile` sub-module over the previous `fargate` module include:
      - Variable name changes, defined in the `Variable and output changes` section below
      - Support for nearly full control of the IAM role created, or providing the ARN of an existing IAM role
      - Similar to `eks_managed_node_group_defaults` and `self_managed_node_group_defaults`, a `fargate_profile_defaults` variable has been provided to allow users to control the default configurations for the Fargate profiles created
  - A `self-managed-node-group` sub-module has been created and provisions a single self managed node group (autoscaling group) per sub-module definition
    - Additional changes for the `self-managed-node-group` sub-module over the previous `worker_groups` variable include:
      - The underlying autoscaling group and launch template have been updated to more closely match the [`terraform-aws-autoscaling`](https://github.com/terraform-aws-modules/terraform-aws-autoscaling) module and the features it offers
      - The previous iteration used a count over a list of node group definitions, which was prone to disruptive updates; this is now replaced with a map/`for_each` to align with the EKS managed node group and Fargate profile behaviors/style
- The user data configuration supported across the module has been completely revamped. A new `_user_data` internal sub-module has been created to consolidate all user data configuration in one location, which provides better support for testability (via the [`examples/user_data`](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data) example). The new sub-module supports nearly all possible combinations, including the ability for users to provide their own user data template which will be rendered by the module. See the `examples/user_data` example project for the full plethora of example configuration possibilities; more details on the logic of the design can be found in the [`modules/_user_data`](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data_) directory.
- Resource name changes may cause issues with existing resources. For example, security groups and IAM roles cannot be renamed; they must be recreated. Recreation of these resources may also trigger a recreation of the cluster. To use the legacy (< 18.x) resource naming convention, set `prefix_separator` to `""`.
- Security group usage has been overhauled to provide only the bare minimum network connectivity required to launch a bare bones cluster. See the [security group documentation section](https://github.com/terraform-aws-modules/terraform-aws-eks#security-groups) for more details. Users upgrading to v18.x will want to review the rules they have in place today versus the rules provisioned by the v18.x module, and make any necessary adjustments for their specific workload.
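
Since the module no longer manages the aws-auth configmap, one sketch for consuming the `aws_auth_configmap_yaml` output is to render it to disk with the `local_file` resource so it can be applied out-of-band; the filename and permission below are assumptions, not module conventions:

```hcl
# Sketch only: write the rendered aws-auth configmap to a local file
resource "local_file" "aws_auth" {
  content         = module.eks.aws_auth_configmap_yaml
  filename        = "${path.module}/aws-auth.yaml"
  file_permission = "0600"
}
```

The rendered file can then be applied with `kubectl apply -f aws-auth.yaml` after updating the local kubeconfig via `aws eks update-kubeconfig --name <cluster_name>`.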

## Additional changes

### Added

- Support for AWS EKS Addons has been added
- Support for AWS EKS Cluster Identity Provider Configuration has been added
- AWS Terraform provider minimum required version has been updated to 3.64 to support the changes made and the additional resources supported
- An example `user_data` project has been added to aid in demonstrating, testing, and validating the various methods of configuring user data with the `_user_data` sub-module as well as the root `eks` module
- Template for rendering the aws-auth configmap output - `aws_auth_cm.tpl`
- Template for Bottlerocket OS user data bootstrapping - `bottlerocket_user_data.tpl`

### Modified

- The previous `fargate` example has been renamed to `fargate_profile`
- The previous `irsa` and `instance_refresh` examples have been merged into one example, `irsa_autoscale_refresh`
- The previous `managed_node_groups` example has been renamed to `eks_managed_node_group`
- The previously hardcoded EKS OIDC root CA thumbprint value and variable have been replaced with a `tls_certificate` data source that refers to the cluster OIDC issuer URL. Thumbprint values should remain unchanged, however
- Individual cluster security group resources have been replaced with a single security group resource that takes a map of rules as input. The default ingress/egress rules have had their scope reduced in order to provide the bare minimum of access to permit successful cluster creation, and to allow users to opt in to any additional network access as needed for a better security posture. This means the `0.0.0.0/0` egress rule has been removed; TCP/443 and TCP/10250 egress rules to the node group security group are used instead
- The Linux/bash user data template has been updated to include the bare minimum necessary for bootstrapping AWS EKS Optimized AMI derivative nodes, with provisions for providing additional user data and configurations; it was named `userdata.sh.tpl` and is now named `linux_user_data.tpl`
- The Windows user data template has been renamed from `userdata_windows.tpl` to `windows_user_data.tpl`

### Removed

- Miscellaneous documents on how to configure Kubernetes cluster internals have been removed. Documentation on configuring the AWS EKS cluster and the supporting infrastructure resources provided by the module remains supported, while cluster internal configuration is out of scope for this project
- The previous `bottlerocket` example has been removed in favor of demonstrating the use and configuration of Bottlerocket nodes via the respective `eks_managed_node_group` and `self_managed_node_group` examples
- The previous `launch_template` and `launch_templates_with_managed_node_groups` examples have been removed; only launch templates are now supported (default) and launch configuration support has been removed
- The previous `secrets_encryption` example has been removed; the functionality is demonstrated in several of the new examples, rendering this standalone example redundant
- The additional, custom IAM role policy for the cluster role has been removed. The permissions are either now provided by the attached managed AWS permission policies or are no longer required
- The `kubeconfig.tpl` template; kubeconfig management is no longer supported under this module
- The HTTP Terraform provider (forked copy) dependency has been removed

### Variable and output changes

1. Removed variables:

   - `cluster_create_timeout`, `cluster_update_timeout`, and `cluster_delete_timeout` have been replaced with `cluster_timeouts`
   - `kubeconfig_name`
   - `kubeconfig_output_path`
   - `kubeconfig_file_permission`
   - `kubeconfig_api_version`
   - `kubeconfig_aws_authenticator_command`
   - `kubeconfig_aws_authenticator_command_args`
   - `kubeconfig_aws_authenticator_additional_args`
   - `kubeconfig_aws_authenticator_env_variables`
   - `write_kubeconfig`
   - `default_platform`
   - `manage_aws_auth`
   - `aws_auth_additional_labels`
   - `map_accounts`
   - `map_roles`
   - `map_users`
   - `fargate_subnets`
   - `worker_groups_launch_template`
   - `worker_security_group_id`
   - `worker_ami_name_filter`
   - `worker_ami_name_filter_windows`
   - `worker_ami_owner_id`
   - `worker_ami_owner_id_windows`
   - `worker_additional_security_group_ids`
   - `worker_sg_ingress_from_port`
   - `workers_additional_policies`
   - `worker_create_security_group`
   - `worker_create_initial_lifecycle_hooks`
   - `worker_create_cluster_primary_security_group_rules`
   - `cluster_create_endpoint_private_access_sg_rule`
   - `cluster_endpoint_private_access_cidrs`
   - `cluster_endpoint_private_access_sg`
   - `manage_worker_iam_resources`
   - `workers_role_name`
   - `attach_worker_cni_policy`
   - `eks_oidc_root_ca_thumbprint`
   - `create_fargate_pod_execution_role`
   - `fargate_pod_execution_role_name`
   - `cluster_egress_cidrs`
   - `workers_egress_cidrs`
   - `wait_for_cluster_timeout`
   - EKS Managed Node Group sub-module (was `node_groups`):
     - `default_iam_role_arn`
     - `workers_group_defaults`
     - `worker_security_group_id`
     - `node_groups_defaults`
     - `node_groups`
     - `ebs_optimized_not_supported`
   - Fargate profile sub-module (was `fargate`):
     - `create_eks` and `create_fargate_pod_execution_role` have been replaced with simply `create`

2. Renamed variables:

   - `create_eks` -> `create`
   - `subnets` -> `subnet_ids`
   - `cluster_create_security_group` -> `create_cluster_security_group`
   - `cluster_log_retention_in_days` -> `cloudwatch_log_group_retention_in_days`
   - `cluster_log_kms_key_id` -> `cloudwatch_log_group_kms_key_id`
   - `manage_cluster_iam_resources` -> `create_iam_role`
   - `cluster_iam_role_name` -> `iam_role_name`
   - `permissions_boundary` -> `iam_role_permissions_boundary`
   - `iam_path` -> `iam_role_path`
   - `pre_userdata` -> `pre_bootstrap_user_data`
   - `additional_userdata` -> `post_bootstrap_user_data`
   - `worker_groups` -> `self_managed_node_groups`
   - `workers_group_defaults` -> `self_managed_node_group_defaults`
   - `node_groups` -> `eks_managed_node_groups`
   - `node_groups_defaults` -> `eks_managed_node_group_defaults`
   - EKS Managed Node Group sub-module (was `node_groups`):
     - `create_eks` -> `create`
     - `worker_additional_security_group_ids` -> `vpc_security_group_ids`
   - Fargate profile sub-module:
     - `fargate_pod_execution_role_name` -> `name`
     - `create_fargate_pod_execution_role` -> `create_iam_role`
     - `subnets` -> `subnet_ids`
     - `iam_path` -> `iam_role_path`
     - `permissions_boundary` -> `iam_role_permissions_boundary`

3. Added variables:

   - `cluster_additional_security_group_ids` added to allow users to add additional security groups to the cluster as needed
   - `cluster_security_group_name`
   - `cluster_security_group_use_name_prefix` added to allow users to use either the name as specified or default to using the name specified as a prefix
   - `cluster_security_group_description`
   - `cluster_security_group_additional_rules`
   - `cluster_security_group_tags`
   - `create_cloudwatch_log_group` added in place of the logic that checked if any cluster log types were enabled, to allow users to opt in as they see fit
   - `create_node_security_group` added to create a single security group that connects node groups and cluster in a central location
   - `node_security_group_id`
   - `node_security_group_name`
   - `node_security_group_use_name_prefix`
   - `node_security_group_description`
   - `node_security_group_additional_rules`
   - `node_security_group_tags`
   - `iam_role_arn`
   - `iam_role_use_name_prefix`
   - `iam_role_description`
   - `iam_role_additional_policies`
   - `iam_role_tags`
   - `cluster_addons`
   - `cluster_identity_providers`
   - `fargate_profile_defaults`
   - `prefix_separator` added to support the legacy behavior of not having a prefix separator
   - EKS Managed Node Group sub-module (was `node_groups`):
     - `platform`
     - `enable_bootstrap_user_data`
     - `pre_bootstrap_user_data`
     - `post_bootstrap_user_data`
     - `bootstrap_extra_args`
     - `user_data_template_path`
     - `create_launch_template`
     - `launch_template_name`
     - `launch_template_use_name_prefix`
     - `description`
     - `ebs_optimized`
     - `ami_id`
     - `key_name`
     - `launch_template_default_version`
     - `update_launch_template_default_version`
     - `disable_api_termination`
     - `kernel_id`
     - `ram_disk_id`
     - `block_device_mappings`
     - `capacity_reservation_specification`
     - `cpu_options`
     - `credit_specification`
     - `elastic_gpu_specifications`
     - `elastic_inference_accelerator`
     - `enclave_options`
     - `instance_market_options`
     - `license_specifications`
     - `metadata_options`
     - `enable_monitoring`
     - `network_interfaces`
     - `placement`
     - `min_size`
     - `max_size`
     - `desired_size`
     - `use_name_prefix`
     - `ami_type`
     - `ami_release_version`
     - `capacity_type`
     - `disk_size`
     - `force_update_version`
     - `instance_types`
     - `labels`
     - `cluster_version`
     - `launch_template_version`
     - `remote_access`
     - `taints`
     - `update_config`
     - `timeouts`
     - `create_security_group`
     - `security_group_name`
     - `security_group_use_name_prefix`
     - `security_group_description`
     - `vpc_id`
     - `security_group_rules`
     - `cluster_security_group_id`
     - `security_group_tags`
     - `create_iam_role`
     - `iam_role_arn`
     - `iam_role_name`
     - `iam_role_use_name_prefix`
     - `iam_role_path`
     - `iam_role_description`
     - `iam_role_permissions_boundary`
     - `iam_role_additional_policies`
     - `iam_role_tags`
   - Fargate profile sub-module (was `fargate`):
     - `iam_role_arn` (used when `create_iam_role` is `false` to bring your own externally created role)
     - `iam_role_name`
     - `iam_role_use_name_prefix`
     - `iam_role_description`
     - `iam_role_additional_policies`
     - `iam_role_tags`
     - `selectors`
     - `timeouts`

4. Removed outputs:

   - `cluster_version`
   - `kubeconfig`
   - `kubeconfig_filename`
   - `workers_asg_arns`
   - `workers_asg_names`
   - `workers_user_data`
   - `workers_default_ami_id`
   - `workers_default_ami_id_windows`
   - `workers_launch_template_ids`
   - `workers_launch_template_arns`
   - `workers_launch_template_latest_versions`
   - `worker_security_group_id`
   - `worker_iam_instance_profile_arns`
   - `worker_iam_instance_profile_names`
   - `worker_iam_role_name`
   - `worker_iam_role_arn`
   - `fargate_profile_ids`
   - `fargate_profile_arns`
   - `fargate_iam_role_name`
   - `fargate_iam_role_arn`
   - `node_groups`
   - `security_group_rule_cluster_https_worker_ingress`
   - EKS Managed Node Group sub-module (was `node_groups`):
     - `node_groups`
     - `aws_auth_roles`
   - Fargate profile sub-module (was `fargate`):
     - `aws_auth_roles`

5. Renamed outputs:

   - `config_map_aws_auth` -> `aws_auth_configmap_yaml`
   - Fargate profile sub-module (was `fargate`):
     - `fargate_profile_ids` -> `fargate_profile_id`
     - `fargate_profile_arns` -> `fargate_profile_arn`

6. Added outputs:

   - `cluster_platform_version`
   - `cluster_status`
   - `cluster_security_group_arn`
   - `cluster_security_group_id`
   - `node_security_group_arn`
   - `node_security_group_id`
   - `cluster_iam_role_unique_id`
   - `cluster_addons`
   - `cluster_identity_providers`
   - `fargate_profiles`
   - `eks_managed_node_groups`
   - `self_managed_node_groups`
   - EKS Managed Node Group sub-module (was `node_groups`):
     - `launch_template_id`
     - `launch_template_arn`
     - `launch_template_latest_version`
     - `node_group_arn`
     - `node_group_id`
     - `node_group_resources`
     - `node_group_status`
     - `security_group_arn`
     - `security_group_id`
     - `iam_role_name`
     - `iam_role_arn`
     - `iam_role_unique_id`
   - Fargate profile sub-module (was `fargate`):
     - `iam_role_unique_id`
     - `fargate_profile_status`
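
As one concrete migration example, the three v17.x timeout variables noted in the removed-variables list collapse into the single `cluster_timeouts` map in v18.x; a sketch with illustrative values:

```hcl
module "eks" {
  # ...

  # replaces cluster_create_timeout, cluster_update_timeout,
  # and cluster_delete_timeout from v17.x
  cluster_timeouts = {
    create = "30m"
    update = "60m"
    delete = "15m"
  }
}
```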

## Upgrade Migrations

### Before 17.x Example

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0"

  cluster_name                    = local.name
  cluster_version                 = local.cluster_version
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  vpc_id  = module.vpc.vpc_id
  subnets = module.vpc.private_subnets

  # Managed Node Groups
  node_groups_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 50
  }

  node_groups = {
    node_group = {
      min_capacity     = 1
      max_capacity     = 10
      desired_capacity = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"

      update_config = {
        max_unavailable_percentage = 50
      }

      k8s_labels = {
        Environment = "test"
        GithubRepo  = "terraform-aws-eks"
        GithubOrg   = "terraform-aws-modules"
      }

      taints = [
        {
          key    = "dedicated"
          value  = "gpuGroup"
          effect = "NO_SCHEDULE"
        }
      ]

      additional_tags = {
        ExtraTag = "example"
      }
    }
  }

  # Worker groups
  worker_additional_security_group_ids = [aws_security_group.additional.id]

  worker_groups_launch_template = [
    {
      name                    = "worker-group"
      override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
      spot_instance_pools     = 4
      asg_max_size            = 5
      asg_desired_capacity    = 2
      kubelet_extra_args      = "--node-labels=node.kubernetes.io/lifecycle=spot"
      public_ip               = true
    },
  ]

  # Fargate
  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        {
          namespace = "kube-system"
          labels = {
            k8s-app = "kube-dns"
          }
        },
        {
          namespace = "default"
        }
      ]

      tags = {
        Owner = "test"
      }

      timeouts = {
        create = "20m"
        delete = "20m"
      }
    }
  }

  tags = {
    Environment = "test"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }
}
```

### After 18.x Example

```hcl
module "cluster_after" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name                    = local.name
  cluster_version                 = local.cluster_version
  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 50
  }

  eks_managed_node_groups = {
    node_group = {
      min_size     = 1
      max_size     = 10
      desired_size = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"

      update_config = {
        max_unavailable_percentage = 50
      }

      labels = {
        Environment = "test"
        GithubRepo  = "terraform-aws-eks"
        GithubOrg   = "terraform-aws-modules"
      }

      taints = [
        {
          key    = "dedicated"
          value  = "gpuGroup"
          effect = "NO_SCHEDULE"
        }
      ]

      tags = {
        ExtraTag = "example"
      }
    }
  }

  self_managed_node_group_defaults = {
    vpc_security_group_ids = [aws_security_group.additional.id]
  }

  self_managed_node_groups = {
    worker_group = {
      name = "worker-group"

      min_size      = 1
      max_size      = 5
      desired_size  = 2
      instance_type = "m4.large"

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            delete_on_termination = true
            encrypted             = false
            volume_size           = 100
            volume_type           = "gp2"
          }
        }
      }

      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          spot_instance_pools = 4
        }

        override = [
          { instance_type = "m5.large" },
          { instance_type = "m5a.large" },
          { instance_type = "m5d.large" },
          { instance_type = "m5ad.large" },
        ]
      }
    }
  }

  # Fargate
  fargate_profiles = {
    default = {
      name = "default"

      selectors = [
        {
          namespace = "kube-system"
          labels = {
            k8s-app = "kube-dns"
          }
        },
        {
          namespace = "default"
        }
      ]

      tags = {
        Owner = "test"
      }

      timeouts = {
        create = "20m"
        delete = "20m"
      }
    }
  }

  tags = {
    Environment = "test"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }
}
```

### Attaching an IAM role policy to a Fargate profile

#### Before 17.x

```hcl
resource "aws_iam_role_policy_attachment" "default" {
  role       = module.eks.fargate_iam_role_name
  policy_arn = aws_iam_policy.default.arn
}
```

#### After 18.x

```hcl
# Attach the policy to an "example" Fargate profile
resource "aws_iam_role_policy_attachment" "default" {
  role       = module.eks.fargate_profiles["example"].iam_role_name
  policy_arn = aws_iam_policy.default.arn
}
```

Or:

```hcl
# Attach the policy to all Fargate profiles
resource "aws_iam_role_policy_attachment" "default" {
  for_each = module.eks.fargate_profiles

  role       = each.value.iam_role_name
  policy_arn = aws_iam_policy.default.arn
}
```
209
docs/compute_resources.md
Normal file
@@ -0,0 +1,209 @@
|
||||
# Compute Resources

## Table of Contents

- [EKS Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#eks-managed-node-groups)
- [Self Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#self-managed-node-groups)
- [Fargate Profiles](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#fargate-profiles)
- [Default Configurations](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#default-configurations)

ℹ️ Only the pertinent attributes are shown below for brevity

### EKS Managed Node Groups

Refer to the [EKS Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for service related details.

1. The module creates a custom launch template by default to ensure settings such as tags are propagated to instances. To use the default template provided by the AWS EKS managed node group service, disable the launch template creation and set `launch_template_name` to an empty string:

   ```hcl
   eks_managed_node_groups = {
     default = {
       create_launch_template = false
       launch_template_name   = ""
     }
   }
   ```

2. Native support for Bottlerocket OS is provided by specifying the respective AMI type:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_default = {
       create_launch_template = false
       launch_template_name   = ""

       ami_type = "BOTTLEROCKET_x86_64"
       platform = "bottlerocket"
     }
   }
   ```

3. Users have limited support to extend the user data that is pre-pended to the user data provided by the AWS EKS Managed Node Group service:

   ```hcl
   eks_managed_node_groups = {
     prepend_userdata = {
       # See issue https://github.com/awslabs/amazon-eks-ami/issues/844
       pre_bootstrap_user_data = <<-EOT
         #!/bin/bash
         set -ex
         cat <<-EOF > /etc/profile.d/bootstrap.sh
           export CONTAINER_RUNTIME="containerd"
           export USE_MAX_PODS=false
           export KUBELET_EXTRA_ARGS="--max-pods=110"
         EOF
         # Source extra environment variables in bootstrap script
         sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh
       EOT
     }
   }
   ```

4. Bottlerocket OS is supported in a similar manner. However, note that the user data for Bottlerocket OS uses the TOML format:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_prepend_userdata = {
       ami_type = "BOTTLEROCKET_x86_64"
       platform = "bottlerocket"

       bootstrap_extra_args = <<-EOT
         # extra args added
         [settings.kernel]
         lockdown = "integrity"
       EOT
     }
   }
   ```

5. When using a custom AMI, the AWS EKS Managed Node Group service will NOT inject the necessary bootstrap script into the supplied user data. Users can elect to provide their own user data to bootstrap and connect, or opt in to using the module provided user data:

   ```hcl
   eks_managed_node_groups = {
     custom_ami = {
       ami_id = "ami-0caf35bc73450c396"

       # By default, EKS managed node groups will not append bootstrap script;
       # this adds it back in using the default template provided by the module
       # Note: this assumes the AMI provided is an EKS optimized AMI derivative
       enable_bootstrap_user_data = true

       bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=20'"

       pre_bootstrap_user_data = <<-EOT
         export CONTAINER_RUNTIME="containerd"
         export USE_MAX_PODS=false
       EOT

       # Because we have full control over the user data supplied, we can also run additional
       # scripts/configuration changes after the bootstrap script has been run
       post_bootstrap_user_data = <<-EOT
         echo "you are free little kubelet!"
       EOT
     }
   }
   ```

6. There is similar support for Bottlerocket OS:

   ```hcl
   eks_managed_node_groups = {
     bottlerocket_custom_ami = {
       ami_id   = "ami-0ff61e0bcfc81dc94"
       platform = "bottlerocket"

       # use module user data template to bootstrap
       enable_bootstrap_user_data = true
       # this will get added to the template
       bootstrap_extra_args = <<-EOT
         # extra args added
         [settings.kernel]
         lockdown = "integrity"

         [settings.kubernetes.node-labels]
         "label1" = "foo"
         "label2" = "bar"

         [settings.kubernetes.node-taints]
         "dedicated" = "experimental:PreferNoSchedule"
         "special" = "true:NoSchedule"
       EOT
     }
   }
   ```

See the [`examples/eks_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group) for a working example of various configurations.

### Self Managed Node Groups

Refer to the [Self Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) for service related details.

1. The `self-managed-node-group` uses the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version by default:

   ```hcl
   cluster_version = "1.21"

   # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.21
   self_managed_node_groups = {
     default = {}
   }
   ```

2. To use Bottlerocket, specify the `platform` as `bottlerocket` and supply a Bottlerocket OS AMI:

   ```hcl
   cluster_version = "1.21"

   self_managed_node_groups = {
     bottlerocket = {
       platform = "bottlerocket"
       ami_id   = data.aws_ami.bottlerocket_ami.id
     }
   }
   ```
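The `data.aws_ami.bottlerocket_ami` reference above is not defined by the module. A minimal sketch of such a lookup might look like the following (the name filter is an assumption for the x86_64 Bottlerocket variant on Kubernetes 1.21; adjust it for your version and architecture):

```hcl
# Hypothetical AMI lookup for the Bottlerocket reference above;
# the name filter is illustrative, not prescribed by the module
data "aws_ami" "bottlerocket_ami" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["bottlerocket-aws-k8s-1.21-x86_64-*"]
  }
}
```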
See the [`examples/self_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self_managed_node_group) for a working example of various configurations.

### Fargate Profiles

Fargate profiles are straightforward to use and therefore no further details are provided here. See the [`examples/fargate_profile/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile) for a working example of various configurations.
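For orientation, a minimal profile sketch looks like the following (the profile key, name, and namespace selector are illustrative, not prescribed by the module):

```hcl
fargate_profiles = {
  default = {
    name = "default"
    selectors = [
      {
        # Pods in this namespace will be scheduled onto Fargate
        namespace = "default"
      }
    ]
  }
}
```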
### Default Configurations

Each type of compute resource (EKS managed node group, self managed node group, or Fargate profile) provides the option for users to specify a default configuration. These default configurations can be overridden from within the compute resource's individual definition. The order of precedence for configurations (from highest to lowest precedence):

- Compute resource individual configuration
- Compute resource family default configuration (`eks_managed_node_group_defaults`, `self_managed_node_group_defaults`, `fargate_profile_defaults`)
- Module default configuration (see `variables.tf` and `node_groups.tf`)

For example, the following creates 4 AWS EKS Managed Node Groups:

```hcl
eks_managed_node_group_defaults = {
  ami_type       = "AL2_x86_64"
  disk_size      = 50
  instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
}

eks_managed_node_groups = {
  # Uses module default configurations overridden by configuration above
  default = {}

  # This further overrides the instance types used
  compute = {
    instance_types = ["c5.large", "c6i.large", "c6d.large"]
  }

  # This further overrides the instance types and disk size used
  persistent = {
    disk_size      = 1024
    instance_types = ["r5.xlarge", "r6i.xlarge", "r5b.xlarge"]
  }

  # This overrides the OS used
  bottlerocket = {
    ami_type = "BOTTLEROCKET_x86_64"
    platform = "bottlerocket"
  }
}
```
110
docs/faq.md
Normal file
@@ -0,0 +1,110 @@
# Frequently Asked Questions

- [How do I manage the `aws-auth` configmap?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-manage-the-aws-auth-configmap)
- [I received an error: `Error: Invalid for_each argument ...`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#i-received-an-error-error-invalid-for_each-argument-)
- [Why are nodes not being registered?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-nodes-not-being-registered)
- [Why are there no changes when a node group's `desired_size` is modified?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-there-no-changes-when-a-node-groups-desired_size-is-modified)
- [How can I deploy Windows based nodes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-can-i-deploy-windows-based-nodes)
- [How do I access compute resource attributes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-access-compute-resource-attributes)

### How do I manage the `aws-auth` configmap?

TL;DR - https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1901

- Users can roll their own equivalent of `kubectl patch ...` using the [`null_resource`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/9a99689cc13147f4afc426b34ba009875a28614e/examples/complete/main.tf#L301-L336)
- There is a module that was created to fill this gap and provides a Kubernetes based approach to provisioning: https://github.com/aidanmelen/terraform-aws-eks-auth
- Ideally, one of the following issues is resolved upstream for a more native experience for users:
  - https://github.com/aws/containers-roadmap/issues/185
  - https://github.com/hashicorp/terraform-provider-kubernetes/issues/723
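A rough sketch of the `null_resource` approach linked above, assuming `kubectl` and the AWS CLI are available where Terraform runs, and that you render the desired configmap into a local value (`local.aws_auth_configmap_yaml` is a hypothetical name, not provided by the module):

```hcl
resource "null_resource" "patch_aws_auth" {
  # Re-run whenever the rendered configmap changes
  triggers = {
    configmap = sha256(local.aws_auth_configmap_yaml)
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws eks update-kubeconfig --name ${module.eks.cluster_id} --kubeconfig ./kubeconfig
      kubectl --kubeconfig ./kubeconfig patch configmap/aws-auth -n kube-system --patch "${local.aws_auth_configmap_yaml}"
    EOT
  }
}
```

See the linked `examples/complete` source for the full working version this sketch is based on.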
### I received an error: `Error: Invalid for_each argument ...`

Users may encounter an error such as `Error: Invalid for_each argument - The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply ...`

This error is due to an upstream issue with [Terraform core](https://github.com/hashicorp/terraform/issues/4149). There are two potential options you can take to help mitigate this issue:

1. Create the dependent resources before the cluster => `terraform apply -target <your policy or your security group>` and then `terraform apply` for the cluster (or other similar means to just ensure the referenced resources exist before creating the cluster)

   - Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource

2. For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below:

   ```hcl
   resource "aws_iam_role_policy_attachment" "additional" {
     for_each = module.eks.eks_managed_node_groups
     # you could also do the following or any combination:
     # for_each = merge(
     #   module.eks.eks_managed_node_groups,
     #   module.eks.self_managed_node_groups,
     #   module.eks.fargate_profiles,
     # )

     # This policy does not have to exist at the time of cluster creation. Terraform can
     # deduce the proper order of its creation to avoid errors during creation
     policy_arn = aws_iam_policy.node_additional.arn
     role       = each.value.iam_role_name
   }
   ```

TL;DR - Terraform resources passed into the module's map definitions _must_ be known before you can apply the EKS module. The variables this potentially affects are:

- `cluster_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `node_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
- `iam_role_additional_policies` (i.e. - referencing an external policy resource)

- Setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/irsa_autoscale_refresh) example provided.

### Why are nodes not being registered?

Nodes not being able to register with the EKS control plane is generally due to networking misconfigurations.

1. At least one of the cluster endpoints (public or private) must be enabled.

   If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via `cluster_endpoint_public_access_cidrs` is recommended. More info regarding communication with an endpoint is available [here](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html).

2. Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access:

   - Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules
   - Nodes in public subnets: ensure that nodes are launched with public IPs (enable through either the module here or your subnet setting defaults)

   **Important: If you apply only the public endpoint and configure `cluster_endpoint_public_access_cidrs` to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, then your nodes will fail to work correctly.**

3. The private endpoint can also be enabled by setting `cluster_endpoint_private_access = true`. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled.

4. Nodes need to be able to connect to other AWS services to function (download container images, make API calls to assume roles, etc.). If for some reason you cannot enable public internet access for nodes, you can add VPC endpoints for the relevant services: EC2 API, ECR API, ECR DKR, and S3.
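A minimal sketch of those endpoints, assuming hypothetical variables for the VPC, private subnets, route tables, and an endpoint security group, and using `us-east-1` purely as an illustrative region:

```hcl
# Gateway endpoint for S3 (attached to the private route tables)
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = var.private_route_table_ids
}

# Interface endpoints for the EC2 and ECR APIs
resource "aws_vpc_endpoint" "interface" {
  for_each = toset(["ec2", "ecr.api", "ecr.dkr"])

  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.us-east-1.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  security_group_ids  = [var.vpc_endpoint_security_group_id]
  private_dns_enabled = true
}
```

Note the interface endpoint security group must allow HTTPS (443) from the nodes for the endpoints to be reachable.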
### Why are there no changes when a node group's `desired_size` is modified?

The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly, without interference from Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.
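Internally this is accomplished with a `lifecycle` block on the node group resource; a sketch of the pattern (not the module's exact source) looks like:

```hcl
resource "aws_eks_node_group" "example" {
  # ... cluster, IAM role, and subnet arguments omitted for brevity ...

  scaling_config {
    min_size     = 1
    max_size     = 5
    desired_size = 1 # only honored at creation time
  }

  lifecycle {
    # Ignore changes so external autoscalers (cluster autoscaler, Karpenter)
    # can manage the desired count without Terraform reverting it
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```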
### How can I deploy Windows based nodes?

To enable Windows support for your EKS cluster, you will need to apply some configuration manually. See the [Enabling Windows Support (Windows/MacOS/Linux)](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html#enable-windows-support) guide.

In addition, Windows based nodes require an additional cluster RBAC role (`eks:kube-proxy-windows`).

Note: Windows based node support is limited to a default user data template that is provided due to the lack of Windows support and the manual steps required to provision Windows based EKS nodes.

### How do I access compute resource attributes?

Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named `eks` as in `module "eks" { ... }`:

- EKS Managed Node Group attributes

  ```hcl
  eks_managed_role_arns = [for group in module.eks.eks_managed_node_groups : group.iam_role_arn]
  ```

- Self Managed Node Group attributes

  ```hcl
  self_managed_role_arns = [for group in module.eks.self_managed_node_groups : group.iam_role_arn]
  ```

- Fargate Profile attributes

  ```hcl
  fargate_profile_pod_execution_role_arns = [for group in module.eks.fargate_profiles : group.fargate_profile_pod_execution_role_arn]
  ```
84
docs/irsa_integration.md
Normal file
@@ -0,0 +1,84 @@
# IRSA Integration

An [IAM role for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) module has been created to work in conjunction with this module. The [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks) module has a set of pre-defined IAM policies for common addons. Check [`policies.tf`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/modules/iam-role-for-service-accounts-eks/policies.tf) for a list of the policies currently supported. One example of this integration is shown below, and more can be found in the [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/examples/iam-role-for-service-accounts-eks/main.tf) example directory:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.21"

  cluster_addons = {
    vpc-cni = {
      resolve_conflicts        = "OVERWRITE"
      service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
    }
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  eks_managed_node_group_defaults = {
    # We are using the IRSA created below for permissions
    # However, we have to provision a new cluster with the policy attached FIRST
    # before we can disable. Without this initial policy,
    # the VPC CNI fails to assign IPs and nodes cannot join the new cluster
    iam_role_attach_cni_policy = true
  }

  eks_managed_node_groups = {
    default = {}
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}

module "vpc_cni_irsa" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name             = "vpc_cni"
  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}

module "karpenter_irsa" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name                          = "karpenter_controller"
  attach_karpenter_controller_policy = true

  karpenter_controller_cluster_id = module.eks.cluster_id
  karpenter_controller_node_iam_role_arns = [
    module.eks.eks_managed_node_groups["default"].iam_role_arn
  ]

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["karpenter:karpenter"]
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```
68
docs/network_connectivity.md
Normal file
@@ -0,0 +1,68 @@
# Network Connectivity

## Cluster Endpoint

### Public Endpoint w/ Restricted CIDRs

When restricting the cluster's public endpoint to only the CIDRs specified by users, it is recommended that you also enable the private endpoint, or ensure that the CIDR blocks you specify include the addresses that nodes and Fargate pods (if you use them) access the public endpoint from.

Please refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) for further information.
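A sketch of this recommended configuration (the CIDR block is illustrative; other module arguments are omitted):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ... other arguments omitted ...

  # Restrict the public endpoint, but keep the private endpoint enabled
  # so nodes and Fargate pods can still reach the API server
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["10.10.0.0/16"] # illustrative CIDR
  cluster_endpoint_private_access      = true
}
```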
## Security Groups

- Cluster Security Group
  - This module by default creates a cluster security group ("additional" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This "additional" security group allows users to customize inbound and outbound rules via the module as they see fit
  - The default inbound/outbound rules provided by the module are derived from the [AWS minimum recommendations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in addition to NTP and HTTPS public internet egress rules (without these, they show up in VPC flow logs as rejects - they are used for clock sync and downloading necessary packages/updates)
  - The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster)
  - Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired
  - The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error
  - Users also have the option to supply additional, externally created security groups to the cluster via the `cluster_additional_security_group_ids` variable
  - Lastly, users are able to opt in to attaching the primary security group automatically created by the EKS service by setting `attach_cluster_primary_security_group = true` from the root module for the respective node group (or set it within the node group defaults). This security group is not managed by the module; it is created by the EKS service. It permits all traffic within the domain of the security group as well as all egress traffic to the internet.

- Node Group Security Group(s)
  - Each node group (EKS Managed Node Group and Self Managed Node Group) by default creates its own security group. By default, this security group does not contain any additional security group rules. It is merely an "empty container" that offers users the ability to opt in to any additional inbound or outbound rules as necessary
  - Users also have the option to supply their own, and/or additional, externally created security group(s) to the node group via the `vpc_security_group_ids` variable

See the example snippet below which adds additional security group rules to the cluster security group as well as the shared node security group (for node-to-node access). Users can use this extensibility to open up network access as they see fit using the security groups provided by the module:

```hcl
...
# Extend cluster security group rules
cluster_security_group_additional_rules = {
  egress_nodes_ephemeral_ports_tcp = {
    description                = "To node 1025-65535"
    protocol                   = "tcp"
    from_port                  = 1025
    to_port                    = 65535
    type                       = "egress"
    source_node_security_group = true
  }
}

# Extend node-to-node security group rules
node_security_group_additional_rules = {
  ingress_self_all = {
    description = "Node to node all ports/protocols"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "ingress"
    self        = true
  }
  egress_all = {
    description      = "Node all egress"
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    type             = "egress"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
...
```

The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules:

<p align="center">
  <img src="https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-eks/master/.github/images/security_groups.svg" alt="Security Groups" width="100%">
</p>
97
docs/user_data.md
Normal file
@@ -0,0 +1,97 @@
# User Data & Bootstrapping

Users can see the various methods of using and providing user data through the [user data examples](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data), as well as more detailed information on the design and possible configurations via the [user data module itself](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data).

## Summary

- AWS EKS Managed Node Groups
  - By default, any supplied user data is pre-pended to the user data supplied by the EKS Managed Node Group service
  - If users supply an `ami_id`, the service no longer supplies user data to bootstrap nodes; users can enable `enable_bootstrap_user_data` and use the module provided user data template, or provide their own user data template
  - `bottlerocket` platform user data must be in TOML format
- Self Managed Node Groups
  - `linux` platform (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
  - `bottlerocket` platform -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
  - `windows` platform -> the user data template (PowerShell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template

The templates provided by the module can be found under the [templates directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/templates).

## EKS Managed Node Group

When using an EKS managed node group, users have two primary routes for interacting with the bootstrap user data:

1. If a value for `ami_id` is not provided, users can supply additional user data that is pre-pended before the EKS Managed Node Group bootstrap user data. You can read more about this process in the [AWS supplied documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data)

   - Users can use the following variables to facilitate this process:

     ```hcl
     pre_bootstrap_user_data = "..."
     ```

2. If a custom AMI is used, then per the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami), users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:

   - If the AMI used is a derivative of the [AWS EKS Optimized AMI](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:

     - Users can use the following variables to facilitate this process:

       ```hcl
       enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template
       pre_bootstrap_user_data    = "..."
       bootstrap_extra_args       = "..."
       post_bootstrap_user_data   = "..."
       ```

   - If the AMI is **NOT** an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template; otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.

     - Users can use the following variables to facilitate this process:

       ```hcl
       user_data_template_path  = "./your/user_data.sh" # user supplied bootstrap user data template
       pre_bootstrap_user_data  = "..."
       bootstrap_extra_args     = "..."
       post_bootstrap_user_data = "..."
       ```

| ℹ️ When using bottlerocket as the desired platform, since the user data for bottlerocket is TOML, all configurations are merged in the one file supplied as user data. Therefore, `pre_bootstrap_user_data` and `post_bootstrap_user_data` are not valid since the bottlerocket OS handles when various settings are applied. If you wish to supply additional configuration settings when using bottlerocket, supply them via the `bootstrap_extra_args` variable. For the linux platform, `bootstrap_extra_args` are settings that will be supplied to the [AWS EKS Optimized AMI bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L14) such as kubelet extra args, etc. See the [bottlerocket GitHub repository documentation](https://github.com/bottlerocket-os/bottlerocket#description-of-settings) for more details on what settings can be supplied via the `bootstrap_extra_args` variable. |
| :-- |

#### ⚠️ Caveat

Since the EKS Managed Node Group service provides the necessary bootstrap user data to nodes (unless an `ami_id` is provided), users do not have direct access to settings/variables provided by the EKS optimized AMI [`bootstrap.sh` script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh). Currently, users must employ workarounds to influence the `bootstrap.sh` script. For example, to enable `containerd` on EKS Managed Node Groups, users can supply the following user data. You can learn more about this issue [here](https://github.com/awslabs/amazon-eks-ami/issues/844):

```hcl
# See issue https://github.com/awslabs/amazon-eks-ami/issues/844
pre_bootstrap_user_data = <<-EOT
  #!/bin/bash
  set -ex
  cat <<-EOF > /etc/profile.d/bootstrap.sh
    export CONTAINER_RUNTIME="containerd"
    export USE_MAX_PODS=false
    export KUBELET_EXTRA_ARGS="--max-pods=110"
  EOF
  # Source extra environment variables in bootstrap script
  sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh
EOT
```

## Self Managed Node Group

Self managed node groups require users to provide the necessary bootstrap user data. Users can elect to use the user data template provided by the module for their platform/OS or provide their own user data template for rendering by the module.

- If the AMI used is a derivative of the [AWS EKS Optimized AMI](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:

  - Users can use the following variables to facilitate this process:

    ```hcl
    enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template
    pre_bootstrap_user_data    = "..."
    bootstrap_extra_args       = "..."
    post_bootstrap_user_data   = "..."
    ```

- If the AMI is **NOT** an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template; otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.

  - Users can use the following variables to facilitate this process:

    ```hcl
    user_data_template_path  = "./your/user_data.sh" # user supplied bootstrap user data template
    pre_bootstrap_user_data  = "..."
    bootstrap_extra_args     = "..."
    post_bootstrap_user_data = "..."
    ```

## Logic Diagram

The rough flow of logic that is encapsulated within the `_user_data` module can be represented by the following diagram to better highlight the various manners in which user data can be populated.

<p align="center">
  <img src="https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-eks/master/.github/images/user_data.svg" alt="User Data" width="60%">
</p>