diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md deleted file mode 100644 index 5312750..0000000 --- a/.github/CONTRIBUTING.md +++ /dev/null @@ -1,33 +0,0 @@ -# Contributing - -When contributing to this repository, please first discuss the change you wish to make via issue, -email, or any other method with the owners of this repository before making a change. - -Please note we have a code of conduct, please follow it in all your interactions with the project. - -## Pull Request Process - -1. Ensure any install or build dependencies are removed before the end of the layer when doing a build. -2. Update the README.md with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations, and container parameters. -3. Once all outstanding comments and checklist items have been addressed, your contribution will be merged! Merged PRs will be included in the next release. The terraform-aws-eks maintainers take care of updating the CHANGELOG as they merge. - -## Checklists for contributions - -- [ ] Add [semantics prefix](#semantic-pull-requests) to your PR or Commits (at least one of your commit groups) -- [ ] CI tests are passing -- [ ] README.md has been updated after any changes to variables and outputs. See https://github.com/terraform-aws-modules/terraform-aws-eks/#doc-generation - -## Semantic Pull Requests - -To generate changelog, Pull Requests or Commits must have semantic and must follow conventional specs below: - -- `feat:` for new features -- `fix:` for bug fixes -- `improvement:` for enhancements -- `docs:` for documentation and examples -- `refactor:` for code refactoring -- `test:` for tests -- `ci:` for CI purpose -- `chore:` for chores stuff - -The `chore` prefix skipped during changelog generation. It can be used for `chore: update changelog` commit message by example. 
diff --git a/README.md b/README.md index 1a3409a..e1fd754 100644 --- a/README.md +++ b/README.md @@ -4,63 +4,64 @@ Terraform module which creates AWS EKS (Kubernetes) resources [![SWUbanner](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md) +## [Documentation](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs) + +- [Frequently Asked Questions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md) +- [Compute Resources](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md) +- [IRSA Integration](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/irsa-integration.md) +- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md) +- [Network Connectivity](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md) +- Upgrade Guides + - [Upgrade to v17.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-17.0.md) + - [Upgrade to v18.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-18.0.md) + +### External Documentation + +Please note that we strive to provide a comprehensive suite of documentation for __*configuring and utilizing the module(s)*__ defined here, and that documentation regarding EKS (including EKS managed node group, self managed node group, and Fargate profile) and/or Kubernetes features, usage, etc. 
is better left up to their respective sources: +- [AWS EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) +- [Kubernetes Documentation](https://kubernetes.io/docs/home/) + ## Available Features -- AWS EKS Cluster - AWS EKS Cluster Addons - AWS EKS Identity Provider Configuration - All [node types](https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html) are supported: - [EKS Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - [Self Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) - [Fargate Profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) -- Support for custom AMI, custom launch template, and custom user data +- Support for custom AMI, custom launch template, and custom user data, including a custom user data template - Support for Amazon Linux 2 EKS Optimized AMI and Bottlerocket nodes - Windows based node support is limited to a default user data template that is provided due to the lack of Windows support and manual steps required to provision Windows based EKS nodes - Support for module created security group, bring your own security groups, as well as adding additional security group rules to the module created security group(s) -- Support for providing maps of node groups/Fargate profiles to the cluster module definition or use separate node group/Fargate profile sub-modules -- Provisions to provide node group/Fargate profile "default" settings - useful for when creating multiple node groups/Fargate profiles where you want to set a common set of configurations once, and then individual control only select features +- Support for creating node groups/profiles separate from the cluster through the use of sub-modules (the same sub-modules used by the root module) +- Support for node group/profile "default" settings - useful when creating multiple node groups/Fargate profiles where you want to set a common set of 
configurations once, and then individually control only select features on certain node groups/profiles -### ℹ️ `Error: Invalid for_each argument ...` +### [IRSA Terraform Module](https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks) -Users may encounter an error such as `Error: Invalid for_each argument - The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply ...` +An IAM role for service accounts (IRSA) sub-module has been created to make deploying common addons/controllers easier. Instead of users having to create a custom IAM role with the necessary federated role assumption required for IRSA plus find and craft the associated policy required for the addon/controller, users can create the IRSA role and policy with a few lines of code. See the [`terraform-aws-iam/examples/iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/examples/iam-role-for-service-accounts-eks/main.tf) directory for examples on how to use the IRSA sub-module in conjunction with this (`terraform-aws-eks`) module. -This error is due to an upstream issue with [Terraform core](https://github.com/hashicorp/terraform/issues/4149). There are two potential options you can take to help mitigate this issue: +Some of the addon/controller policies that are currently supported include: -1. Create the dependent resources before the cluster => `terraform apply -target ` and then `terraform apply` for the cluster (or other similar means to just ensure the referenced resources exist before creating the cluster) - - Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource -2. 
For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below +- [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) +- [External DNS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#iam-policy) +- [EBS CSI Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json) +- [VPC CNI](https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html) +- [Node Termination Handler](https://github.com/aws/aws-node-termination-handler#5-create-an-iam-role-for-the-pods) +- [Karpenter](https://github.com/aws/karpenter/blob/main/website/content/en/preview/getting-started/cloudformation.yaml) +- [Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/install/iam_policy.json) -```hcl -resource "aws_iam_role_policy_attachment" "additional" { - for_each = module.eks.eks_managed_node_groups - # you could also do the following or any combination: - # for_each = merge( - # module.eks.eks_managed_node_groups, - # module.eks.self_managed_node_group, - # module.eks.fargate_profile, - # ) - - # This policy does not have to exist at the time of cluster creation. Terraform can - # deduce the proper order of its creation to avoid errors during creation - policy_arn = aws_iam_policy.node_additional.arn - role = each.value.iam_role_name -} -``` - -The tl;dr for this issue is that the Terraform resource passed into the modules map definition *must* be known before you can apply the EKS module. The variables this potentially affects are: - -- `cluster_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule) -- `node_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule) -- `iam_role_additional_policies` (i.e. 
- referencing an external policy resource) +See [terraform-aws-iam/modules/iam-role-for-service-accounts](https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks) for current list of supported addon/controller policies as more are added to the project. ## Usage ```hcl module "eks" { - source = "terraform-aws-modules/eks/aws" + source = "terraform-aws-modules/eks/aws" + version = "~> 18.0" + + cluster_name = "my-cluster" + cluster_version = "1.21" - cluster_name = "my-cluster" - cluster_version = "1.21" cluster_endpoint_private_access = true cluster_endpoint_public_access = true @@ -86,14 +87,14 @@ module "eks" { self_managed_node_group_defaults = { instance_type = "m6i.large" update_launch_template_default_version = true - iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"] + iam_role_additional_policies = [ + "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" + ] } self_managed_node_groups = { one = { - name = "spot-1" - - public_ip = true + name = "mixed-1" max_size = 5 desired_size = 2 @@ -116,29 +117,13 @@ module "eks" { }, ] } - - pre_bootstrap_user_data = <<-EOT - echo "foo" - export FOO=bar - EOT - - bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'" - - post_bootstrap_user_data = <<-EOT - cd /tmp - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm - sudo systemctl enable amazon-ssm-agent - sudo systemctl start amazon-ssm-agent - EOT } } # EKS Managed Node Group(s) eks_managed_node_group_defaults = { - ami_type = "AL2_x86_64" - disk_size = 50 - instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"] - vpc_security_group_ids = [aws_security_group.additional.id] + disk_size = 50 + instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"] } eks_managed_node_groups = { @@ -150,21 +135,6 @@ module "eks" { instance_types = 
["t3.large"] capacity_type = "SPOT" - labels = { - Environment = "test" - GithubRepo = "terraform-aws-eks" - GithubOrg = "terraform-aws-modules" - } - taints = { - dedicated = { - key = "dedicated" - value = "gpuGroup" - effect = "NO_SCHEDULE" - } - } - tags = { - ExtraTag = "example" - } } } @@ -173,25 +143,10 @@ module "eks" { default = { name = "default" selectors = [ - { - namespace = "kube-system" - labels = { - k8s-app = "kube-dns" - } - }, { namespace = "default" } ] - - tags = { - Owner = "test" - } - - timeouts = { - create = "20m" - delete = "20m" - } } } @@ -202,590 +157,6 @@ module "eks" { } ``` -### IRSA Integration - -An [IAM role for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) module has been created to work in conjunction with the EKS module. The [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks) module has a set of pre-defined IAM policies for common addons/controllers/custom resources to allow users to quickly enable common integrations. Check [`policy.tf`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/modules/iam-role-for-service-accounts-eks/policies.tf) for a list of the policies currently supported. 
An example of this integration is shown below, and more can be found in the [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/examples/iam-role-for-service-accounts-eks/main.tf) example directory: - -```hcl -module "eks" { - source = "terraform-aws-modules/eks/aws" - - cluster_name = "example" - cluster_version = "1.21" - - cluster_addons = { - vpc-cni = { - resolve_conflicts = "OVERWRITE" - service_account_role_arn = module.vpc_cni_irsa.iam_role_arn - } - } - - vpc_id = "vpc-1234556abcdef" - subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"] - - eks_managed_node_group_defaults = { - # We are using the IRSA created below for permissions - # This is a better practice as well so that the nodes do not have the permission, - # only the VPC CNI addon will have the permission - iam_role_attach_cni_policy = false - } - - eks_managed_node_groups = { - default = {} - } - - tags = { - Environment = "dev" - Terraform = "true" - } -} - -module "vpc_cni_irsa" { - source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" - - role_name = "vpc_cni" - attach_vpc_cni_policy = true - vpc_cni_enable_ipv4 = true - - oidc_providers = { - main = { - provider_arn = module.eks.oidc_provider_arn - namespace_service_accounts = ["kube-system:aws-node"] - } - } - - tags = { - Environment = "dev" - Terraform = "true" - } -} - -module "karpenter_irsa" { - source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" - - role_name = "karpenter_controller" - attach_karpenter_controller_policy = true - - karpenter_controller_cluster_id = module.eks.cluster_id - karpenter_controller_node_iam_role_arns = [ - module.eks.eks_managed_node_groups["default"].iam_role_arn - ] - - oidc_providers = { - main = { - provider_arn = module.eks.oidc_provider_arn - namespace_service_accounts = ["karpenter:karpenter"] - } - } - - tags = { - Environment = "dev" - Terraform = "true" - } -} -``` - 
-## Node Group Configuration - -⚠️ The configurations shown below are referenced from within the root EKS module; there will be slight differences in the default values provided when compared to the underlying sub-modules (`eks-managed-node-group`, `self-managed-node-group`, and `fargate-profile`). - -### EKS Managed Node Groups - -ℹ️ Only the pertinent attributes are shown for brevity - -1. AWS EKS Managed Node Group can provide its own launch template and utilize the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version. By default, the module creates a launch template to ensure tags are propagated to instances, etc., so we need to disable it to use the default template provided by the AWS EKS managed node group service: - -```hcl - eks_managed_node_groups = { - default = { - create_launch_template = false - launch_template_name = "" - } - } -``` - -2. AWS EKS Managed Node Group also offers native, default support for Bottlerocket OS by simply specifying the AMI type: - -```hcl - eks_managed_node_groups = { - bottlerocket_default = { - create_launch_template = false - launch_template_name = "" - - ami_type = "BOTTLEROCKET_x86_64" - platform = "bottlerocket" - } - } -``` - -3. AWS EKS Managed Node Groups allow you to extend configurations by providing your own launch template and user data that is merged with what the service provides. 
For example, to provide additional user data before the nodes are bootstrapped as well as supply additional arguments to the bootstrap script: - -```hcl - eks_managed_node_groups = { - extend_config = { - # This is supplied to the AWS EKS Optimized AMI - # bootstrap script https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh - bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=20'" - - # This user data will be injected prior to the user data provided by the - # AWS EKS Managed Node Group service (contains the actual bootstrap configuration) - pre_bootstrap_user_data = <<-EOT - export CONTAINER_RUNTIME="containerd" - export USE_MAX_PODS=false - EOT - } - } -``` - -4. The same configuration extension is offered when utilizing Bottlerocket OS AMIs, but the user data is slightly different. Bottlerocket OS uses a TOML user data file and you can provide additional configuration settings via the `bootstrap_extra_args` variable which gets merged into what is provided by the AWS EKS Managed Node Group service: - -```hcl - eks_managed_node_groups = { - bottlerocket_extend_config = { - ami_type = "BOTTLEROCKET_x86_64" - platform = "bottlerocket" - - # this will get added to what AWS provides - bootstrap_extra_args = <<-EOT - # extra args added - [settings.kernel] - lockdown = "integrity" - EOT - } - } -``` - -5. Users can also utilize a custom AMI, but doing so means that AWS EKS Managed Node Group will NOT inject the necessary bootstrap script and configurations into the user data supplied to the launch template. 
When using a custom AMI, users must also opt in to bootstrapping the nodes via user data and either use the module default user data template or provide their own user data template file: - -```hcl - eks_managed_node_groups = { - custom_ami = { - ami_id = "ami-0caf35bc73450c396" - - # By default, EKS managed node groups will not append bootstrap script; - # this adds it back in using the default template provided by the module - # Note: this assumes the AMI provided is an EKS optimized AMI derivative - enable_bootstrap_user_data = true - - bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=20'" - - pre_bootstrap_user_data = <<-EOT - export CONTAINER_RUNTIME="containerd" - export USE_MAX_PODS=false - EOT - - # Because we have full control over the user data supplied, we can also run additional - # scripts/configuration changes after the bootstrap script has been run - post_bootstrap_user_data = <<-EOT - echo "you are free little kubelet!" - EOT - } - } -``` - -6. Similar support exists for Bottlerocket: - -```hcl - eks_managed_node_groups = { - bottlerocket_custom_ami = { - ami_id = "ami-0ff61e0bcfc81dc94" - platform = "bottlerocket" - - # use module user data template to bootstrap - enable_bootstrap_user_data = true - # this will get added to the template - bootstrap_extra_args = <<-EOT - # extra args added - [settings.kernel] - lockdown = "integrity" - - [settings.kubernetes.node-labels] - "label1" = "foo" - "label2" = "bar" - - [settings.kubernetes.node-taints] - "dedicated" = "experimental:PreferNoSchedule" - "special" = "true:NoSchedule" - EOT - } - } -``` - -See the [`examples/eks_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group) for a working example of these configurations. - -### Self Managed Node Groups - -ℹ️ Only the pertinent attributes are shown for brevity - -1. 
By default, the `self-managed-node-group` sub-module will use the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version: - -```hcl - cluster_version = "1.21" - - # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.21 - self_managed_node_groups = { - default = {} - } -``` - -2. To use Bottlerocket, specify the `platform` as `bottlerocket` and supply the Bottlerocket AMI. The module provided user data for Bottlerocket will be used to bootstrap the nodes created: - -```hcl - cluster_version = "1.21" - - self_managed_node_groups = { - bottlerocket = { - platform = "bottlerocket" - ami_id = data.aws_ami.bottlerocket_ami.id - } - } -``` - -### Fargate Profiles - -Fargate profiles are rather straightforward. Simply supply the necessary information for the desired profile(s). See the [`examples/fargate_profile/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile) for a working example of the various configurations. - -### Mixed Node Groups - -ℹ️ Only the pertinent attributes are shown for brevity - -Users are free to mix and match the different node group types that meet their needs. For example, the following are just a few of the possible combinations: -- AWS EKS Cluster with one or more AWS EKS Managed Node Groups -- AWS EKS Cluster with one or more Self Managed Node Groups -- AWS EKS Cluster with one or more Fargate profiles -- AWS EKS Cluster with one or more AWS EKS Managed Node Groups, one or more Self Managed Node Groups, one or more Fargate profiles - -It is also possible to configure the various node groups of each family differently. Node groups may also be defined outside of the root `eks` module definition by using the provided sub-modules. There are no restrictions on the various possibilities provided by the module. 
- -```hcl - self_managed_node_group_defaults = { - vpc_security_group_ids = [aws_security_group.additional.id] - iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"] - } - - self_managed_node_groups = { - one = { - name = "spot-1" - - public_ip = true - max_size = 5 - desired_size = 2 - - use_mixed_instances_policy = true - mixed_instances_policy = { - instances_distribution = { - on_demand_base_capacity = 0 - on_demand_percentage_above_base_capacity = 10 - spot_allocation_strategy = "capacity-optimized" - } - - override = [ - { - instance_type = "m5.large" - weighted_capacity = "1" - }, - { - instance_type = "m6i.large" - weighted_capacity = "2" - }, - ] - } - - pre_bootstrap_user_data = <<-EOT - echo "foo" - export FOO=bar - EOT - - bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'" - - post_bootstrap_user_data = <<-EOT - cd /tmp - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm - sudo systemctl enable amazon-ssm-agent - sudo systemctl start amazon-ssm-agent - EOT - } - } - - # EKS Managed Node Group(s) - eks_managed_node_group_defaults = { - ami_type = "AL2_x86_64" - disk_size = 50 - instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"] - vpc_security_group_ids = [aws_security_group.additional.id] - } - - eks_managed_node_groups = { - blue = {} - green = { - min_size = 1 - max_size = 10 - desired_size = 1 - - instance_types = ["t3.large"] - capacity_type = "SPOT" - labels = { - Environment = "test" - GithubRepo = "terraform-aws-eks" - GithubOrg = "terraform-aws-modules" - } - - taints = { - dedicated = { - key = "dedicated" - value = "gpuGroup" - effect = "NO_SCHEDULE" - } - } - - update_config = { - max_unavailable_percentage = 50 # or set `max_unavailable` - } - - tags = { - ExtraTag = "example" - } - } - } - - # Fargate Profile(s) - fargate_profiles = { - default = { - name = "default" - selectors 
= [ - { - namespace = "kube-system" - labels = { - k8s-app = "kube-dns" - } - }, - { - namespace = "default" - } - ] - - tags = { - Owner = "test" - } - - timeouts = { - create = "20m" - delete = "20m" - } - } - } -``` - -See the [`examples/complete/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/complete) for a working example of these configurations. - -### Default configurations - -Each node group type (EKS managed node group, self managed node group, or Fargate profile) provides a default configuration setting that allows users to provide their own default configuration instead of the module's default configuration. This allows users to set a common set of defaults for their node groups and still maintain the ability to override these settings within the specific node group definition. The order of precedence for each node group type roughly follows (from highest to least precedence): -- Node group individual configuration - - Node group family default configuration - - Module default configuration - -These are provided via the following variables for the respective node group family: -- `eks_managed_node_group_defaults` -- `self_managed_node_group_defaults` -- `fargate_profile_defaults` - -For example, the following creates 4 AWS EKS Managed Node Groups: - -```hcl - eks_managed_node_group_defaults = { - ami_type = "AL2_x86_64" - disk_size = 50 - instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"] - } - - eks_managed_node_groups = { - # Uses defaults provided by module with the default settings above overriding the module defaults - default = {} - - # This further overrides the instance types used - compute = { - instance_types = ["c5.large", "c6i.large", "c6d.large"] - } - - # This further overrides the instance types and disk size used - persistent = { - disk_size = 1024 - instance_types = ["r5.xlarge", "r6i.xlarge", "r5b.xlarge"] - } - - # This overrides the OS used - bottlerocket = { - ami_type = 
"BOTTLEROCKET_x86_64" - platform = "bottlerocket" - } - } -``` - -## Module Design Considerations - -### General Notes - -While the module is designed to be flexible and support as many use cases and configurations as possible, there is a limit to what first class support can be provided without over-burdening the complexity of the module. Below is a list of general notes on the design intent captured by this module which hopefully explains some of the decisions that are, or will be, made in terms of what is added/supported natively by the module: - -- Despite the addition of Windows Subsystem for Linux (WSL for short), containerization technology is very much a suite of Linux constructs and therefore Linux is the primary OS supported by this module. In addition, due to the first class support provided by AWS, Bottlerocket OS and Fargate Profiles are also very much fully supported by this module. This module does not make any attempt to NOT support Windows, as in preventing the usage of Windows based nodes; however, it is up to users to put in additional effort in order to operate Windows based nodes when using the module. Users can refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html) for further details. What this means is: - - AWS EKS Managed Node Groups default to `linux` as the `platform`, but `bottlerocket` is also supported by AWS (`windows` is not supported by AWS EKS Managed Node groups) - - AWS Self Managed Node Groups also default to `linux` and the default AMI used is the latest AMI for the selected Kubernetes version. If you wish to use a different OS or AMI then you will need to opt in to the necessary configurations to ensure the correct AMI is used in conjunction with the necessary user data to ensure the nodes are launched and joined to your cluster successfully. -- AWS EKS Managed Node groups are currently the preferred route over Self Managed Node Groups for compute nodes. 
Both operate very similarly - both are backed by autoscaling groups and launch templates deployed and visible within your account. However, AWS EKS Managed Node groups provide a better user experience and offer a more "managed service" experience and therefore has precedence over Self Managed Node Groups. That said, there are currently inherent limitations as AWS continues to rollout additional feature support similar to the level of customization you can achieve with Self Managed Node Groups. When requesting added feature support for AWS EKS Managed Node groups, please ensure you have verified that the feature(s) are 1) supported by AWS and 2) supported by the Terraform AWS provider before submitting a feature request. -- Due to the plethora of tooling and different manners of configuring your cluster, cluster configuration is intentionally left out of the module in order to simplify the module for a broader user base. Previous module versions provided support for managing the aws-auth configmap via the Kubernetes Terraform provider using the now deprecated aws-iam-authenticator; these are no longer included in the module. This module strictly focuses on the infrastructure resources to provision an EKS cluster as well as any supporting AWS resources. How the internals of the cluster are configured and managed is up to users and is outside the scope of this module. There is an output attribute, `aws_auth_configmap_yaml`, that has been provided that can be useful to help bridge this transition. Please see the various examples provided where this attribute is used to ensure that self managed node groups or external node groups have their IAM roles appropriately mapped to the aws-auth configmap. How users elect to manage the aws-auth configmap is left up to their choosing. - -### User Data & Bootstrapping - -There are a multitude of different possible configurations for how module users require their user data to be configured. 
In order to better support the various combinations from simple, out of the box support provided by the module to full customization of the user data using a template provided by users - the user data has been abstracted out to its own module. Users can see the various methods of using and providing user data through the [user data examples](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data) as well as more detailed information on the design and possible configurations via the [user data module itself](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data). - -In general (tl;dr): -- AWS EKS Managed Node Groups - - `linux` platform (default) -> user data is pre-pended to the AWS provided bootstrap user data (bash/shell script) when using the AWS EKS provided AMI, otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template or provide their own user data template to bootstrap nodes to join the cluster - - `bottlerocket` platform -> user data is merged with the AWS provided bootstrap user data (TOML file) when using the AWS EKS provided AMI, otherwise users need to opt in via `enable_bootstrap_user_data` and use the module provided user data template or provide their own user data template to bootstrap nodes to join the cluster -- Self Managed Node Groups - - `linux` platform (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template - - `bottlerocket` platform -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template - - `windows` platform -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template - -Module provided default templates can be found under the [templates 
directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/templates) - -### Security Groups - -- Cluster Security Group - - This module by default creates a cluster security group ("additional" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This "additional" security group allows users to customize inbound and outbound rules via the module as they see fit - - The default inbound/outbound rules provided by the module are derived from the [AWS minimum recommendations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in addition to NTP and HTTPS public internet egress rules (without these rules, the traffic shows up in VPC flow logs as rejects; the rules allow clock sync and downloading necessary packages/updates) - - The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster) - - Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired - - The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error - - Users also have the option to supply additional, externally created security groups to the cluster as well via the `cluster_additional_security_group_ids` variable - - Lastly, users are able to opt in to attaching the primary security group automatically created by the EKS service by setting `attach_cluster_primary_security_group` = `true` from the root module for the respective node group (or set it within the node group defaults). 
This security group is not managed by the module; it is created by the EKS service. It permits all traffic within the domain of the security group as well as all egress traffic to the internet. - -- Node Group Security Group(s) - - Each node group (EKS Managed Node Group and Self Managed Node Group) by default creates its own security group. By default, this security group does not contain any additional security group rules. It is merely an "empty container" that offers users the ability to opt into any addition inbound our outbound rules as necessary - - Users also have the option to supply their own, and/or additional, externally created security group(s) to the node group as well via the `vpc_security_group_ids` variable - -See the example snippet below which adds additional security group rules to the cluster security group as well as the shared node security group (for node-to-node access). Users can use this extensibility to open up network access as they see fit using the security groups provided by the module: - -```hcl - ... - # Extend cluster security group rules - cluster_security_group_additional_rules = { - egress_nodes_ephemeral_ports_tcp = { - description = "To node 1025-65535" - protocol = "tcp" - from_port = 1025 - to_port = 65535 - type = "egress" - source_node_security_group = true - } - } - - # Extend node-to-node security group rules - node_security_group_additional_rules = { - ingress_self_all = { - description = "Node to node all ports/protocols" - protocol = "-1" - from_port = 0 - to_port = 0 - type = "ingress" - self = true - } - egress_all = { - description = "Node all egress" - protocol = "-1" - from_port = 0 - to_port = 0 - type = "egress" - cidr_blocks = ["0.0.0.0/0"] - ipv6_cidr_blocks = ["::/0"] - } - } - ... -``` -The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules: - -

- Security Groups -

- -## Notes - -- Setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/irsa_autoscale_refresh) example provided. - -
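As a sketch of the pairing recommended above (the `instance_refresh` attribute shape follows the self managed node group inputs; the strategy and percentage values are illustrative assumptions):

```hcl
  self_managed_node_groups = {
    default = {
      # Rolls instances when the launch template changes; pair with
      # aws-node-termination-handler so nodes are drained before termination
      instance_refresh = {
        strategy = "Rolling"
        preferences = {
          min_healthy_percentage = 66
        }
      }
    }
  }
```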
Frequently Asked Questions
- -

Why are nodes not being registered?

- -Often an issue caused by one of two reasons: -1. Networking or endpoint mis-configuration. -2. Permissions (IAM/RBAC) - -At least one of the cluster public or private endpoints must be enabled to access the cluster to work. If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via setting `cluster_endpoint_public_access_cidrs` is recommended. More info regarding communication with an endpoint is available [here](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html). - -Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access: - -- Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules -- Nodes in public subnets: ensure that nodes are launched with public IPs is enabled (either through the module here or your subnet setting defaults) - -Important: If you apply only the public endpoint and configure the `cluster_endpoint_public_access_cidrs` to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, then your nodes will fail to work correctly. - -Cluster private endpoint can also be enabled by setting `cluster_endpoint_private_access = true` on this module. Node communication to the endpoint stays within the VPC. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled. - -Nodes need to be able to connect to other AWS services plus pull down container images from container registries (ECR). If for some reason you cannot enable public internet access for nodes you can add VPC endpoints to the relevant services: EC2 API, ECR API, ECR DKR and S3. - -
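For clusters without public internet access for nodes, a minimal sketch of the VPC endpoints involved (region, variable names, and subnet/route table references are assumptions for illustration):

```hcl
locals {
  region = "eu-west-1"
}

# Interface endpoints so nodes can reach the EC2 API and pull images from ECR
resource "aws_vpc_endpoint" "interfaces" {
  for_each = toset(["ec2", "ecr.api", "ecr.dkr"])

  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${local.region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  private_dns_enabled = true
}

# Gateway endpoint for S3 (ECR image layers are served from S3)
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.${local.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = var.private_route_table_ids
}
```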

How can I work with the cluster if I disable the public endpoint?

- -You have to interact with the cluster from within the VPC that it is associated with; either through a VPN connection, a bastion EC2 instance, etc. - -

How can I stop Terraform from removing the EKS tags from my VPC and subnets?

- -You need to add the tags to the Terraform definition of the VPC and subnets yourself. See the [basic example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/basic). - -An alternative is to use the aws provider's [`ignore_tags` variable](https://www.terraform.io/docs/providers/aws/#ignore_tags-configuration-block). However this can also cause terraform to display a perpetual difference. - -
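A minimal sketch of the `ignore_tags` approach (the key prefix shown is the cluster-scoped prefix EKS applies; adjust to your tagging scheme):

```hcl
provider "aws" {
  region = "eu-west-1"

  # Ignore tags applied by the EKS service so Terraform does not plan to remove them
  ignore_tags {
    key_prefixes = ["kubernetes.io/cluster/"]
  }
}
```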

Why are there no changes when a node group's desired count is modified?

- -The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. The setting is ignored to allow the cluster autoscaler to work correctly so that `terraform apply` does not accidentally remove running workers. You can change the desired count via the CLI or console if you're not using the cluster autoscaler. - -If you are not using autoscaling and want to control the number of nodes via terraform, set the `min_size` and `max_size` for node groups. Before changing those values, you must satisfy AWS `desired_size` constraints (which must be between new min/max values). - -
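For example, a sketch of scaling a node group via `min_size`/`max_size` rather than `desired_size` (values are illustrative):

```hcl
  eks_managed_node_groups = {
    default = {
      # desired_size is only honored at creation; later changes are ignored by Terraform
      desired_size = 2
      min_size     = 2
      max_size     = 5
    }
  }
```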

Why are nodes not recreated when the `launch_template` is recreated?

- -By default the ASG for a self-managed node group is not configured to be recreated when the launch configuration or template changes; you will need to use a process to drain and cycle the nodes. - -If you are NOT using the cluster autoscaler: - -- Add a new instance -- Drain an old node `kubectl drain --force --ignore-daemonsets --delete-local-data ip-xxxxxxx.eu-west-1.compute.internal` -- Wait for pods to be Running -- Terminate the old node instance. ASG will start a new instance -- Repeat the drain and delete process until all old nodes are replaced - -If you are using the cluster autoscaler: - -- Drain an old node `kubectl drain --force --ignore-daemonsets --delete-local-data ip-xxxxxxx.eu-west-1.compute.internal` -- Wait for pods to be Running -- Cluster autoscaler will create new nodes when required -- Repeat until all old nodes are drained -- Cluster autoscaler will terminate the old nodes after 10-60 minutes automatically - -You can also use a third-party tool like Gruntwork's kubergrunt. See the [`eks deploy`](https://github.com/gruntwork-io/kubergrunt#deploy) subcommand. - -Alternatively, use a managed node group instead. -

How can I use Windows workers?

- -To enable Windows support for your EKS cluster, you should apply some configuration manually. See the [Enabling Windows Support (Windows/MacOS/Linux)](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html#enable-windows-support). - -Windows based nodes require an additional cluster role (`eks:kube-proxy-windows`). - -
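Once Windows support is enabled, a self managed node group definition might look like the following sketch (the AMI data source is a hypothetical lookup; use an EKS optimized Windows AMI matching your cluster version):

```hcl
  self_managed_node_groups = {
    windows = {
      platform = "windows"
      ami_id   = data.aws_ami.eks_windows.id # hypothetical data source
    }
  }
```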

Worker nodes with labels do not join a 1.16+ cluster

- -As of Kubernetes 1.16, kubelet restricts which labels with names in the `kubernetes.io` namespace can be applied to nodes. Labels such as `kubernetes.io/lifecycle=spot` are no longer allowed; instead use `node.kubernetes.io/lifecycle=spot` - -See your Kubernetes version's documentation for the `--node-labels` kubelet flag for the allowed prefixes. [Documentation for 1.16](https://v1-16.docs.kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) - -
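For example, a sketch of passing an allowed label through the kubelet flags on a self managed node group (the group name is illustrative):

```hcl
  self_managed_node_groups = {
    spot = {
      # The node.kubernetes.io/ prefix is permitted; kubernetes.io/lifecycle is not
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
    }
  }
```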
- ## Examples - [Complete](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/complete): EKS Cluster using all available node group types in various combinations demonstrating many of the supported features and configurations @@ -797,8 +168,10 @@ See your Kubernetes version's documentation for the `--node-labels` kubelet fla ## Contributing -Report issues/questions/feature requests via [issues](https://github.com/terraform-aws-modules/terraform-aws-eks/issues/new) -Full contributing [guidelines are covered here](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/.github/CONTRIBUTING.md) +We are grateful to the community for contributing bugfixes and improvements! Please see below to learn how you can take part. + +- [Code of Conduct](https://github.com/terraform-aws-modules/.github/blob/master/CODE_OF_CONDUCT.md) +- [Contributing Guide](https://github.com/terraform-aws-modules/.github/blob/master/CONTRIBUTING.md) ## Requirements diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000..ccbb3f8 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,12 @@ +# Documentation + +## Table of Contents + +- [Frequently Asked Questions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md) +- [Compute Resources](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md) +- [IRSA Integration](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/irsa-integration.md) +- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md) +- [Network Connectivity](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md) +- Upgrade Guides + - [Upgrade to v17.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-17.0.md) + - [Upgrade to 
v18.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-18.0.md)
diff --git a/.github/UPGRADE-17.0.md b/docs/UPGRADE-17.0.md
similarity index 100%
rename from .github/UPGRADE-17.0.md
rename to docs/UPGRADE-17.0.md
diff --git a/UPGRADE-18.0.md b/docs/UPGRADE-18.0.md
similarity index 98%
rename from UPGRADE-18.0.md
rename to docs/UPGRADE-18.0.md
index 2d00e6f..29ae3eb 100644
--- a/UPGRADE-18.0.md
+++ b/docs/UPGRADE-18.0.md
@@ -2,6 +2,8 @@
 Please consult the `examples` directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.
 
+Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744, where users have shared the steps/information for their individual configurations. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for everyone, and this issue has been very helpful for users to share how they were able to upgrade.
+
 ## List of backwards incompatible changes
 
 - Launch configuration support has been removed and only launch template is supported going forward. AWS is no longer adding new features back into launch configuration and their docs state [`We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. 
We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.`](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html)
diff --git a/docs/compute_resources.md b/docs/compute_resources.md
new file mode 100644
index 0000000..556a2fc
--- /dev/null
+++ b/docs/compute_resources.md
@@ -0,0 +1,209 @@
+# Compute Resources
+
+## Table of Contents
+
+- [EKS Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#eks-managed-node-groups)
+- [Self Managed Node Groups](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#self-managed-node-groups)
+- [Fargate Profiles](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#fargate-profiles)
+- [Default Configurations](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md#default-configurations)
+
+ℹ️ Only the pertinent attributes are shown below for brevity
+
+### EKS Managed Node Groups
+
+Refer to the [EKS Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for service-related details.
+
+1. The module creates a custom launch template by default to ensure settings such as tags are propagated to instances. To use the default template provided by the AWS EKS managed node group service, disable the launch template creation and set `launch_template_name` to an empty string:
+
+```hcl
+  eks_managed_node_groups = {
+    default = {
+      create_launch_template = false
+      launch_template_name   = ""
+    }
+  }
+```
+
+2. Native support for Bottlerocket OS is provided by specifying the respective AMI type:
+
+```hcl
+  eks_managed_node_groups = {
+    bottlerocket_default = {
+      create_launch_template = false
+      launch_template_name   = ""
+
+      ami_type = "BOTTLEROCKET_x86_64"
+      platform = "bottlerocket"
+    }
+  }
+```
+
+3. 
Users can extend the user data provided by the AWS EKS Managed Node Group service; the supplied user data is pre-pended to the service-provided user data:
+
+```hcl
+  eks_managed_node_groups = {
+    prepend_userdata = {
+      # See issue https://github.com/awslabs/amazon-eks-ami/issues/844
+      pre_bootstrap_user_data = <<-EOT
+      #!/bin/bash
+      set -ex
+      cat <<-EOF > /etc/profile.d/bootstrap.sh
+      export CONTAINER_RUNTIME="containerd"
+      export USE_MAX_PODS=false
+      export KUBELET_EXTRA_ARGS="--max-pods=110"
+      EOF
+      # Source extra environment variables in bootstrap script
+      sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh
+      EOT
+    }
+  }
+```
+
+4. Bottlerocket OS is supported in a similar manner. However, note that the user data for Bottlerocket OS uses the TOML format:
+
+```hcl
+  eks_managed_node_groups = {
+    bottlerocket_prepend_userdata = {
+      ami_type = "BOTTLEROCKET_x86_64"
+      platform = "bottlerocket"
+
+      bootstrap_extra_args = <<-EOT
+      # extra args added
+      [settings.kernel]
+      lockdown = "integrity"
+      EOT
+    }
+  }
+```
+
+5. When using a custom AMI, the AWS EKS Managed Node Group service will NOT inject the necessary bootstrap script into the supplied user data. 
Users can elect to provide their own user data to bootstrap nodes and join the cluster, or opt in to using the module-provided user data:
+
+```hcl
+  eks_managed_node_groups = {
+    custom_ami = {
+      ami_id = "ami-0caf35bc73450c396"
+
+      # By default, EKS managed node groups will not append bootstrap script;
+      # this adds it back in using the default template provided by the module
+      # Note: this assumes the AMI provided is an EKS optimized AMI derivative
+      enable_bootstrap_user_data = true
+
+      bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=20'"
+
+      pre_bootstrap_user_data = <<-EOT
+      export CONTAINER_RUNTIME="containerd"
+      export USE_MAX_PODS=false
+      EOT
+
+      # Because we have full control over the user data supplied, we can also run additional
+      # scripts/configuration changes after the bootstrap script has been run
+      post_bootstrap_user_data = <<-EOT
+      echo "you are free little kubelet!"
+      EOT
+    }
+  }
+```
+
+6. There is similar support for Bottlerocket OS:
+
+```hcl
+  eks_managed_node_groups = {
+    bottlerocket_custom_ami = {
+      ami_id   = "ami-0ff61e0bcfc81dc94"
+      platform = "bottlerocket"
+
+      # use module user data template to bootstrap
+      enable_bootstrap_user_data = true
+      # this will get added to the template
+      bootstrap_extra_args = <<-EOT
+      # extra args added
+      [settings.kernel]
+      lockdown = "integrity"
+
+      [settings.kubernetes.node-labels]
+      "label1" = "foo"
+      "label2" = "bar"
+
+      [settings.kubernetes.node-taints]
+      "dedicated" = "experimental:PreferNoSchedule"
+      "special" = "true:NoSchedule"
+      EOT
+    }
+  }
```
+
+See the [`examples/eks_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group) for a working example of various configurations.
+
+### Self Managed Node Groups
+
+Refer to the [Self Managed Node Group documentation](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) for service-related details.
+
+1. 
The `self-managed-node-group` uses the latest AWS EKS Optimized AMI (Linux) for the given Kubernetes version by default: + +```hcl + cluster_version = "1.21" + + # This self managed node group will use the latest AWS EKS Optimized AMI for Kubernetes 1.21 + self_managed_node_groups = { + default = {} + } +``` + +2. To use Bottlerocket, specify the `platform` as `bottlerocket` and supply a Bottlerocket OS AMI: + +```hcl + cluster_version = "1.21" + + self_managed_node_groups = { + bottlerocket = { + platform = "bottlerocket" + ami_id = data.aws_ami.bottlerocket_ami.id + } + } +``` + +See the [`examples/self_managed_node_group/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self_managed_node_group) for a working example of various configurations. + +### Fargate Profiles + +Fargate profiles are straightforward to use and therefore no further details are provided here. See the [`examples/fargate_profile/` example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile) for a working example of various configurations. + +### Default Configurations + +Each type of compute resource (EKS managed node group, self managed node group, or Fargate profile) provides the option for users to specify a default configuration. These default configurations can be overridden from within the compute resource's individual definition. 
The order of precedence for configurations (from highest to lowest):
+
+- Compute resource individual configuration
+- Compute resource family default configuration (`eks_managed_node_group_defaults`, `self_managed_node_group_defaults`, `fargate_profile_defaults`)
+- Module default configuration (see `variables.tf` and `node_groups.tf`)
+
+For example, the following creates 4 AWS EKS Managed Node Groups:
+
+```hcl
+  eks_managed_node_group_defaults = {
+    ami_type       = "AL2_x86_64"
+    disk_size      = 50
+    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
+  }
+
+  eks_managed_node_groups = {
+    # Uses module default configurations overridden by configuration above
+    default = {}
+
+    # This further overrides the instance types used
+    compute = {
+      instance_types = ["c5.large", "c6i.large", "c6d.large"]
+    }
+
+    # This further overrides the instance types and disk size used
+    persistent = {
+      disk_size      = 1024
+      instance_types = ["r5.xlarge", "r6i.xlarge", "r5b.xlarge"]
+    }
+
+    # This overrides the OS used
+    bottlerocket = {
+      ami_type = "BOTTLEROCKET_x86_64"
+      platform = "bottlerocket"
+    }
+  }
+```
diff --git a/docs/faq.md b/docs/faq.md
new file mode 100644
index 0000000..d805a5b
--- /dev/null
+++ b/docs/faq.md
@@ -0,0 +1,110 @@
+# Frequently Asked Questions
+
+- [How do I manage the `aws-auth` configmap?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-manage-the-aws-auth-configmap)
+- [I received an error: `Error: Invalid for_each argument ...`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#i-received-an-error-error-invalid-for_each-argument-)
+- [Why are nodes not being registered?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-nodes-not-being-registered)
+- [Why are there no changes when a node group's `desired_size` is 
modified?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#why-are-there-no-changes-when-a-node-groups-desired_size-is-modified) +- [How can I deploy Windows based nodes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-can-i-deploy-windows-based-nodes) +- [How do I access compute resource attributes?](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md#how-do-i-access-compute-resource-attributes) + +### How do I manage the `aws-auth` configmap? + +TL;DR - https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1901 + +- Users can roll their own equivalent of `kubectl patch ...` using the [`null_resource`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/9a99689cc13147f4afc426b34ba009875a28614e/examples/complete/main.tf#L301-L336) +- There is a module that was created to fill this gap that provides a Kubernetes based approach to provision: https://github.com/aidanmelen/terraform-aws-eks-auth +- Ideally, one of the following issues are resolved upstream for a more native experience for users: + - https://github.com/aws/containers-roadmap/issues/185 + - https://github.com/hashicorp/terraform-provider-kubernetes/issues/723 + +### I received an error: `Error: Invalid for_each argument ...` + +Users may encounter an error such as `Error: Invalid for_each argument - The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply ...` + +This error is due to an upstream issue with [Terraform core](https://github.com/hashicorp/terraform/issues/4149). There are two potential options you can take to help mitigate this issue: + +1. 
Create the dependent resources before the cluster => `terraform apply -target` and then `terraform apply` for the cluster (or other similar means to ensure the referenced resources exist before creating the cluster)
+
+- Note: this is the route users will have to take for adding additional security groups to nodes since there isn't a separate "security group attachment" resource
+
+2. For additional IAM policies, users can attach the policies outside of the cluster definition as demonstrated below:
+
+```hcl
+resource "aws_iam_role_policy_attachment" "additional" {
+  for_each = module.eks.eks_managed_node_groups
+  # you could also do the following or any combination:
+  # for_each = merge(
+  #   module.eks.eks_managed_node_groups,
+  #   module.eks.self_managed_node_groups,
+  #   module.eks.fargate_profiles,
+  # )
+
+  # This policy does not have to exist at the time of cluster creation. Terraform can
+  # deduce the proper order of its creation to avoid errors during creation
+  policy_arn = aws_iam_policy.node_additional.arn
+  role       = each.value.iam_role_name
+}
+```
+
+TL;DR - Terraform resources passed into the module's map definitions _must_ be known before you can apply the EKS module. The variables this potentially affects are:
+
+- `cluster_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
+- `node_security_group_additional_rules` (i.e. - referencing an external security group resource in a rule)
+- `iam_role_additional_policies` (i.e. - referencing an external policy resource)
+
+Note: setting `instance_refresh_enabled = true` will recreate your worker nodes without draining them first. It is recommended to install [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler) for proper node draining. See the [instance_refresh](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/irsa_autoscale_refresh) example provided.
+
+### Why are nodes not being registered? 
+ +Nodes not being able to register with the EKS control plane is generally due to networking mis-configurations. + +1. At least one of the cluster endpoints (public or private) must be enabled. + +If you require a public endpoint, setting up both (public and private) and restricting the public endpoint via setting `cluster_endpoint_public_access_cidrs` is recommended. More info regarding communication with an endpoint is available [here](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html). + +2. Nodes need to be able to contact the EKS cluster endpoint. By default, the module only creates a public endpoint. To access the endpoint, the nodes need outgoing internet access: + +- Nodes in private subnets: via a NAT gateway or instance along with the appropriate routing rules +- Nodes in public subnets: ensure that nodes are launched with public IPs (enable through either the module here or your subnet setting defaults) + +**Important: If you apply only the public endpoint and configure the `cluster_endpoint_public_access_cidrs` to restrict access, know that EKS nodes will also use the public endpoint and you must allow access to the endpoint. If not, then your nodes will fail to work correctly.** + +3. The private endpoint can also be enabled by setting `cluster_endpoint_private_access = true`. Ensure that VPC DNS resolution and hostnames are also enabled for your VPC when the private endpoint is enabled. + +4. Nodes need to be able to connect to other AWS services to function (download container images, make API calls to assume roles, etc.). If for some reason you cannot enable public internet access for nodes you can add VPC endpoints to the relevant services: EC2 API, ECR API, ECR DKR and S3. + +### Why are there no changes when a node group's `desired_size` is modified? + +The module is configured to ignore this value. Unfortunately, Terraform does not support variables within the `lifecycle` block. 
The setting is ignored to allow autoscaling via controllers such as cluster autoscaler or Karpenter to work properly, without interference from Terraform. Changing the desired count must be handled outside of Terraform once the node group is created.
+
+### How can I deploy Windows based nodes?
+
+To enable Windows support for your EKS cluster, you will need to apply some configuration manually. See the [Enabling Windows Support (Windows/MacOS/Linux) guide](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html#enable-windows-support).
+
+In addition, Windows based nodes require an additional cluster RBAC role (`eks:kube-proxy-windows`).
+
+Note: Windows based node support is limited to the default user data template provided by the module, due to the manual steps required to provision Windows based EKS nodes.
+
+### How do I access compute resource attributes?
+
+Examples of accessing the attributes of the compute resource(s) created by the root module are shown below. Note - the assumption is that your cluster module definition is named `eks` as in `module "eks" { ... }`:
+
+- EKS Managed Node Group attributes
+
+```hcl
+eks_managed_role_arns = [for group in module.eks.eks_managed_node_groups : group.iam_role_arn]
+```
+
+- Self Managed Node Group attributes
+
+```hcl
+self_managed_role_arns = [for group in module.eks.self_managed_node_groups : group.iam_role_arn]
+```
+
+- Fargate Profile attributes
+
+```hcl
+fargate_profile_pod_execution_role_arns = [for group in module.eks.fargate_profiles : group.fargate_profile_pod_execution_role_arn]
+```
diff --git a/docs/irsa_integration.md b/docs/irsa_integration.md
new file mode 100644
index 0000000..93293e7
--- /dev/null
+++ b/docs/irsa_integration.md
@@ -0,0 +1,84 @@
+# IRSA Integration
+
+An [IAM role for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) module has been created to work in conjunction with this module. 
The [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks) module has a set of pre-defined IAM policies for common addons. Check [`policy.tf`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/modules/iam-role-for-service-accounts-eks/policies.tf) for a list of the policies currently supported. One example of this integration is shown below, and more can be found in the [`iam-role-for-service-accounts`](https://github.com/terraform-aws-modules/terraform-aws-iam/blob/master/examples/iam-role-for-service-accounts-eks/main.tf) example directory: + +```hcl +module "eks" { + source = "terraform-aws-modules/eks/aws" + + cluster_name = "example" + cluster_version = "1.21" + + cluster_addons = { + vpc-cni = { + resolve_conflicts = "OVERWRITE" + service_account_role_arn = module.vpc_cni_irsa.iam_role_arn + } + } + + vpc_id = "vpc-1234556abcdef" + subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"] + + eks_managed_node_group_defaults = { + # We are using the IRSA created below for permissions + # However, we have to provision a new cluster with the policy attached FIRST + # before we can disable. 
Without this initial policy,
+    # the VPC CNI fails to assign IPs and nodes cannot join the new cluster
+    iam_role_attach_cni_policy = true
+  }
+
+  eks_managed_node_groups = {
+    default = {}
+  }
+
+  tags = {
+    Environment = "dev"
+    Terraform   = "true"
+  }
+}
+
+module "vpc_cni_irsa" {
+  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
+
+  role_name             = "vpc_cni"
+  attach_vpc_cni_policy = true
+  vpc_cni_enable_ipv4   = true
+
+  oidc_providers = {
+    main = {
+      provider_arn               = module.eks.oidc_provider_arn
+      namespace_service_accounts = ["kube-system:aws-node"]
+    }
+  }
+
+  tags = {
+    Environment = "dev"
+    Terraform   = "true"
+  }
+}
+
+module "karpenter_irsa" {
+  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
+
+  role_name                          = "karpenter_controller"
+  attach_karpenter_controller_policy = true
+
+  karpenter_controller_cluster_id = module.eks.cluster_id
+  karpenter_controller_node_iam_role_arns = [
+    module.eks.eks_managed_node_groups["default"].iam_role_arn
+  ]
+
+  oidc_providers = {
+    main = {
+      provider_arn               = module.eks.oidc_provider_arn
+      namespace_service_accounts = ["karpenter:karpenter"]
+    }
+  }
+
+  tags = {
+    Environment = "dev"
+    Terraform   = "true"
+  }
+}
+```
diff --git a/docs/network_connectivity.md b/docs/network_connectivity.md
new file mode 100644
index 0000000..67805aa
--- /dev/null
+++ b/docs/network_connectivity.md
@@ -0,0 +1,68 @@
+# Network Connectivity
+
+## Cluster Endpoint
+
+### Public Endpoint w/ Restricted CIDRs
+
+When restricting the cluster's public endpoint to only the CIDRs specified by users, it is recommended that you also enable the private endpoint, or ensure that the CIDR blocks that you specify include the addresses that nodes and Fargate pods (if you use them) access the public endpoint from. 
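A sketch of this recommended endpoint configuration (the CIDR shown is an illustrative placeholder):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ... other configuration ...

  # Private endpoint so node/Fargate traffic stays within the VPC
  cluster_endpoint_private_access = true

  # Public endpoint restricted to known CIDRs for operator access
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]
}
```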
+
+Please refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) for further information.
+
+## Security Groups
+
+- Cluster Security Group
+  - This module by default creates a cluster security group ("additional" security group when viewed from the console) in addition to the default security group created by the AWS EKS service. This "additional" security group allows users to customize inbound and outbound rules via the module as they see fit
+  - The default inbound/outbound rules provided by the module are derived from the [AWS minimum recommendations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in addition to NTP and HTTPS public internet egress rules (without them, these show up in VPC flow logs as rejects - they are used for clock sync and downloading necessary packages/updates)
+  - The minimum inbound/outbound rules are provided for cluster and node creation to succeed without errors, but users will most likely need to add the necessary port and protocol for node-to-node communication (this is user specific based on how nodes are configured to communicate across the cluster)
+  - Users have the ability to opt out of the security group creation and instead provide their own externally created security group if so desired
+    - The security group that is created is designed to handle the bare minimum communication necessary between the control plane and the nodes, as well as any external egress to allow the cluster to successfully launch without error
+  - Users also have the option to supply additional, externally created security groups to the cluster as well via the `cluster_additional_security_group_ids` variable
+  - Lastly, users are able to opt in to attaching the primary security group automatically created by the EKS service by setting `attach_cluster_primary_security_group` = `true` from the root module for the respective node group (or set it within the node group defaults). This security group is not managed by the module; it is created by the EKS service. It permits all traffic within the domain of the security group as well as all egress traffic to the internet.
+
+- Node Group Security Group(s)
+  - Each node group (EKS Managed Node Group and Self Managed Node Group) by default creates its own security group. By default, this security group does not contain any additional security group rules. It is merely an "empty container" that offers users the ability to opt into any additional inbound or outbound rules as necessary
+  - Users also have the option to supply their own, and/or additional, externally created security group(s) to the node group as well via the `vpc_security_group_ids` variable
+
+See the example snippet below which adds additional security group rules to the cluster security group as well as the shared node security group (for node-to-node access). Users can use this extensibility to open up network access as they see fit using the security groups provided by the module:
+
+```hcl
+  ...
+  # Extend cluster security group rules
+  cluster_security_group_additional_rules = {
+    egress_nodes_ephemeral_ports_tcp = {
+      description                = "To node 1025-65535"
+      protocol                   = "tcp"
+      from_port                  = 1025
+      to_port                    = 65535
+      type                       = "egress"
+      source_node_security_group = true
+    }
+  }
+
+  # Extend node-to-node security group rules
+  node_security_group_additional_rules = {
+    ingress_self_all = {
+      description = "Node to node all ports/protocols"
+      protocol    = "-1"
+      from_port   = 0
+      to_port     = 0
+      type        = "ingress"
+      self        = true
+    }
+    egress_all = {
+      description      = "Node all egress"
+      protocol         = "-1"
+      from_port        = 0
+      to_port          = 0
+      type             = "egress"
+      cidr_blocks      = ["0.0.0.0/0"]
+      ipv6_cidr_blocks = ["::/0"]
+    }
+  }
+  ...
+```
+The security groups created by this module are depicted in the image shown below along with their default inbound/outbound rules:
+

+ Security Groups +

diff --git a/docs/user_data.md b/docs/user_data.md
new file mode 100644
index 0000000..e5c247b
--- /dev/null
+++ b/docs/user_data.md
@@ -0,0 +1,97 @@
+# User Data & Bootstrapping
+
+Users can see the various methods of using and providing user data through the [user data examples](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data), as well as more detailed information on the design and possible configurations via the [user data module itself](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules/_user_data)
+
+## Summary
+
+- AWS EKS Managed Node Groups
+  - By default, any supplied user data is pre-pended to the user data supplied by the EKS Managed Node Group service
+  - If users supply an `ami_id`, the service no longer supplies user data to bootstrap nodes; users can enable `enable_bootstrap_user_data` and use the module provided user data template, or provide their own user data template
+  - `bottlerocket` platform user data must be in TOML format
+- Self Managed Node Groups
+  - `linux` platform (default) -> the user data template (bash/shell script) provided by the module is used as the default; users are able to provide their own user data template
+  - `bottlerocket` platform -> the user data template (TOML file) provided by the module is used as the default; users are able to provide their own user data template
+  - `windows` platform -> the user data template (powershell/PS1 script) provided by the module is used as the default; users are able to provide their own user data template
+
+The templates provided by the module can be found under the [templates directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/templates)
+
+## EKS Managed Node Group
+
+When using an EKS managed node group, users have 2 primary routes for interacting with the bootstrap user data:
+
+1. If a value for `ami_id` is not provided, users can supply additional user data that is pre-pended before the EKS Managed Node Group bootstrap user data. You can read more about this process in the [AWS supplied documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data)
+
+   - Users can use the following variables to facilitate this process:
+
+   ```hcl
+   pre_bootstrap_user_data = "..."
+   ```
+
+2. If a custom AMI is used, then per the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami), users will need to supply the necessary user data to bootstrap and register nodes with the cluster when launched. There are two routes to facilitate this bootstrapping process:
+   - If the AMI used is a derivative of the [AWS EKS Optimized AMI](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched:
+     - Users can use the following variables to facilitate this process:
+     ```hcl
+     enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template
+     pre_bootstrap_user_data    = "..."
+     bootstrap_extra_args       = "..."
+     post_bootstrap_user_data   = "..."
+     ```
+   - If the AMI is **NOT** an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data.
+ - Users can use the following variables to facilitate this process: + ```hcl + user_data_template_path = "./your/user_data.sh" # user supplied bootstrap user data template + pre_bootstrap_user_data = "..." + bootstrap_extra_args = "..." + post_bootstrap_user_data = "..." + ``` + +| ℹ️ When using bottlerocket as the desired platform, since the user data for bottlerocket is TOML, all configurations are merged in the one file supplied as user data. Therefore, `pre_bootstrap_user_data` and `post_bootstrap_user_data` are not valid since the bottlerocket OS handles when various settings are applied. If you wish to supply additional configuration settings when using bottlerocket, supply them via the `bootstrap_extra_args` variable. For the linux platform, `bootstrap_extra_args` are settings that will be supplied to the [AWS EKS Optimized AMI bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L14) such as kubelet extra args, etc. See the [bottlerocket GitHub repository documentation](https://github.com/bottlerocket-os/bottlerocket#description-of-settings) for more details on what settings can be supplied via the `bootstrap_extra_args` variable. 
| +| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + +#### ⚠️ Caveat + +Since the EKS Managed Node Group service provides the necessary bootstrap user data to nodes (unless an `ami_id` is provided), users do not have direct access to settings/variables provided by the EKS optimized AMI [`bootstrap.sh` script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh). Currently, users must employ work-arounds to influence the `bootstrap.sh` script. For example, to enable `containerd` on EKS Managed Node Groups, users can supply the following user data. 
You can learn more about this issue [here](https://github.com/awslabs/amazon-eks-ami/issues/844): + +```hcl + # See issue https://github.com/awslabs/amazon-eks-ami/issues/844 + pre_bootstrap_user_data = <<-EOT + #!/bin/bash + set -ex + cat <<-EOF > /etc/profile.d/bootstrap.sh + export CONTAINER_RUNTIME="containerd" + export USE_MAX_PODS=false + export KUBELET_EXTRA_ARGS="--max-pods=110" + EOF + # Source extra environment variables in bootstrap script + sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh + EOT +``` + +### Self Managed Node Group + +Self managed node groups require users to provide the necessary bootstrap user data. Users can elect to use the user data template provided by the module for their platform/OS or provide their own user data template for rendering by the module. + +- If the AMI used is a derivative of the [AWS EKS Optimized AMI ](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched: + - Users can use the following variables to facilitate this process: + ```hcl + enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template + pre_bootstrap_user_data = "..." + bootstrap_extra_args = "..." + post_bootstrap_user_data = "..." + ``` + - If the AMI is **NOT** an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data. 
+ - Users can use the following variables to facilitate this process: + ```hcl + user_data_template_path = "./your/user_data.sh" # user supplied bootstrap user data template + pre_bootstrap_user_data = "..." + bootstrap_extra_args = "..." + post_bootstrap_user_data = "..." + ``` + +### Logic Diagram + +The rough flow of logic that is encapsulated within the `_user_data` module can be represented by the following diagram to better highlight the various manners in which user data can be populated. + +

+ User Data +

diff --git a/examples/README.md b/examples/README.md new file mode 100644 index 0000000..f417c0a --- /dev/null +++ b/examples/README.md @@ -0,0 +1,8 @@ +# Examples + +Please note - the examples provided serve two primary means: + +1. Show users working examples of the various ways in which the module can be configured and features supported +2. A means of testing/validating module changes + +Please do not mistake the examples provided as "best practices". It is up to users to consult the AWS service documentation for best practices, usage recommendations, etc. diff --git a/modules/_user_data/README.md b/modules/_user_data/README.md index 51e7b92..87da77b 100644 --- a/modules/_user_data/README.md +++ b/modules/_user_data/README.md @@ -1,77 +1,8 @@ -# Internal User Data Module +# User Data Module -Configuration in this directory renders the appropriate user data for the given inputs. There are a number of different ways that user data can be utilized and this internal module is designed to aid in making that flexibility possible as well as providing a means for out of bands testing and validation. +Configuration in this directory renders the appropriate user data for the given inputs. See [`docs/user_data.md`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md) for more info. -See the [`examples/user_data/` directory](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data) for various examples of using the module. - -## Combinations - -At a high level, AWS EKS users have two methods for launching nodes within this EKS module (ignoring Fargate profiles): - -1. EKS managed node group -2. Self managed node group - -### EKS Managed Node Group - -When using an EKS managed node group, users have 2 primary routes for interacting with the bootstrap user data: - -1. 
If the EKS managed node group does **NOT** utilize a custom AMI, then users can elect to supply additional user data that is pre-pended before the EKS managed node group bootstrap user data. You can read more about this process from the [AWS supplied documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data) - - - Users can use the following variables to facilitate this process: - - ```hcl - pre_bootstrap_user_data = "..." - ``` - -2. If the EKS managed node group does utilize a custom AMI, then per the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami), users will need to supply the necessary bootstrap configuration via user data to ensure that the node is configured to register with the cluster when launched. There are two routes that users can utilize to facilitate this bootstrapping process: - - If the AMI used is a derivative of the [AWS EKS Optimized AMI ](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched, with the option to add additional pre and post bootstrap user data as well as bootstrap additional args that are supplied to the [AWS EKS bootstrap.sh script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) - - Users can use the following variables to facilitate this process: - ```hcl - enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template - pre_bootstrap_user_data = "..." - bootstrap_extra_args = "..." - post_bootstrap_user_data = "..." - ``` - - If the AMI is not an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node when launched, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. 
Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data. - - Users can use the following variables to facilitate this process: - ```hcl - user_data_template_path = "./your/user_data.sh" # user supplied bootstrap user data template - pre_bootstrap_user_data = "..." - bootstrap_extra_args = "..." - post_bootstrap_user_data = "..." - ``` - -| ℹ️ When using bottlerocket as the desired platform, since the user data for bottlerocket is TOML, all configurations are merged in the one file supplied as user data. Therefore, `pre_bootstrap_user_data` and `post_bootstrap_user_data` are not valid since the bottlerocket OS handles when various settings are applied. If you wish to supply additional configuration settings when using bottlerocket, supply them via the `bootstrap_extra_args` variable. For the linux platform, `bootstrap_extra_args` are settings that will be supplied to the [AWS EKS Optimized AMI bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L14) such as kubelet extra args, etc. See the [bottlerocket GitHub repository documentation](https://github.com/bottlerocket-os/bottlerocket#description-of-settings) for more details on what settings can be supplied via the `bootstrap_extra_args` variable. | -| :--- | - -### Self Managed Node Group - -When using a self managed node group, the options presented to users is very similar to the 2nd option listed above for EKS managed node groups. Since self managed node groups require users to provide the bootstrap user data, there is no concept of appending to user data that AWS provides; users can either elect to use the user data template provided for their platform/OS by the module or provide their own user data template for rendering by the module. 
- -- If the AMI used is a derivative of the [AWS EKS Optimized AMI ](https://github.com/awslabs/amazon-eks-ami), users can opt in to using a template provided by the module that provides the minimum necessary configuration to bootstrap the node when launched, with the option to add additional pre and post bootstrap user data as well as bootstrap additional args that are supplied to the [AWS EKS bootstrap.sh script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) - - Users can use the following variables to facilitate this process: - ```hcl - enable_bootstrap_user_data = true # to opt in to using the module supplied bootstrap user data template - pre_bootstrap_user_data = "..." - bootstrap_extra_args = "..." - post_bootstrap_user_data = "..." - ``` -- If the AMI is not an AWS EKS Optimized AMI derivative, or if users wish to have more control over the user data that is supplied to the node upon launch, users have the ability to supply their own user data template that will be rendered instead of the module supplied template. Note - only the variables that are supplied to the `templatefile()` for the respective platform/OS are available for use in the supplied template, otherwise users will need to pre-render/pre-populate the template before supplying the final template to the module for rendering as user data. - - Users can use the following variables to facilitate this process: - ```hcl - user_data_template_path = "./your/user_data.sh" # user supplied bootstrap user data template - pre_bootstrap_user_data = "..." - bootstrap_extra_args = "..." - post_bootstrap_user_data = "..." - ``` - -### Logic Diagram - -The rough flow of logic that is encapsulated within the `_user_data` internal module can be represented by the following diagram to better highlight the various manners in which user data can be populated. - -

- User Data -

+See [`examples/user_data/`](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data) for various examples using this module. ## Requirements