# AWS EKS Terraform module

Terraform module which creates Amazon EKS (Kubernetes) resources

## [Documentation](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs)

- [Frequently Asked Questions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/faq.md)
- [Compute Resources](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/compute_resources.md)
- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/user_data.md)
- [Network Connectivity](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md)
- Upgrade Guides
  - [Upgrade to v17.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-17.0.md)
  - [Upgrade to v18.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-18.0.md)
  - [Upgrade to v19.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-19.0.md)
  - [Upgrade to v20.x](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/UPGRADE-20.0.md)

### External Documentation

Please note that we strive to provide a comprehensive suite of documentation for __*configuring and utilizing the module(s)*__ defined here, and that documentation regarding EKS (including EKS managed node groups, self-managed node groups, and Fargate profiles) and/or Kubernetes features, usage, etc. is better left to their respective sources:

- [AWS EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)

## Usage

### EKS Auto Mode

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = "example"
  cluster_version = "1.33"

  # Optional
  cluster_endpoint_public_access = true

  # Optional: Adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```
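Not part of the module itself, but a common follow-on: downstream providers can be wired to the new cluster from the module's outputs. Below is a minimal sketch, assuming the AWS CLI is available locally to mint authentication tokens and using the module's documented `cluster_name`, `cluster_endpoint`, and `cluster_certificate_authority_data` outputs:

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Generates a short-lived token for the cluster created above
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```

The same pattern applies to the `helm` provider when managing in-cluster releases from Terraform.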
### EKS Hybrid Nodes

```hcl
locals {
  # RFC 1918 IP ranges supported
  remote_network_cidr = "172.16.0.0/16"
  remote_node_cidr    = cidrsubnet(local.remote_network_cidr, 2, 0)
  remote_pod_cidr     = cidrsubnet(local.remote_network_cidr, 2, 1)
}

# SSM and IAM Roles Anywhere supported - SSM is default
module "eks_hybrid_node_role" {
  source  = "terraform-aws-modules/eks/aws//modules/hybrid-node-role"
  version = "~> 20.31"

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = "example"
  cluster_version = "1.33"

  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
  }

  # Optional
  cluster_endpoint_public_access = true

  # Optional: Adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  create_node_security_group = false
  cluster_security_group_additional_rules = {
    hybrid-all = {
      cidr_blocks = [local.remote_network_cidr]
      description = "Allow all traffic from remote node/pod network"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }

  # Optional
  cluster_compute_config = {
    enabled    = true
    node_pools = ["system"]
  }

  access_entries = {
    hybrid-node-role = {
      principal_arn = module.eks_hybrid_node_role.arn
      type          = "HYBRID_LINUX"
    }
  }

  vpc_id     = "vpc-1234556abcdef"
  subnet_ids = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]

  cluster_remote_network_config = {
    remote_node_networks = {
      cidrs = [local.remote_node_cidr]
    }
    # Required if running webhooks on Hybrid nodes
    remote_pod_networks = {
      cidrs = [local.remote_pod_cidr]
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```

### EKS Managed Node Group

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "my-cluster"
  cluster_version = "1.33"

  bootstrap_self_managed_addons = false
  cluster_addons = {
    coredns                = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  # Optional
  cluster_endpoint_public_access = true

  # Optional: Adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  vpc_id                   = "vpc-1234556abcdef"
  subnet_ids               = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
  control_plane_subnet_ids = ["subnet-xyzde987", "subnet-slkjf456", "subnet-qeiru789"]

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
  }

  eks_managed_node_groups = {
    example = {
      # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
      ami_type       = "AL2023_x86_64_STANDARD"
      instance_types = ["m5.xlarge"]

      min_size     = 2
      max_size     = 10
      desired_size = 2
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```

### Cluster Access Entry

When enabling `authentication_mode = "API_AND_CONFIG_MAP"`, EKS will automatically create an access entry for the IAM role(s) used by managed node group(s) and Fargate profile(s); no additional action is required by users. For self-managed node groups and the Karpenter sub-module, this project automatically adds the access entry on behalf of users, so again no additional action is required.

On clusters that were created prior to cluster access management (CAM) support, there will be an existing access entry for the cluster creator. This was previously not visible when using the `aws-auth` ConfigMap, but becomes visible when access entries are enabled.

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # Truncated for brevity ...

  access_entries = {
    # One access entry with a policy associated
    example = {
      principal_arn = "arn:aws:iam::123456789012:role/something"

      policy_associations = {
        example = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
          access_scope = {
            namespaces = ["default"]
            type       = "namespace"
          }
        }
      }
    }
  }
}
```

### Bootstrap Cluster Creator Admin Permissions

Setting `bootstrap_cluster_creator_admin_permissions` is a one-time operation when the cluster is created; it cannot be modified later through the EKS API. This project hardcodes it to `false`. Users who want the same functionality can achieve it through an access entry, which can be enabled or disabled at any time of their choosing via the variable `enable_cluster_creator_admin_permissions`.
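For illustration only, a minimal sketch (configuration truncated) of how that variable is toggled; the underlying access entry is created or removed on the next apply:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # Truncated for brevity ...

  # Grants the current caller admin access via a cluster access entry;
  # unlike the EKS API bootstrap flag, this can be flipped at any time
  enable_cluster_creator_admin_permissions = true
}
```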
### Enabling EFA Support

When enabling EFA support via `enable_efa_support = true`, there are two locations this can be specified - one at the cluster level, and one at the node group level.

Enabling at the cluster level will add the EFA required ingress/egress rules to the shared security group created for the node group(s). Enabling at the node group level will do the following (per node group where enabled):

1. All EFA interfaces supported by the instance will be exposed on the launch template used by the node group
2. A placement group with `strategy = "cluster"` per EFA requirements is created and passed to the launch template used by the node group
3. Data sources will reverse lookup the availability zones that support the instance type selected based on the subnets provided, ensuring that only the associated subnets are passed to the launch template and therefore used by the placement group. This avoids the placement group being created in an availability zone that does not support the instance type selected.

> [!TIP]
> Use the [aws-efa-k8s-device-plugin](https://github.com/aws/eks-charts/tree/master/stable/aws-efa-k8s-device-plugin) Helm chart to expose the EFA interfaces on the nodes as an extended resource, and allow pods to request the interfaces be mounted to their containers.
>
> The EKS AL2 GPU AMI comes with the necessary EFA components pre-installed - you just need to expose the EFA devices on the nodes via their launch templates, ensure the required EFA security group rules are in place, and deploy the `aws-efa-k8s-device-plugin` in order to start utilizing EFA within your cluster. Your application container will need to have the necessary libraries and runtime in order to utilize communication over the EFA interfaces (NCCL, aws-ofi-nccl, hwloc, libfabric, aws-neuronx-collectives, CUDA, etc.).

If you disable the creation and use of the managed node group custom launch template (`create_launch_template = false` and/or `use_custom_launch_template = false`), this will interfere with the EFA functionality provided. In addition, if you do not supply an `instance_type` for self-managed node group(s), or `instance_types` for the managed node group(s), this will also interfere with the functionality. In order to support the EFA functionality provided by `enable_efa_support = true`, you must utilize the custom launch template created/provided by this module, and supply an `instance_type`/`instance_types` for the respective node group.
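If cluster add-ons are managed from the same Terraform configuration, the device plugin referenced in the tip above can be installed with the Helm provider. This is a sketch only - it assumes the Helm provider is already configured against the cluster and that the chart is consumed from the public `eks-charts` repository:

```hcl
resource "helm_release" "aws_efa_k8s_device_plugin" {
  name       = "aws-efa-k8s-device-plugin"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-efa-k8s-device-plugin"
  namespace  = "kube-system"
}
```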
The logic behind supporting EFA uses a data source to look up the instance type and retrieve the number of interfaces that the instance supports, in order to enumerate and expose those interfaces on the launch template created. For managed node groups where a list of instance types is supported, the first instance type in the list is used to calculate the number of EFA interfaces supported. Mixing instance types with varying numbers of interfaces is not recommended for EFA (or in some cases, mixing instance types is not supported - i.e. p5.48xlarge and p4d.24xlarge). In addition to exposing the EFA interfaces and updating the security group rules, a placement group is created per the EFA requirements, and only the availability zones that support the instance type selected are used in the subnets provided to the node group.

In order to enable EFA support, you will have to specify `enable_efa_support = true` on both the cluster and each node group that you wish to enable EFA support for:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # Truncated for brevity ...

  # Adds the EFA required security group rules to the shared
  # security group created for the node group(s)
  enable_efa_support = true

  eks_managed_node_groups = {
    example = {
      # The EKS AL2023 NVIDIA AMI provides all of the necessary components
      # for accelerated workloads w/ EFA
      ami_type       = "AL2023_x86_64_NVIDIA"
      instance_types = ["p5.48xlarge"]

      # Exposes all EFA interfaces on the launch template created by the node group(s)
      # This would expose all 32 EFA interfaces for the p5.48xlarge instance type
      enable_efa_support = true

      # Mount instance store volumes in RAID-0 for kubelet and containerd
      # https://github.com/awslabs/amazon-eks-ami/blob/master/doc/USER_GUIDE.md#raid-0-for-kubelet-and-containerd-raid0
      cloudinit_pre_nodeadm = [
        {
          content_type = "application/node.eks.aws"
          content      = <<-EOT
            ---
            apiVersion: node.eks.aws/v1alpha1
            kind: NodeConfig
            spec:
              instance:
                localStorage:
                  strategy: RAID0
          EOT
        }
      ]

      # EFA should only be enabled when connecting 2 or more nodes
      # Do not use EFA on a single node workload
      min_size     = 2
      max_size     = 10
      desired_size = 2
    }
  }
}
```

## Examples

- [EKS Auto Mode](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks-auto-mode): EKS Cluster with EKS Auto Mode
- [EKS Hybrid Nodes](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks-hybrid-nodes): EKS Cluster with EKS Hybrid nodes
- [EKS Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks-managed-node-group): EKS Cluster with EKS managed node group(s)
- [Karpenter](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/karpenter): EKS Cluster with [Karpenter](https://karpenter.sh/) provisioned for intelligent data plane management
- [Self Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self-managed-node-group): EKS Cluster with self-managed node group(s)

## Contributing

We are grateful to the community for contributing bugfixes and improvements! Please see below to learn how you can take part.
- [Code of Conduct](https://github.com/terraform-aws-modules/.github/blob/master/CODE_OF_CONDUCT.md)
- [Contributing Guide](https://github.com/terraform-aws-modules/.github/blob/master/CONTRIBUTING.md)

## Requirements

| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.2 |
| [aws](#requirement\_aws) | >= 5.95, < 6.0.0 |
| [time](#requirement\_time) | >= 0.9 |
| [tls](#requirement\_tls) | >= 3.0 |

## Providers

| Name | Version |
|------|---------|
| [aws](#provider\_aws) | >= 5.95, < 6.0.0 |
| [time](#provider\_time) | >= 0.9 |
| [tls](#provider\_tls) | >= 3.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| [eks\_managed\_node\_group](#module\_eks\_managed\_node\_group) | ./modules/eks-managed-node-group | n/a |
| [fargate\_profile](#module\_fargate\_profile) | ./modules/fargate-profile | n/a |
| [kms](#module\_kms) | terraform-aws-modules/kms/aws | 2.1.0 |
| [self\_managed\_node\_group](#module\_self\_managed\_node\_group) | ./modules/self-managed-node-group | n/a |

## Resources

| Name | Type |
|------|------|
| [aws_cloudwatch_log_group.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) | resource |
| [aws_ec2_tag.cluster_primary_security_group](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_tag) | resource |
| [aws_eks_access_entry.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_access_entry) | resource |
| [aws_eks_access_policy_association.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_access_policy_association) | resource |
| [aws_eks_addon.before_compute](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) | resource |
| [aws_eks_addon.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon) | resource |
| [aws_eks_cluster.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster) | resource |
| [aws_eks_identity_provider_config.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_identity_provider_config) | resource |
| [aws_iam_openid_connect_provider.oidc_provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider) | resource |
| [aws_iam_policy.cluster_encryption](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.cni_ipv6_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.custom](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_role.eks_auto](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role_policy_attachment.additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_iam_role_policy_attachment.cluster_encryption](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_iam_role_policy_attachment.custom](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_iam_role_policy_attachment.eks_auto](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_iam_role_policy_attachment.eks_auto_additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_iam_role_policy_attachment.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_security_group.cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group.node](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group_rule.cluster](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [time_sleep.this](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
| [aws_eks_addon_version.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_addon_version) | data source |
| [aws_iam_policy_document.assume_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.cni_ipv6_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.custom](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_policy_document.node_assume_role_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
| [aws_iam_session_context.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_session_context) | data source |
| [aws_partition.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) | data source |
| [tls_certificate.this](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/data-sources/certificate) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [access\_entries](#input\_access\_entries) | Map of access entries to add to the cluster | `any` | `{}` | no |
| [attach\_cluster\_encryption\_policy](#input\_attach\_cluster\_encryption\_policy) | Indicates whether or not to attach an additional policy for the cluster IAM role to utilize the encryption key provided | `bool` | `true` | no |
| [authentication\_mode](#input\_authentication\_mode) | The authentication mode for the cluster. Valid values are `CONFIG_MAP`, `API` or `API_AND_CONFIG_MAP` | `string` | `"API_AND_CONFIG_MAP"` | no |
| [bootstrap\_self\_managed\_addons](#input\_bootstrap\_self\_managed\_addons) | Indicates whether or not to bootstrap self-managed addons after the cluster has been created | `bool` | `null` | no |
| [cloudwatch\_log\_group\_class](#input\_cloudwatch\_log\_group\_class) | Specifies the log class of the log group. Possible values are: `STANDARD` or `INFREQUENT_ACCESS` | `string` | `null` | no |
| [cloudwatch\_log\_group\_kms\_key\_id](#input\_cloudwatch\_log\_group\_kms\_key\_id) | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `null` | no |
| [cloudwatch\_log\_group\_retention\_in\_days](#input\_cloudwatch\_log\_group\_retention\_in\_days) | Number of days to retain log events. Default retention - 90 days | `number` | `90` | no |
| [cloudwatch\_log\_group\_tags](#input\_cloudwatch\_log\_group\_tags) | A map of additional tags to add to the cloudwatch log group created | `map(string)` | `{}` | no |
| [cluster\_additional\_security\_group\_ids](#input\_cluster\_additional\_security\_group\_ids) | List of additional, externally created security group IDs to attach to the cluster control plane | `list(string)` | `[]` | no |
| [cluster\_addons](#input\_cluster\_addons) | Map of cluster addon configurations to enable for the cluster. Addon name can be the map keys or set with `name` | `any` | `{}` | no |
| [cluster\_addons\_timeouts](#input\_cluster\_addons\_timeouts) | Create, update, and delete timeout configurations for the cluster addons | `map(string)` | `{}` | no |
| [cluster\_compute\_config](#input\_cluster\_compute\_config) | Configuration block for the cluster compute configuration | `any` | `{}` | no |
| [cluster\_enabled\_log\_types](#input\_cluster\_enabled\_log\_types) | A list of the desired control plane logs to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | <pre>[<br/>  "audit",<br/>  "api",<br/>  "authenticator"<br/>]</pre> | no |
| [cluster\_encryption\_config](#input\_cluster\_encryption\_config) | Configuration block with encryption configuration for the cluster. To disable secret encryption, set this value to `{}` | `any` | <pre>{<br/>  "resources": [<br/>    "secrets"<br/>  ]<br/>}</pre> | no |
| [cluster\_encryption\_policy\_description](#input\_cluster\_encryption\_policy\_description) | Description of the cluster encryption policy created | `string` | `"Cluster encryption policy to allow cluster role to utilize CMK provided"` | no |
| [cluster\_encryption\_policy\_name](#input\_cluster\_encryption\_policy\_name) | Name to use on cluster encryption policy created | `string` | `null` | no |
| [cluster\_encryption\_policy\_path](#input\_cluster\_encryption\_policy\_path) | Cluster encryption policy path | `string` | `null` | no |
| [cluster\_encryption\_policy\_tags](#input\_cluster\_encryption\_policy\_tags) | A map of additional tags to add to the cluster encryption policy created | `map(string)` | `{}` | no |
| [cluster\_encryption\_policy\_use\_name\_prefix](#input\_cluster\_encryption\_policy\_use\_name\_prefix) | Determines whether cluster encryption policy name (`cluster_encryption_policy_name`) is used as a prefix | `bool` | `true` | no |
| [cluster\_endpoint\_private\_access](#input\_cluster\_endpoint\_private\_access) | Indicates whether or not the Amazon EKS private API server endpoint is enabled | `bool` | `true` | no |
| [cluster\_endpoint\_public\_access](#input\_cluster\_endpoint\_public\_access) | Indicates whether or not the Amazon EKS public API server endpoint is enabled | `bool` | `false` | no |
| [cluster\_endpoint\_public\_access\_cidrs](#input\_cluster\_endpoint\_public\_access\_cidrs) | List of CIDR blocks which can access the Amazon EKS public API server endpoint | `list(string)` | <pre>[<br/>  "0.0.0.0/0"<br/>]</pre> | no |
| [cluster\_force\_update\_version](#input\_cluster\_force\_update\_version) | Force version update by overriding upgrade-blocking readiness checks when updating a cluster | `bool` | `null` | no |
| [cluster\_identity\_providers](#input\_cluster\_identity\_providers) | Map of cluster identity provider configurations to enable for the cluster. Note - this is different/separate from IRSA | `any` | `{}` | no |
| [cluster\_ip\_family](#input\_cluster\_ip\_family) | The IP family used to assign Kubernetes pod and service addresses. Valid values are `ipv4` (default) and `ipv6`. You can only specify an IP family when you create a cluster, changing this value will force a new cluster to be created | `string` | `"ipv4"` | no |
| [cluster\_name](#input\_cluster\_name) | Name of the EKS cluster | `string` | `""` | no |
| [cluster\_remote\_network\_config](#input\_cluster\_remote\_network\_config) | Configuration block for the cluster remote network configuration | `any` | `{}` | no |
| [cluster\_security\_group\_additional\_rules](#input\_cluster\_security\_group\_additional\_rules) | List of additional security group rules to add to the cluster security group created. Set `source_node_security_group = true` inside rules to set the `node_security_group` as source | `any` | `{}` | no |
| [cluster\_security\_group\_description](#input\_cluster\_security\_group\_description) | Description of the cluster security group created | `string` | `"EKS cluster security group"` | no |
| [cluster\_security\_group\_id](#input\_cluster\_security\_group\_id) | Existing security group ID to be attached to the cluster | `string` | `""` | no |
| [cluster\_security\_group\_name](#input\_cluster\_security\_group\_name) | Name to use on cluster security group created | `string` | `null` | no |
| [cluster\_security\_group\_tags](#input\_cluster\_security\_group\_tags) | A map of additional tags to add to the cluster security group created | `map(string)` | `{}` | no |
| [cluster\_security\_group\_use\_name\_prefix](#input\_cluster\_security\_group\_use\_name\_prefix) | Determines whether cluster security group name (`cluster_security_group_name`) is used as a prefix | `bool` | `true` | no |
| [cluster\_service\_ipv4\_cidr](#input\_cluster\_service\_ipv4\_cidr) | The CIDR block to assign Kubernetes service IP addresses from. If you don't specify a block, Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks | `string` | `null` | no |
| [cluster\_service\_ipv6\_cidr](#input\_cluster\_service\_ipv6\_cidr) | The CIDR block to assign Kubernetes pod and service IP addresses from if `ipv6` was specified when the cluster was created. Kubernetes assigns service addresses from the unique local address range (fc00::/7) because you can't specify a custom IPv6 CIDR block when you create the cluster | `string` | `null` | no |
| [cluster\_tags](#input\_cluster\_tags) | A map of additional tags to add to the cluster | `map(string)` | `{}` | no |
| [cluster\_timeouts](#input\_cluster\_timeouts) | Create, update, and delete timeout configurations for the cluster | `map(string)` | `{}` | no |
| [cluster\_upgrade\_policy](#input\_cluster\_upgrade\_policy) | Configuration block for the cluster upgrade policy | `any` | `{}` | no |
| [cluster\_version](#input\_cluster\_version) | Kubernetes `<major>.<minor>` version to use for the EKS cluster | `string` | `null` | no |