# Upgrade from v17.x to v18.x
Please consult the examples directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.
Note: please see https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744 where users have shared the steps/changes that have worked for their configurations to upgrade. Due to the numerous configuration possibilities, it is difficult to capture specific steps that will work for all; this has proven to be a useful thread to share collective information from the broader community regarding v18.x upgrades.
For most users, adding the following to your v17.x configuration will preserve the state of your cluster control plane when upgrading to v18.x:
```hcl
prefix_separator                   = ""
iam_role_name                      = $CLUSTER_NAME
cluster_security_group_name        = $CLUSTER_NAME
cluster_security_group_description = "EKS cluster security group."
```
This configuration assumes that `create_iam_role` is set to `true`, which is the default value.
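For context, a minimal sketch of how these attributes sit in the upgraded module block (the `source`/`version` lines and `local.name` are illustrative placeholders for your own values):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name = local.name

  # Preserve the v17.x naming of the control plane resources
  prefix_separator                   = ""
  iam_role_name                      = local.name
  cluster_security_group_name       = local.name
  cluster_security_group_description = "EKS cluster security group."

  # ... remainder of your v18.x configuration
}
```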
As the location of the Terraform state of the IAM role has changed from 17.x to 18.x, you'll also have to move the state before running `terraform apply` by calling:

```sh
terraform state mv 'module.eks.aws_iam_role.cluster[0]' 'module.eks.aws_iam_role.this[0]'
```
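After the move, running a plan is a quick way to confirm the role will be kept in place rather than destroyed and recreated:

```sh
# The plan should not propose destroying module.eks.aws_iam_role.this[0]
terraform plan
```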
See the issue referenced above for more information.
## List of backwards incompatible changes
- Launch configuration support has been removed and only launch template is supported going forward. AWS is no longer adding new features back into launch configuration and their docs state:
  > We strongly recommend that you do not use launch configurations. They do not provide full functionality for Amazon EC2 Auto Scaling or Amazon EC2. We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates.
- Support for managing the aws-auth configmap has been removed. This change also removes the dependency on the Kubernetes Terraform provider, the local dependency on aws-iam-authenticator for users, as well as the reliance on the forked http provider to wait and poll on cluster creation. To aid users in this change, an output variable `aws_auth_configmap_yaml` has been provided which renders the aws-auth configmap necessary to support at least the IAM roles used by the module (additional mapRoles/mapUsers definitions are to be provided by users); a sketch of consuming this output follows this list
- Support for managing kubeconfig and its associated `local_file` resources has been removed; users are able to use the awscli provided `aws eks update-kubeconfig --name <cluster_name>` to update their local kubeconfig as necessary
- The terminology used in the module has been modified to reflect that used by the AWS documentation:
  - AWS EKS Managed Node Group, `eks_managed_node_groups`, was previously referred to as simply node group, `node_groups`
  - Self Managed Node Group, `self_managed_node_groups`, was previously referred to as worker group, `worker_groups`
  - AWS Fargate Profile, `fargate_profiles`, remains unchanged in terms of naming and terminology
- The three different node group types supported by AWS and the module have been refactored into standalone sub-modules that are both used by the root `eks` module as well as available for individual, standalone consumption if desired.
  - The previous `node_groups` sub-module is now named `eks-managed-node-group` and provisions a single AWS EKS Managed Node Group per sub-module definition (previous version utilized `for_each` to create 0 or more node groups)
    - Additional changes for the `eks-managed-node-group` sub-module over the previous `node_groups` module include:
      - Variable name changes defined in section `Variable and output changes` below
      - Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      - Support for nearly full control of the security group created, or provide the ID of an existing security group, has been added
      - User data has been revamped and all user data logic moved to the `_user_data` internal sub-module; the local `userdata.sh.tpl` has been removed entirely
  - The previous `fargate` sub-module is now named `fargate-profile` and provisions a single AWS EKS Fargate Profile per sub-module definition (previous version utilized `for_each` to create 0 or more profiles)
    - Additional changes for the `fargate-profile` sub-module over the previous `fargate` module include:
      - Variable name changes defined in section `Variable and output changes` below
      - Support for nearly full control of the IAM role created, or provide the ARN of an existing IAM role, has been added
      - Similar to the `eks_managed_node_group_defaults` and `self_managed_node_group_defaults`, a `fargate_profile_defaults` has been provided to allow users to control the default configurations for the Fargate profiles created
  - A sub-module for `self-managed-node-group` has been created and provisions a single self managed node group (autoscaling group) per sub-module definition
    - Additional changes for the `self-managed-node-group` sub-module over the previous `node_groups` variable include:
      - The underlying autoscaling group and launch template have been updated to more closely match that of the `terraform-aws-autoscaling` module and the features it offers
      - The previous iteration used a count over a list of node group definitions which was prone to disruptive updates; this is now replaced with a map/for_each to align with that of the EKS managed node group and Fargate profile behaviors/style
- The user data configuration supported across the module has been completely revamped. A new `_user_data` internal sub-module has been created to consolidate all user data configuration in one location, which provides better support for testability (via the `tests/user-data` example). The new sub-module supports nearly all possible combinations, including the ability to allow users to provide their own user data template which will be rendered by the module. See the `tests/user-data` example project for the full plethora of example configuration possibilities; more details on the logic of the design can be found in the `modules/_user_data` directory.
- Resource name changes may cause issues with existing resources. For example, security groups and IAM roles cannot be renamed, they must be recreated. Recreation of these resources may also trigger a recreation of the cluster. To use the legacy (< 18.x) resource naming convention, set `prefix_separator` to `""`.
- Security group usage has been overhauled to provide only the bare minimum network connectivity required to launch a bare bones cluster. See the security group documentation section for more details. Users upgrading to v18.x will want to review the rules they have in place today versus the rules provisioned by the v18.x module and make any necessary adjustments for their specific workload.
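As an illustration of consuming the new `aws_auth_configmap_yaml` output mentioned above, a minimal sketch (the root-level output name here is an assumption; expose the value however suits your configuration):

```hcl
# Hypothetical root-level output that surfaces the module's rendered aws-auth configmap
output "aws_auth_configmap_yaml" {
  description = "aws-auth configmap rendered by the EKS module"
  value       = module.eks.aws_auth_configmap_yaml
}
```

The rendered manifest can then be applied out of band, for example with `terraform output -raw aws_auth_configmap_yaml | kubectl apply -f -`.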
## Additional changes
### Added
- Support for AWS EKS Addons has been added (a configuration sketch follows this list)
- Support for AWS EKS Cluster Identity Provider Configuration has been added
- AWS Terraform provider minimum required version has been updated to 3.64 to support the changes made and additional resources supported
- An example `user_data` project has been added to aid in demonstrating, testing, and validating the various methods of configuring user data with the `_user_data` sub-module as well as the root `eks` module
- Template for rendering the aws-auth configmap output - `aws_auth_cm.tpl`
- Template for Bottlerocket OS user data bootstrapping - `bottlerocket_user_data.tpl`
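To illustrate the new addons support, a minimal sketch of the `cluster_addons` input (the addon names and options shown are examples, not an exhaustive list):

```hcl
module "eks" {
  # ... other v18.x configuration

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }
}
```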
### Modified
- The previous `fargate` example has been renamed to `fargate_profile`
- The previous `irsa` and `instance_refresh` examples have been merged into one example `irsa_autoscale_refresh`
- The previous `managed_node_groups` example has been renamed to `self_managed_node_group`
- The previously hardcoded EKS OIDC root CA thumbprint value and variable has been replaced with a `tls_certificate` data source that refers to the cluster OIDC issuer url. Thumbprint values should remain unchanged however
- Individual cluster security group resources have been replaced with a single security group resource that takes a map of rules as input. The default ingress/egress rules have had their scope reduced in order to provide the bare minimum of access to permit successful cluster creation and allow users to opt in to any additional network access as needed for a better security posture. This means the `0.0.0.0/0` egress rule has been removed; instead, TCP/443 and TCP/10250 egress rules to the node group security group are used (additional rules can be opted into via the new rule maps, as sketched after this list)
- The Linux/bash user data template has been updated to include the bare minimum necessary for bootstrapping AWS EKS Optimized AMI derivative nodes with provisions for providing additional user data and configurations; it was named `userdata.sh.tpl` and is now named `linux_user_data.tpl`
- The Windows user data template has been renamed from `userdata_windows.tpl` to `windows_user_data.tpl`
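For example, additional network access beyond the reduced defaults can be opted into via the new rule-map inputs; a minimal sketch (the rule key and values are illustrative only):

```hcl
module "eks" {
  # ... other v18.x configuration

  cluster_security_group_additional_rules = {
    # Example: allow the control plane egress to ephemeral ports within the VPC
    egress_nodes_ephemeral_ports_tcp = {
      description = "To node 1025-65535"
      protocol    = "tcp"
      from_port   = 1025
      to_port     = 65535
      type        = "egress"
      cidr_blocks = ["10.0.0.0/16"] # illustrative VPC CIDR
    }
  }
}
```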
### Removed
- Miscellaneous documents on how to configure Kubernetes cluster internals have been removed. Documentation on how to configure the AWS EKS Cluster and its supported infrastructure resources provided by the module is supported, while cluster internal configuration is out of scope for this project
- The previous `bottlerocket` example has been removed in favor of demonstrating the use and configuration of Bottlerocket nodes via the respective `eks_managed_node_group` and `self_managed_node_group` examples
- The previous `launch_template` and `launch_templates_with_managed_node_groups` examples have been removed; only launch templates are now supported (default) and launch configuration support has been removed
- The previous `secrets_encryption` example has been removed; the functionality has been demonstrated in several of the new examples rendering this standalone example redundant
- The additional, custom IAM role policy for the cluster role has been removed. The permissions are either now provided in the attached managed AWS permission policies used or are no longer required
- The `kubeconfig.tpl` template; kubeconfig management is no longer supported under this module
- The HTTP Terraform provider (forked copy) dependency has been removed
## Variable and output changes
- Removed variables:
  - `cluster_create_timeout`, `cluster_update_timeout`, and `cluster_delete_timeout` have been replaced with `cluster_timeouts`
  - `kubeconfig_name`
  - `kubeconfig_output_path`
  - `kubeconfig_file_permission`
  - `kubeconfig_api_version`
  - `kubeconfig_aws_authenticator_command`
  - `kubeconfig_aws_authenticator_command_args`
  - `kubeconfig_aws_authenticator_additional_args`
  - `kubeconfig_aws_authenticator_env_variables`
  - `write_kubeconfig`
  - `default_platform`
  - `manage_aws_auth`
  - `aws_auth_additional_labels`
  - `map_accounts`
  - `map_roles`
  - `map_users`
  - `fargate_subnets`
  - `worker_groups_launch_template`
  - `worker_security_group_id`
  - `worker_ami_name_filter`
  - `worker_ami_name_filter_windows`
  - `worker_ami_owner_id`
  - `worker_ami_owner_id_windows`
  - `worker_additional_security_group_ids`
  - `worker_sg_ingress_from_port`
  - `workers_additional_policies`
  - `worker_create_security_group`
  - `worker_create_initial_lifecycle_hooks`
  - `worker_create_cluster_primary_security_group_rules`
  - `cluster_create_endpoint_private_access_sg_rule`
  - `cluster_endpoint_private_access_cidrs`
  - `cluster_endpoint_private_access_sg`
  - `manage_worker_iam_resources`
  - `workers_role_name`
  - `attach_worker_cni_policy`
  - `eks_oidc_root_ca_thumbprint`
  - `create_fargate_pod_execution_role`
  - `fargate_pod_execution_role_name`
  - `cluster_egress_cidrs`
  - `workers_egress_cidrs`
  - `wait_for_cluster_timeout`
  - EKS Managed Node Group sub-module (was `node_groups`):
    - `default_iam_role_arn`
    - `workers_group_defaults`
    - `worker_security_group_id`
    - `node_groups_defaults`
    - `node_groups`
    - `ebs_optimized_not_supported`
  - Fargate profile sub-module (was `fargate`):
    - `create_eks` and `create_fargate_pod_execution_role` have been replaced with simply `create`
- Renamed variables:
  - `create_eks` -> `create`
  - `subnets` -> `subnet_ids`
  - `cluster_create_security_group` -> `create_cluster_security_group`
  - `cluster_log_retention_in_days` -> `cloudwatch_log_group_retention_in_days`
  - `cluster_log_kms_key_id` -> `cloudwatch_log_group_kms_key_id`
  - `manage_cluster_iam_resources` -> `create_iam_role`
  - `cluster_iam_role_name` -> `iam_role_name`
  - `permissions_boundary` -> `iam_role_permissions_boundary`
  - `iam_path` -> `iam_role_path`
  - `pre_userdata` -> `pre_bootstrap_user_data`
  - `additional_userdata` -> `post_bootstrap_user_data`
  - `worker_groups` -> `self_managed_node_groups`
  - `workers_group_defaults` -> `self_managed_node_group_defaults`
  - `node_groups` -> `eks_managed_node_groups`
  - `node_groups_defaults` -> `eks_managed_node_group_defaults`
  - EKS Managed Node Group sub-module (was `node_groups`):
    - `create_eks` -> `create`
    - `worker_additional_security_group_ids` -> `vpc_security_group_ids`
  - Fargate profile sub-module:
    - `fargate_pod_execution_role_name` -> `name`
    - `create_fargate_pod_execution_role` -> `create_iam_role`
    - `subnets` -> `subnet_ids`
    - `iam_path` -> `iam_role_path`
    - `permissions_boundary` -> `iam_role_permissions_boundary`
- Added variables:
  - `cluster_additional_security_group_ids` added to allow users to add additional security groups to the cluster as needed
  - `cluster_security_group_name`
  - `cluster_security_group_use_name_prefix` added to allow users to use either the name as specified or default to using the name specified as a prefix
  - `cluster_security_group_description`
  - `cluster_security_group_additional_rules`
  - `cluster_security_group_tags`
  - `create_cloudwatch_log_group` added in place of the logic that checked if any cluster log types were enabled to allow users to opt in as they see fit
  - `create_node_security_group` added to create single security group that connects node groups and cluster in central location
  - `node_security_group_id`
  - `node_security_group_name`
  - `node_security_group_use_name_prefix`
  - `node_security_group_description`
  - `node_security_group_additional_rules`
  - `node_security_group_tags`
  - `iam_role_arn`
  - `iam_role_use_name_prefix`
  - `iam_role_description`
  - `iam_role_additional_policies`
  - `iam_role_tags`
  - `cluster_addons`
  - `cluster_identity_providers`
  - `fargate_profile_defaults`
  - `prefix_separator` added to support legacy behavior of not having a prefix separator
  - EKS Managed Node Group sub-module (was `node_groups`):
    - `platform`
    - `enable_bootstrap_user_data`
    - `pre_bootstrap_user_data`
    - `post_bootstrap_user_data`
    - `bootstrap_extra_args`
    - `user_data_template_path`
    - `create_launch_template`
    - `launch_template_name`
    - `launch_template_use_name_prefix`
    - `description`
    - `ebs_optimized`
    - `ami_id`
    - `key_name`
    - `launch_template_default_version`
    - `update_launch_template_default_version`
    - `disable_api_termination`
    - `kernel_id`
    - `ram_disk_id`
    - `block_device_mappings`
    - `capacity_reservation_specification`
    - `cpu_options`
    - `credit_specification`
    - `elastic_gpu_specifications`
    - `elastic_inference_accelerator`
    - `enclave_options`
    - `instance_market_options`
    - `license_specifications`
    - `metadata_options`
    - `enable_monitoring`
    - `network_interfaces`
    - `placement`
    - `min_size`
    - `max_size`
    - `desired_size`
    - `use_name_prefix`
    - `ami_type`
    - `ami_release_version`
    - `capacity_type`
    - `disk_size`
    - `force_update_version`
    - `instance_types`
    - `labels`
    - `cluster_version`
    - `launch_template_version`
    - `remote_access`
    - `taints`
    - `update_config`
    - `timeouts`
    - `create_security_group`
    - `security_group_name`
    - `security_group_use_name_prefix`
    - `security_group_description`
    - `vpc_id`
    - `security_group_rules`
    - `cluster_security_group_id`
    - `security_group_tags`
    - `create_iam_role`
    - `iam_role_arn`
    - `iam_role_name`
    - `iam_role_use_name_prefix`
    - `iam_role_path`
    - `iam_role_description`
    - `iam_role_permissions_boundary`
    - `iam_role_additional_policies`
    - `iam_role_tags`
  - Fargate profile sub-module (was `fargate`):
    - `iam_role_arn` (for if `create_iam_role` is `false` to bring your own externally created role)
    - `iam_role_name`
    - `iam_role_use_name_prefix`
    - `iam_role_description`
    - `iam_role_additional_policies`
    - `iam_role_tags`
    - `selectors`
    - `timeouts`
- Removed outputs:
  - `cluster_version`
  - `kubeconfig`
  - `kubeconfig_filename`
  - `workers_asg_arns`
  - `workers_asg_names`
  - `workers_user_data`
  - `workers_default_ami_id`
  - `workers_default_ami_id_windows`
  - `workers_launch_template_ids`
  - `workers_launch_template_arns`
  - `workers_launch_template_latest_versions`
  - `worker_security_group_id`
  - `worker_iam_instance_profile_arns`
  - `worker_iam_instance_profile_names`
  - `worker_iam_role_name`
  - `worker_iam_role_arn`
  - `fargate_profile_ids`
  - `fargate_profile_arns`
  - `fargate_iam_role_name`
  - `fargate_iam_role_arn`
  - `node_groups`
  - `security_group_rule_cluster_https_worker_ingress`
  - EKS Managed Node Group sub-module (was `node_groups`):
    - `node_groups`
    - `aws_auth_roles`
  - Fargate profile sub-module (was `fargate`):
    - `aws_auth_roles`
- Renamed outputs:
  - `config_map_aws_auth` -> `aws_auth_configmap_yaml`
  - Fargate profile sub-module (was `fargate`):
    - `fargate_profile_ids` -> `fargate_profile_id`
    - `fargate_profile_arns` -> `fargate_profile_arn`
- Added outputs:
  - `cluster_platform_version`
  - `cluster_status`
  - `cluster_security_group_arn`
  - `cluster_security_group_id`
  - `node_security_group_arn`
  - `node_security_group_id`
  - `cluster_iam_role_unique_id`
  - `cluster_addons`
  - `cluster_identity_providers`
  - `fargate_profiles`
  - `eks_managed_node_groups`
  - `self_managed_node_groups`
  - EKS Managed Node Group sub-module (was `node_groups`):
    - `launch_template_id`
    - `launch_template_arn`
    - `launch_template_latest_version`
    - `node_group_arn`
    - `node_group_id`
    - `node_group_resources`
    - `node_group_status`
    - `security_group_arn`
    - `security_group_id`
    - `iam_role_name`
    - `iam_role_arn`
    - `iam_role_unique_id`
  - Fargate profile sub-module (was `fargate`):
    - `iam_role_unique_id`
    - `fargate_profile_status`
## Upgrade Migrations
### Before 17.x Example

```hcl
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 17.0"
cluster_name = local.name
cluster_version = local.cluster_version
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
vpc_id = module.vpc.vpc_id
subnets = module.vpc.private_subnets
# Managed Node Groups
node_groups_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}
node_groups = {
node_group = {
min_capacity = 1
max_capacity = 10
desired_capacity = 1
instance_types = ["t3.large"]
capacity_type = "SPOT"
update_config = {
max_unavailable_percentage = 50
}
k8s_labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
taints = [
{
key = "dedicated"
value = "gpuGroup"
effect = "NO_SCHEDULE"
}
]
additional_tags = {
ExtraTag = "example"
}
}
}
# Worker groups
worker_additional_security_group_ids = [aws_security_group.additional.id]
worker_groups_launch_template = [
{
name = "worker-group"
override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
spot_instance_pools = 4
asg_max_size = 5
asg_desired_capacity = 2
kubelet_extra_args = "--node-labels=node.kubernetes.io/lifecycle=spot"
public_ip = true
},
]
# Fargate
fargate_profiles = {
default = {
name = "default"
selectors = [
{
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
},
{
namespace = "default"
}
]
tags = {
Owner = "test"
}
timeouts = {
create = "20m"
delete = "20m"
}
}
}
tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
### After 18.x Example

```hcl
module "cluster_after" {
source = "terraform-aws-modules/eks/aws"
version = "~> 18.0"
cluster_name = local.name
cluster_version = local.cluster_version
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}
eks_managed_node_groups = {
node_group = {
min_size = 1
max_size = 10
desired_size = 1
instance_types = ["t3.large"]
capacity_type = "SPOT"
update_config = {
max_unavailable_percentage = 50
}
labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
taints = [
{
key = "dedicated"
value = "gpuGroup"
effect = "NO_SCHEDULE"
}
]
tags = {
ExtraTag = "example"
}
}
}
self_managed_node_group_defaults = {
vpc_security_group_ids = [aws_security_group.additional.id]
}
self_managed_node_groups = {
worker_group = {
name = "worker-group"
min_size = 1
max_size = 5
desired_size = 2
instance_type = "m4.large"
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
delete_on_termination = true
encrypted = false
volume_size = 100
volume_type = "gp2"
}
}
}
use_mixed_instances_policy = true
mixed_instances_policy = {
instances_distribution = {
spot_instance_pools = 4
}
override = [
{ instance_type = "m5.large" },
{ instance_type = "m5a.large" },
{ instance_type = "m5d.large" },
{ instance_type = "m5ad.large" },
]
}
}
}
# Fargate
fargate_profiles = {
default = {
name = "default"
selectors = [
{
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
},
{
namespace = "default"
}
]
tags = {
Owner = "test"
}
timeouts = {
create = "20m"
delete = "20m"
}
}
}
tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
### Diff of before <> after

```diff
module "eks" {
source = "terraform-aws-modules/eks/aws"
- version = "~> 17.0"
+ version = "~> 18.0"
cluster_name = local.name
cluster_version = local.cluster_version
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
vpc_id = module.vpc.vpc_id
- subnets = module.vpc.private_subnets
+ subnet_ids = module.vpc.private_subnets
- # Managed Node Groups
- node_groups_defaults = {
+ eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
disk_size = 50
}
- node_groups = {
+ eks_managed_node_groups = {
node_group = {
- min_capacity = 1
- max_capacity = 10
- desired_capacity = 1
+ min_size = 1
+ max_size = 10
+ desired_size = 1
instance_types = ["t3.large"]
capacity_type = "SPOT"
update_config = {
max_unavailable_percentage = 50
}
- k8s_labels = {
+ labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
taints = [
{
key = "dedicated"
value = "gpuGroup"
effect = "NO_SCHEDULE"
}
]
- additional_tags = {
+ tags = {
ExtraTag = "example"
}
}
}
- # Worker groups
- worker_additional_security_group_ids = [aws_security_group.additional.id]
-
- worker_groups_launch_template = [
- {
- name = "worker-group"
- override_instance_types = ["m5.large", "m5a.large", "m5d.large", "m5ad.large"]
- spot_instance_pools = 4
- asg_max_size = 5
- asg_desired_capacity = 2
- kubelet_extra_args = "--node-labels=node.kubernetes.io/lifecycle=spot"
- public_ip = true
- },
- ]
+ self_managed_node_group_defaults = {
+ vpc_security_group_ids = [aws_security_group.additional.id]
+ }
+
+ self_managed_node_groups = {
+ worker_group = {
+ name = "worker-group"
+
+ min_size = 1
+ max_size = 5
+ desired_size = 2
+ instance_type = "m4.large"
+
+ bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
+
+ block_device_mappings = {
+ xvda = {
+ device_name = "/dev/xvda"
+ ebs = {
+ delete_on_termination = true
+ encrypted = false
+ volume_size = 100
+ volume_type = "gp2"
+ }
+
+ }
+ }
+
+ use_mixed_instances_policy = true
+ mixed_instances_policy = {
+ instances_distribution = {
+ spot_instance_pools = 4
+ }
+
+ override = [
+ { instance_type = "m5.large" },
+ { instance_type = "m5a.large" },
+ { instance_type = "m5d.large" },
+ { instance_type = "m5ad.large" },
+ ]
+ }
+ }
+ }
# Fargate
fargate_profiles = {
default = {
name = "default"
selectors = [
{
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
},
{
namespace = "default"
}
]
tags = {
Owner = "test"
}
timeouts = {
create = "20m"
delete = "20m"
}
}
}
tags = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
```
### Attaching an IAM role policy to a Fargate profile
#### Before 17.x

```hcl
resource "aws_iam_role_policy_attachment" "default" {
role = module.eks.fargate_iam_role_name
policy_arn = aws_iam_policy.default.arn
}
#### After 18.x

```hcl
# Attach the policy to an "example" Fargate profile
resource "aws_iam_role_policy_attachment" "default" {
  role       = module.eks.fargate_profiles["example"].iam_role_name
  policy_arn = aws_iam_policy.default.arn
}
```
Or:

```hcl
# Attach the policy to all Fargate profiles
resource "aws_iam_role_policy_attachment" "default" {
  for_each = module.eks.fargate_profiles

  role       = each.value.iam_role_name
  policy_arn = aws_iam_policy.default.arn
}
```