feat!: Replace the use of aws-auth configmap with EKS cluster access entry (#2858)

* feat: Replace `resolve_conflicts` with `resolve_conflicts_on_create`/`resolve_conflicts_on_delete`; raise the MSV of the AWS provider to `v5.0` to support them

* fix: Replace the dynamic DNS suffix in `sts:AssumeRole` API calls with a static suffix

* feat: Add module tag

* feat: Align Karpenter permissions with Karpenter v1beta1/v0.32 permissions from upstream

* refactor: Move `aws-auth` ConfigMap functionality to its own sub-module

* chore: Update examples

* feat: Add state `moved` block for Karpenter Pod Identity role rename

* fix: Correct variable `create` description

* feat: Add support for cluster access entries

* chore: Bump MSV of Terraform to `1.3`

* fix: Replace defunct kubectl provider with an updated forked equivalent

* chore: Update and validate examples for access entry; clean up provider usage

* docs: Correct redundant variable descriptions

* feat: Add support for the CloudWatch log group class argument

* fix: Update usage tag placement, fix Karpenter event spelling, add upcoming changes section to upgrade guide

* feat: Update Karpenter module to generalize naming used and align policy with the upstream Karpenter policy

* feat: Add native support for Windows based managed nodegroups similar to AL2 and Bottlerocket

* feat: Update self-managed nodegroup module to use latest features of ASG

* docs: Update and simplify docs

* fix: Correct variable description for AMI types

* fix: Update upgrade guide with changes; rename Karpenter controller resource names to support migrating for users

* docs: Complete upgrade guide docs for migration and changes applied

* Update examples/karpenter/README.md

Co-authored-by: Anton Babenko <anton@antonbabenko.com>

* Update examples/outposts/README.md

Co-authored-by: Anton Babenko <anton@antonbabenko.com>

* Update modules/karpenter/README.md

Co-authored-by: Anton Babenko <anton@antonbabenko.com>

---------

Co-authored-by: Anton Babenko <anton@antonbabenko.com>
Author: Bryant Biggs
Date: 2024-02-02 09:36:25 -05:00 (committed via GitHub)
parent 2cb1fac31b
commit 6b40bdbb1d
71 changed files with 1809 additions and 2136 deletions


@@ -1,107 +0,0 @@
# Complete AWS EKS Cluster
Configuration in this directory creates an AWS EKS cluster with a broad mix of the features and settings provided by this module:
- AWS EKS cluster
- Disabled EKS cluster
- Self managed node group
- Externally attached self managed node group
- Disabled self managed node group
- EKS managed node group
- Externally attached EKS managed node group
- Disabled EKS managed node group
- Fargate profile
- Externally attached Fargate profile
- Disabled Fargate profile
- Cluster addons: CoreDNS, Kube-Proxy, and VPC-CNI
- IAM roles for service accounts
## Usage
To run this example you need to execute:
```bash
$ terraform init
$ terraform plan
$ terraform apply
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
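For orientation, a minimal instantiation of the root module looks roughly like the following sketch (the cluster name, version, and VPC references are placeholders, not values from this example):

```hcl
module "eks" {
  source = "../.."

  # Hypothetical values for illustration only
  cluster_name    = "my-cluster"
  cluster_version = "1.28"

  # Assumes a VPC module exists elsewhere in the configuration
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets
}
```

The full `main.tf` in this directory layers node groups, Fargate profiles, add-ons, and security group rules on top of this skeleton.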
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| <a name="module_disabled_eks"></a> [disabled\_eks](#module\_disabled\_eks) | ../.. | n/a |
| <a name="module_disabled_eks_managed_node_group"></a> [disabled\_eks\_managed\_node\_group](#module\_disabled\_eks\_managed\_node\_group) | ../../modules/eks-managed-node-group | n/a |
| <a name="module_disabled_fargate_profile"></a> [disabled\_fargate\_profile](#module\_disabled\_fargate\_profile) | ../../modules/fargate-profile | n/a |
| <a name="module_disabled_self_managed_node_group"></a> [disabled\_self\_managed\_node\_group](#module\_disabled\_self\_managed\_node\_group) | ../../modules/self-managed-node-group | n/a |
| <a name="module_eks"></a> [eks](#module\_eks) | ../.. | n/a |
| <a name="module_eks_managed_node_group"></a> [eks\_managed\_node\_group](#module\_eks\_managed\_node\_group) | ../../modules/eks-managed-node-group | n/a |
| <a name="module_fargate_profile"></a> [fargate\_profile](#module\_fargate\_profile) | ../../modules/fargate-profile | n/a |
| <a name="module_kms"></a> [kms](#module\_kms) | terraform-aws-modules/kms/aws | ~> 1.5 |
| <a name="module_self_managed_node_group"></a> [self\_managed\_node\_group](#module\_self\_managed\_node\_group) | ../../modules/self-managed-node-group | n/a |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 4.0 |
## Resources
| Name | Type |
|------|------|
| [aws_iam_policy.additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_security_group.additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
## Inputs
No inputs.
## Outputs
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |
| <a name="output_cluster_arn"></a> [cluster\_arn](#output\_cluster\_arn) | The Amazon Resource Name (ARN) of the cluster |
| <a name="output_cluster_certificate_authority_data"></a> [cluster\_certificate\_authority\_data](#output\_cluster\_certificate\_authority\_data) | Base64 encoded certificate data required to communicate with the cluster |
| <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for your Kubernetes API server |
| <a name="output_cluster_iam_role_arn"></a> [cluster\_iam\_role\_arn](#output\_cluster\_iam\_role\_arn) | IAM role ARN of the EKS cluster |
| <a name="output_cluster_iam_role_name"></a> [cluster\_iam\_role\_name](#output\_cluster\_iam\_role\_name) | IAM role name of the EKS cluster |
| <a name="output_cluster_iam_role_unique_id"></a> [cluster\_iam\_role\_unique\_id](#output\_cluster\_iam\_role\_unique\_id) | Stable and unique string identifying the IAM role |
| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The ID of the EKS cluster. Note: currently a value is returned only for local EKS clusters created on Outposts |
| <a name="output_cluster_identity_providers"></a> [cluster\_identity\_providers](#output\_cluster\_identity\_providers) | Map of attribute maps for all EKS identity providers enabled |
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The name of the EKS cluster |
| <a name="output_cluster_oidc_issuer_url"></a> [cluster\_oidc\_issuer\_url](#output\_cluster\_oidc\_issuer\_url) | The URL on the EKS cluster for the OpenID Connect identity provider |
| <a name="output_cluster_platform_version"></a> [cluster\_platform\_version](#output\_cluster\_platform\_version) | Platform version for the cluster |
| <a name="output_cluster_security_group_arn"></a> [cluster\_security\_group\_arn](#output\_cluster\_security\_group\_arn) | Amazon Resource Name (ARN) of the cluster security group |
| <a name="output_cluster_security_group_id"></a> [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console |
| <a name="output_cluster_status"></a> [cluster\_status](#output\_cluster\_status) | Status of the EKS cluster. One of `CREATING`, `ACTIVE`, `DELETING`, `FAILED` |
| <a name="output_cluster_tls_certificate_sha1_fingerprint"></a> [cluster\_tls\_certificate\_sha1\_fingerprint](#output\_cluster\_tls\_certificate\_sha1\_fingerprint) | The SHA1 fingerprint of the public key of the cluster's certificate |
| <a name="output_eks_managed_node_groups"></a> [eks\_managed\_node\_groups](#output\_eks\_managed\_node\_groups) | Map of attribute maps for all EKS managed node groups created |
| <a name="output_eks_managed_node_groups_autoscaling_group_names"></a> [eks\_managed\_node\_groups\_autoscaling\_group\_names](#output\_eks\_managed\_node\_groups\_autoscaling\_group\_names) | List of the autoscaling group names created by EKS managed node groups |
| <a name="output_fargate_profiles"></a> [fargate\_profiles](#output\_fargate\_profiles) | Map of attribute maps for all EKS Fargate Profiles created |
| <a name="output_kms_key_arn"></a> [kms\_key\_arn](#output\_kms\_key\_arn) | The Amazon Resource Name (ARN) of the key |
| <a name="output_kms_key_id"></a> [kms\_key\_id](#output\_kms\_key\_id) | The globally unique identifier for the key |
| <a name="output_kms_key_policy"></a> [kms\_key\_policy](#output\_kms\_key\_policy) | The IAM resource policy set on the key |
| <a name="output_oidc_provider"></a> [oidc\_provider](#output\_oidc\_provider) | The OpenID Connect identity provider (issuer URL without leading `https://`) |
| <a name="output_oidc_provider_arn"></a> [oidc\_provider\_arn](#output\_oidc\_provider\_arn) | The ARN of the OIDC Provider if `enable_irsa = true` |
| <a name="output_self_managed_node_groups"></a> [self\_managed\_node\_groups](#output\_self\_managed\_node\_groups) | Map of attribute maps for all self managed node groups created |
| <a name="output_self_managed_node_groups_autoscaling_group_names"></a> [self\_managed\_node\_groups\_autoscaling\_group\_names](#output\_self\_managed\_node\_groups\_autoscaling\_group\_names) | List of the autoscaling group names created by self-managed node groups |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->


@@ -1,482 +0,0 @@
provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
data "aws_availability_zones" "available" {}
data "aws_caller_identity" "current" {}
locals {
name = "ex-${replace(basename(path.cwd), "_", "-")}"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
################################################################################
# EKS Module
################################################################################
module "eks" {
source = "../.."
cluster_name = local.name
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
preserve = true
most_recent = true
timeouts = {
create = "25m"
delete = "10m"
}
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
}
}
# External encryption key
create_kms_key = false
cluster_encryption_config = {
resources = ["secrets"]
provider_key_arn = module.kms.key_arn
}
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.intra_subnets
# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress_nodes_ephemeral_ports_tcp = {
description = "Nodes on ephemeral ports"
protocol = "tcp"
from_port = 1025
to_port = 65535
type = "ingress"
source_node_security_group = true
}
# Test: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/2319
ingress_source_security_group_id = {
description = "Ingress from another computed security group"
protocol = "tcp"
from_port = 22
to_port = 22
type = "ingress"
source_security_group_id = aws_security_group.additional.id
}
}
# Extend node-to-node security group rules
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
# Test: https://github.com/terraform-aws-modules/terraform-aws-eks/pull/2319
ingress_source_security_group_id = {
description = "Ingress from another computed security group"
protocol = "tcp"
from_port = 22
to_port = 22
type = "ingress"
source_security_group_id = aws_security_group.additional.id
}
}
# Self Managed Node Group(s)
self_managed_node_group_defaults = {
vpc_security_group_ids = [aws_security_group.additional.id]
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
instance_refresh = {
strategy = "Rolling"
preferences = {
min_healthy_percentage = 66
}
}
}
self_managed_node_groups = {
spot = {
instance_type = "m5.large"
instance_market_options = {
market_type = "spot"
}
pre_bootstrap_user_data = <<-EOT
echo "foo"
export FOO=bar
EOT
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
post_bootstrap_user_data = <<-EOT
cd /tmp
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
sudo systemctl enable amazon-ssm-agent
sudo systemctl start amazon-ssm-agent
EOT
}
}
# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
attach_cluster_primary_security_group = true
vpc_security_group_ids = [aws_security_group.additional.id]
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
}
eks_managed_node_groups = {
blue = {}
green = {
min_size = 1
max_size = 10
desired_size = 1
instance_types = ["t3.large"]
capacity_type = "SPOT"
labels = {
Environment = "test"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
taints = {
dedicated = {
key = "dedicated"
value = "gpuGroup"
effect = "NO_SCHEDULE"
}
}
block_device_mappings = {
xvda = {
device_name = "/dev/xvda"
ebs = {
volume_size = 100
volume_type = "gp3"
iops = 3000
throughput = 150
delete_on_termination = true
}
}
}
update_config = {
max_unavailable_percentage = 33 # or set `max_unavailable`
}
tags = {
ExtraTag = "example"
}
}
}
# Fargate Profile(s)
fargate_profiles = {
default = {
name = "default"
selectors = [
{
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
},
{
namespace = "default"
}
]
tags = {
Owner = "test"
}
timeouts = {
create = "20m"
delete = "20m"
}
}
}
# Creating a new cluster where both an identity provider and a Fargate profile are
# provisioned at the same time will result in conflicts, since only one can be
# created at a time
# # OIDC Identity provider
# cluster_identity_providers = {
# sts = {
# client_id = "sts.amazonaws.com"
# }
# }
# aws-auth configmap
manage_aws_auth_configmap = true
aws_auth_node_iam_role_arns_non_windows = [
module.eks_managed_node_group.iam_role_arn,
module.self_managed_node_group.iam_role_arn,
]
aws_auth_fargate_profile_pod_execution_role_arns = [
module.fargate_profile.fargate_profile_pod_execution_role_arn
]
aws_auth_roles = [
{
rolearn = module.eks_managed_node_group.iam_role_arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
"system:nodes",
]
},
{
rolearn = module.self_managed_node_group.iam_role_arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
"system:nodes",
]
},
{
rolearn = module.fargate_profile.fargate_profile_pod_execution_role_arn
username = "system:node:{{SessionName}}"
groups = [
"system:bootstrappers",
"system:nodes",
"system:node-proxier",
]
}
]
aws_auth_users = [
{
userarn = "arn:aws:iam::666666666666:user/user1"
username = "user1"
groups = ["system:masters"]
},
{
userarn = "arn:aws:iam::666666666666:user/user2"
username = "user2"
groups = ["system:masters"]
},
]
aws_auth_accounts = [
"777777777777",
"888888888888",
]
tags = local.tags
}
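Under the cluster access entry model this PR introduces, the `aws-auth` inputs above are replaced by an `access_entries` input on the module. A hedged sketch of the equivalent (the map key, principal ARN, and exact input shape are illustrative, not taken from this example):

```hcl
  # Hypothetical replacement for the aws-auth users above
  access_entries = {
    admin = {
      # Placeholder principal; substitute a real IAM user/role ARN
      principal_arn = "arn:aws:iam::666666666666:user/user1"

      policy_associations = {
        admin = {
          # AWS-managed EKS access policy granting cluster-admin
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
```

Node group and Fargate profile roles no longer need explicit entries; EKS creates their access entries automatically when the node groups are provisioned.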
################################################################################
# Sub-Module Usage on Existing/Separate Cluster
################################################################################
module "eks_managed_node_group" {
source = "../../modules/eks-managed-node-group"
name = "separate-eks-mng"
cluster_name = module.eks.cluster_name
cluster_version = module.eks.cluster_version
subnet_ids = module.vpc.private_subnets
cluster_primary_security_group_id = module.eks.cluster_primary_security_group_id
vpc_security_group_ids = [
module.eks.cluster_security_group_id,
]
ami_type = "BOTTLEROCKET_x86_64"
platform = "bottlerocket"
# this will get added to what AWS provides
bootstrap_extra_args = <<-EOT
# extra args added
[settings.kernel]
lockdown = "integrity"
[settings.kubernetes.node-labels]
"label1" = "foo"
"label2" = "bar"
EOT
tags = merge(local.tags, { Separate = "eks-managed-node-group" })
}
module "self_managed_node_group" {
source = "../../modules/self-managed-node-group"
name = "separate-self-mng"
cluster_name = module.eks.cluster_name
cluster_version = module.eks.cluster_version
cluster_endpoint = module.eks.cluster_endpoint
cluster_auth_base64 = module.eks.cluster_certificate_authority_data
instance_type = "m5.large"
subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [
module.eks.cluster_primary_security_group_id,
module.eks.cluster_security_group_id,
]
tags = merge(local.tags, { Separate = "self-managed-node-group" })
}
module "fargate_profile" {
source = "../../modules/fargate-profile"
name = "separate-fargate-profile"
cluster_name = module.eks.cluster_name
subnet_ids = module.vpc.private_subnets
selectors = [{
namespace = "kube-system"
}]
tags = merge(local.tags, { Separate = "fargate-profile" })
}
################################################################################
# Disabled creation
################################################################################
module "disabled_eks" {
source = "../.."
create = false
}
module "disabled_fargate_profile" {
source = "../../modules/fargate-profile"
create = false
}
module "disabled_eks_managed_node_group" {
source = "../../modules/eks-managed-node-group"
create = false
}
module "disabled_self_managed_node_group" {
source = "../../modules/self-managed-node-group"
create = false
}
################################################################################
# Supporting resources
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 4.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}
resource "aws_security_group" "additional" {
name_prefix = "${local.name}-additional"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
]
}
tags = merge(local.tags, { Name = "${local.name}-additional" })
}
resource "aws_iam_policy" "additional" {
name = "${local.name}-additional"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:Describe*",
]
Effect = "Allow"
Resource = "*"
},
]
})
}
module "kms" {
source = "terraform-aws-modules/kms/aws"
version = "~> 1.5"
aliases = ["eks/${local.name}"]
description = "${local.name} cluster encryption key"
enable_default_policy = true
key_owners = [data.aws_caller_identity.current.arn]
tags = local.tags
}


@@ -1,192 +0,0 @@
################################################################################
# Cluster
################################################################################
output "cluster_arn" {
description = "The Amazon Resource Name (ARN) of the cluster"
value = module.eks.cluster_arn
}
output "cluster_certificate_authority_data" {
description = "Base64 encoded certificate data required to communicate with the cluster"
value = module.eks.cluster_certificate_authority_data
}
output "cluster_endpoint" {
description = "Endpoint for your Kubernetes API server"
value = module.eks.cluster_endpoint
}
output "cluster_id" {
description = "The ID of the EKS cluster. Note: currently a value is returned only for local EKS clusters created on Outposts"
value = module.eks.cluster_id
}
output "cluster_name" {
description = "The name of the EKS cluster"
value = module.eks.cluster_name
}
output "cluster_oidc_issuer_url" {
description = "The URL on the EKS cluster for the OpenID Connect identity provider"
value = module.eks.cluster_oidc_issuer_url
}
output "cluster_platform_version" {
description = "Platform version for the cluster"
value = module.eks.cluster_platform_version
}
output "cluster_status" {
description = "Status of the EKS cluster. One of `CREATING`, `ACTIVE`, `DELETING`, `FAILED`"
value = module.eks.cluster_status
}
output "cluster_security_group_id" {
description = "Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console"
value = module.eks.cluster_security_group_id
}
################################################################################
# KMS Key
################################################################################
output "kms_key_arn" {
description = "The Amazon Resource Name (ARN) of the key"
value = module.eks.kms_key_arn
}
output "kms_key_id" {
description = "The globally unique identifier for the key"
value = module.eks.kms_key_id
}
output "kms_key_policy" {
description = "The IAM resource policy set on the key"
value = module.eks.kms_key_policy
}
################################################################################
# Security Group
################################################################################
output "cluster_security_group_arn" {
description = "Amazon Resource Name (ARN) of the cluster security group"
value = module.eks.cluster_security_group_arn
}
################################################################################
# IRSA
################################################################################
output "oidc_provider" {
description = "The OpenID Connect identity provider (issuer URL without leading `https://`)"
value = module.eks.oidc_provider
}
output "oidc_provider_arn" {
description = "The ARN of the OIDC Provider if `enable_irsa = true`"
value = module.eks.oidc_provider_arn
}
output "cluster_tls_certificate_sha1_fingerprint" {
description = "The SHA1 fingerprint of the public key of the cluster's certificate"
value = module.eks.cluster_tls_certificate_sha1_fingerprint
}
################################################################################
# IAM Role
################################################################################
output "cluster_iam_role_name" {
description = "IAM role name of the EKS cluster"
value = module.eks.cluster_iam_role_name
}
output "cluster_iam_role_arn" {
description = "IAM role ARN of the EKS cluster"
value = module.eks.cluster_iam_role_arn
}
output "cluster_iam_role_unique_id" {
description = "Stable and unique string identifying the IAM role"
value = module.eks.cluster_iam_role_unique_id
}
################################################################################
# EKS Addons
################################################################################
output "cluster_addons" {
description = "Map of attribute maps for all EKS cluster addons enabled"
value = module.eks.cluster_addons
}
################################################################################
# EKS Identity Provider
################################################################################
output "cluster_identity_providers" {
description = "Map of attribute maps for all EKS identity providers enabled"
value = module.eks.cluster_identity_providers
}
################################################################################
# CloudWatch Log Group
################################################################################
output "cloudwatch_log_group_name" {
description = "Name of cloudwatch log group created"
value = module.eks.cloudwatch_log_group_name
}
output "cloudwatch_log_group_arn" {
description = "Arn of cloudwatch log group created"
value = module.eks.cloudwatch_log_group_arn
}
################################################################################
# Fargate Profile
################################################################################
output "fargate_profiles" {
description = "Map of attribute maps for all EKS Fargate Profiles created"
value = module.eks.fargate_profiles
}
################################################################################
# EKS Managed Node Group
################################################################################
output "eks_managed_node_groups" {
description = "Map of attribute maps for all EKS managed node groups created"
value = module.eks.eks_managed_node_groups
}
output "eks_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by EKS managed node groups"
value = module.eks.eks_managed_node_groups_autoscaling_group_names
}
################################################################################
# Self Managed Node Group
################################################################################
output "self_managed_node_groups" {
description = "Map of attribute maps for all self managed node groups created"
value = module.eks.self_managed_node_groups
}
output "self_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by self-managed node groups"
value = module.eks.self_managed_node_groups_autoscaling_group_names
}
################################################################################
# Additional
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
}


@@ -1,14 +0,0 @@
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
}
}
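Per the commit bullets above and the updated README tables later in this diff (Terraform `>= 1.3`, AWS provider `>= 5.34`, `kubernetes` provider dropped), the post-PR `versions.tf` for this example would read approximately:

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.34"
    }
  }
}
```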


@@ -29,31 +29,31 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.34 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.34 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| <a name="module_ebs_kms_key"></a> [ebs\_kms\_key](#module\_ebs\_kms\_key) | terraform-aws-modules/kms/aws | ~> 1.5 |
| <a name="module_disabled_eks"></a> [disabled\_eks](#module\_disabled\_eks) | ../.. | n/a |
| <a name="module_disabled_eks_managed_node_group"></a> [disabled\_eks\_managed\_node\_group](#module\_disabled\_eks\_managed\_node\_group) | ../../modules/eks-managed-node-group | n/a |
| <a name="module_ebs_kms_key"></a> [ebs\_kms\_key](#module\_ebs\_kms\_key) | terraform-aws-modules/kms/aws | ~> 2.1 |
| <a name="module_eks"></a> [eks](#module\_eks) | ../.. | n/a |
| <a name="module_eks_managed_node_group"></a> [eks\_managed\_node\_group](#module\_eks\_managed\_node\_group) | ../../modules/eks-managed-node-group | n/a |
| <a name="module_key_pair"></a> [key\_pair](#module\_key\_pair) | terraform-aws-modules/key-pair/aws | ~> 2.0 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 4.0 |
| <a name="module_vpc_cni_irsa"></a> [vpc\_cni\_irsa](#module\_vpc\_cni\_irsa) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.0 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 5.0 |
## Resources
| Name | Type |
|------|------|
| [aws_autoscaling_group_tag.cluster_autoscaler_label_tags](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group_tag) | resource |
| [aws_iam_policy.node_additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_security_group.remote_access](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_ami.eks_default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami) | data source |
@@ -70,7 +70,7 @@ No inputs.
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_access_entries"></a> [access\_entries](#output\_access\_entries) | Map of access entries created and their attributes |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |


@@ -2,18 +2,6 @@ provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
data "aws_caller_identity" "current" {}
data "aws_availability_zones" "available" {}
@@ -44,14 +32,7 @@ module "eks" {
cluster_endpoint_public_access = true
# IPV6
cluster_ip_family = "ipv6"
# We are using the IRSA created below for permissions
# However, we have to deploy with the policy attached FIRST (when creating a fresh cluster)
# and then turn this off after the cluster/node group is created. Without this initial policy,
# the VPC CNI fails to assign IPs and nodes cannot join the cluster
# See https://github.com/aws/containers-roadmap/issues/1666 for more context
# TODO - remove this policy once AWS releases a managed version similar to AmazonEKS_CNI_Policy (IPv4)
cluster_ip_family = "ipv6"
create_cni_ipv6_iam_policy = true
cluster_addons = {
@@ -62,9 +43,8 @@ module "eks" {
most_recent = true
}
vpc-cni = {
most_recent = true
before_compute = true
service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
most_recent = true
before_compute = true
configuration_values = jsonencode({
env = {
# Reference docs https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
@@ -79,18 +59,9 @@ module "eks" {
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.intra_subnets
manage_aws_auth_configmap = true
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
# We are using the IRSA created below for permissions
# However, we have to deploy with the policy attached FIRST (when creating a fresh cluster)
# and then turn this off after the cluster/node group is created. Without this initial policy,
# the VPC CNI fails to assign IPs and nodes cannot join the cluster
# See https://github.com/aws/containers-roadmap/issues/1666 for more context
iam_role_attach_cni_policy = true
}
eks_managed_node_groups = {
@@ -264,27 +235,6 @@ module "eks" {
additional = aws_iam_policy.node_additional.arn
}
schedules = {
scale-up = {
min_size = 2
max_size = "-1" # Retains current max size
desired_size = 2
start_time = "2023-03-05T00:00:00Z"
end_time = "2024-03-05T00:00:00Z"
time_zone = "Etc/GMT+0"
recurrence = "0 0 * * *"
},
scale-down = {
min_size = 0
max_size = "-1" # Retains current max size
desired_size = 0
start_time = "2023-03-05T12:00:00Z"
end_time = "2024-03-05T12:00:00Z"
time_zone = "Etc/GMT+0"
recurrence = "0 12 * * *"
}
}
tags = {
ExtraTag = "EKS managed node group complete example"
}
@@ -294,13 +244,59 @@ module "eks" {
tags = local.tags
}
module "disabled_eks" {
source = "../.."
create = false
}
################################################################################
# Sub-Module Usage on Existing/Separate Cluster
################################################################################
module "eks_managed_node_group" {
source = "../../modules/eks-managed-node-group"
name = "separate-eks-mng"
cluster_name = module.eks.cluster_name
cluster_version = module.eks.cluster_version
subnet_ids = module.vpc.private_subnets
cluster_primary_security_group_id = module.eks.cluster_primary_security_group_id
vpc_security_group_ids = [
module.eks.cluster_security_group_id,
]
ami_type = "BOTTLEROCKET_x86_64"
platform = "bottlerocket"
# this will get added to what AWS provides
bootstrap_extra_args = <<-EOT
# extra args added
[settings.kernel]
lockdown = "integrity"
[settings.kubernetes.node-labels]
"label1" = "foo"
"label2" = "bar"
EOT
tags = merge(local.tags, { Separate = "eks-managed-node-group" })
}
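The `bootstrap_extra_args` TOML above is appended to the user data the module already renders for Bottlerocket nodes. As a rough sketch (placeholder values; the module fills in the real cluster name, endpoint, and CA data), the node effectively boots with settings along these lines:

```toml
# Hedged sketch of merged Bottlerocket user data; angle-bracket values
# are placeholders the module resolves at plan/apply time
[settings.kubernetes]
cluster-name        = "<cluster_name>"
api-server          = "<cluster_endpoint>"
cluster-certificate = "<base64 CA data>"

# Extra args supplied by the example above
[settings.kernel]
lockdown = "integrity"

[settings.kubernetes.node-labels]
"label1" = "foo"
"label2" = "bar"
```

Because Bottlerocket user data is plain TOML, the extra args simply extend the generated document rather than wrapping a shell bootstrap script as on AL2.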
module "disabled_eks_managed_node_group" {
source = "../../modules/eks-managed-node-group"
create = false
}
################################################################################
# Supporting Resources
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 4.0"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
@@ -333,27 +329,9 @@ module "vpc" {
tags = local.tags
}
module "vpc_cni_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "~> 5.0"
role_name_prefix = "VPC-CNI-IRSA"
attach_vpc_cni_policy = true
vpc_cni_enable_ipv6 = true
oidc_providers = {
main = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:aws-node"]
}
}
tags = local.tags
}
module "ebs_kms_key" {
source = "terraform-aws-modules/kms/aws"
version = "~> 1.5"
version = "~> 2.1"
description = "Customer managed key to encrypt EKS managed node group volumes"
@@ -458,52 +436,3 @@ data "aws_ami" "eks_default_bottlerocket" {
values = ["bottlerocket-aws-k8s-${local.cluster_version}-x86_64-*"]
}
}
################################################################################
# Tags for the ASG to support cluster-autoscaler scale up from 0
################################################################################
locals {
# We need to lookup K8s taint effect from the AWS API value
taint_effects = {
NO_SCHEDULE = "NoSchedule"
NO_EXECUTE = "NoExecute"
PREFER_NO_SCHEDULE = "PreferNoSchedule"
}
cluster_autoscaler_label_tags = merge([
for name, group in module.eks.eks_managed_node_groups : {
for label_name, label_value in coalesce(group.node_group_labels, {}) : "${name}|label|${label_name}" => {
autoscaling_group = group.node_group_autoscaling_group_names[0],
key = "k8s.io/cluster-autoscaler/node-template/label/${label_name}",
value = label_value,
}
}
]...)
cluster_autoscaler_taint_tags = merge([
for name, group in module.eks.eks_managed_node_groups : {
for taint in coalesce(group.node_group_taints, []) : "${name}|taint|${taint.key}" => {
autoscaling_group = group.node_group_autoscaling_group_names[0],
key = "k8s.io/cluster-autoscaler/node-template/taint/${taint.key}"
value = "${taint.value}:${local.taint_effects[taint.effect]}"
}
}
]...)
cluster_autoscaler_asg_tags = merge(local.cluster_autoscaler_label_tags, local.cluster_autoscaler_taint_tags)
}
resource "aws_autoscaling_group_tag" "cluster_autoscaler_label_tags" {
for_each = local.cluster_autoscaler_asg_tags
autoscaling_group_name = each.value.autoscaling_group
tag {
key = each.value.key
value = each.value.value
propagate_at_launch = false
}
}

View File

@@ -47,6 +47,15 @@ output "cluster_primary_security_group_id" {
value = module.eks.cluster_primary_security_group_id
}
################################################################################
# Access Entry
################################################################################
output "access_entries" {
description = "Map of access entries created and their attributes"
value = module.eks.access_entries
}
################################################################################
# KMS Key
################################################################################
@@ -200,12 +209,3 @@ output "self_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by self-managed node groups"
value = module.eks.self_managed_node_groups_autoscaling_group_names
}
################################################################################
# Additional
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
}
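With the `aws_auth_configmap_yaml` output removed, additional principals are granted access through the module's access entries instead of ConfigMap edits. A minimal hedged sketch of that usage (input shape assumed from the v20 module and a hypothetical `ops` role; verify names against the module documentation):

```hcl
# Sketch only: grants an existing IAM role cluster-admin via an access entry
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"
  # ... cluster configuration ...

  access_entries = {
    ops = {
      # Hypothetical principal for illustration
      principal_arn = "arn:aws:iam::111122223333:role/ops"

      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}
```

Unlike the ConfigMap, these entries are managed through the EKS API, so no Kubernetes provider is needed and no `exec`-based authentication block has to run during plan/apply.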

View File

@@ -1,14 +1,10 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
version = ">= 5.34"
}
}
}

View File

@@ -19,23 +19,23 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
| <a name="requirement_null"></a> [null](#requirement\_null) | >= 3.0 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.34 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.34 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| <a name="module_disabled_fargate_profile"></a> [disabled\_fargate\_profile](#module\_disabled\_fargate\_profile) | ../../modules/fargate-profile | n/a |
| <a name="module_eks"></a> [eks](#module\_eks) | ../.. | n/a |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 4.0 |
| <a name="module_fargate_profile"></a> [fargate\_profile](#module\_fargate\_profile) | ../../modules/fargate-profile | n/a |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 5.0 |
## Resources
@@ -52,7 +52,7 @@ No inputs.
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_access_entries"></a> [access\_entries](#output\_access\_entries) | Map of access entries created and their attributes |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |

View File

@@ -6,7 +6,7 @@ data "aws_availability_zones" "available" {}
locals {
name = "ex-${replace(basename(path.cwd), "_", "-")}"
cluster_version = "1.27"
cluster_version = "1.29"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
@@ -54,59 +54,72 @@ module "eks" {
}
}
fargate_profiles = merge(
{
example = {
name = "example"
selectors = [
{
namespace = "backend"
labels = {
Application = "backend"
}
},
{
namespace = "app-*"
labels = {
Application = "app-wildcard"
}
fargate_profiles = {
example = {
name = "example"
selectors = [
{
namespace = "backend"
labels = {
Application = "backend"
}
},
{
namespace = "app-*"
labels = {
Application = "app-wildcard"
}
]
# Using specific subnets instead of the subnets supplied for the cluster itself
subnet_ids = [module.vpc.private_subnets[1]]
tags = {
Owner = "secondary"
}
]
timeouts = {
create = "20m"
delete = "20m"
}
}
},
{ for i in range(3) :
"kube-system-${element(split("-", local.azs[i]), 2)}" => {
selectors = [
{ namespace = "kube-system" }
]
# We want to create a profile per AZ for high availability
subnet_ids = [element(module.vpc.private_subnets, i)]
# Using specific subnets instead of the subnets supplied for the cluster itself
subnet_ids = [module.vpc.private_subnets[1]]
tags = {
Owner = "secondary"
}
}
)
kube-system = {
selectors = [
{ namespace = "kube-system" }
]
}
}
tags = local.tags
}
################################################################################
# Sub-Module Usage on Existing/Separate Cluster
################################################################################
module "fargate_profile" {
source = "../../modules/fargate-profile"
name = "separate-fargate-profile"
cluster_name = module.eks.cluster_name
subnet_ids = module.vpc.private_subnets
selectors = [{
namespace = "kube-system"
}]
tags = merge(local.tags, { Separate = "fargate-profile" })
}
module "disabled_fargate_profile" {
source = "../../modules/fargate-profile"
create = false
}
################################################################################
# Supporting Resources
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 4.0"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr

View File

@@ -47,6 +47,15 @@ output "cluster_primary_security_group_id" {
value = module.eks.cluster_primary_security_group_id
}
################################################################################
# Access Entry
################################################################################
output "access_entries" {
description = "Map of access entries created and their attributes"
value = module.eks.access_entries
}
################################################################################
# KMS Key
################################################################################
@@ -200,12 +209,3 @@ output "self_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by self-managed node groups"
value = module.eks.self_managed_node_groups_autoscaling_group_names
}
################################################################################
# Additional
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
}

View File

@@ -1,18 +1,10 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.7"
}
null = {
source = "hashicorp/null"
version = ">= 3.0"
version = ">= 5.34"
}
}
}

View File

@@ -41,6 +41,9 @@ kubectl delete node -l karpenter.sh/provisioner-name=default
2. Remove the resources created by Terraform
```bash
# Necessary to avoid removing Terraform's permissions too soon, before it's finished
# cleaning up the resources it deployed inside the cluster
terraform state rm 'module.eks.aws_eks_access_entry.this["cluster_creator_admin"]' || true
terraform destroy
```
@@ -51,21 +54,19 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.34 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.7 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 1.14 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |
| <a name="requirement_null"></a> [null](#requirement\_null) | >= 3.0 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 2.0 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
| <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 4.57 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.34 |
| <a name="provider_aws.virginia"></a> [aws.virginia](#provider\_aws.virginia) | >= 5.34 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.7 |
| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 1.14 |
| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | >= 2.0 |
## Modules
@@ -73,6 +74,7 @@ Note that this example may create resources which cost money. Run `terraform des
|------|--------|---------|
| <a name="module_eks"></a> [eks](#module\_eks) | ../.. | n/a |
| <a name="module_karpenter"></a> [karpenter](#module\_karpenter) | ../../modules/karpenter | n/a |
| <a name="module_karpenter_disabled"></a> [karpenter\_disabled](#module\_karpenter\_disabled) | ../../modules/karpenter | n/a |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 5.0 |
## Resources
@@ -80,9 +82,9 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Type |
|------|------|
| [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_example_deployment](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_class](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.karpenter_node_pool](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
| [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |
@@ -94,7 +96,7 @@ No inputs.
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_access_entries"></a> [access\_entries](#output\_access\_entries) | Map of access entries created and their attributes |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |
@@ -118,19 +120,19 @@ No inputs.
| <a name="output_eks_managed_node_groups_autoscaling_group_names"></a> [eks\_managed\_node\_groups\_autoscaling\_group\_names](#output\_eks\_managed\_node\_groups\_autoscaling\_group\_names) | List of the autoscaling group names created by EKS managed node groups |
| <a name="output_fargate_profiles"></a> [fargate\_profiles](#output\_fargate\_profiles) | Map of attribute maps for all EKS Fargate Profiles created |
| <a name="output_karpenter_event_rules"></a> [karpenter\_event\_rules](#output\_karpenter\_event\_rules) | Map of the event rules created and their attributes |
| <a name="output_karpenter_iam_role_arn"></a> [karpenter\_iam\_role\_arn](#output\_karpenter\_iam\_role\_arn) | The Amazon Resource Name (ARN) specifying the controller IAM role |
| <a name="output_karpenter_iam_role_name"></a> [karpenter\_iam\_role\_name](#output\_karpenter\_iam\_role\_name) | The name of the controller IAM role |
| <a name="output_karpenter_iam_role_unique_id"></a> [karpenter\_iam\_role\_unique\_id](#output\_karpenter\_iam\_role\_unique\_id) | Stable and unique string identifying the controller IAM role |
| <a name="output_karpenter_instance_profile_arn"></a> [karpenter\_instance\_profile\_arn](#output\_karpenter\_instance\_profile\_arn) | ARN assigned by AWS to the instance profile |
| <a name="output_karpenter_instance_profile_id"></a> [karpenter\_instance\_profile\_id](#output\_karpenter\_instance\_profile\_id) | Instance profile's ID |
| <a name="output_karpenter_instance_profile_name"></a> [karpenter\_instance\_profile\_name](#output\_karpenter\_instance\_profile\_name) | Name of the instance profile |
| <a name="output_karpenter_instance_profile_unique"></a> [karpenter\_instance\_profile\_unique](#output\_karpenter\_instance\_profile\_unique) | Stable and unique string identifying the IAM instance profile |
| <a name="output_karpenter_irsa_arn"></a> [karpenter\_irsa\_arn](#output\_karpenter\_irsa\_arn) | The Amazon Resource Name (ARN) specifying the IAM role for service accounts |
| <a name="output_karpenter_irsa_name"></a> [karpenter\_irsa\_name](#output\_karpenter\_irsa\_name) | The name of the IAM role for service accounts |
| <a name="output_karpenter_irsa_unique_id"></a> [karpenter\_irsa\_unique\_id](#output\_karpenter\_irsa\_unique\_id) | Stable and unique string identifying the IAM role for service accounts |
| <a name="output_karpenter_node_iam_role_arn"></a> [karpenter\_node\_iam\_role\_arn](#output\_karpenter\_node\_iam\_role\_arn) | The Amazon Resource Name (ARN) specifying the IAM role |
| <a name="output_karpenter_node_iam_role_name"></a> [karpenter\_node\_iam\_role\_name](#output\_karpenter\_node\_iam\_role\_name) | The name of the IAM role |
| <a name="output_karpenter_node_iam_role_unique_id"></a> [karpenter\_node\_iam\_role\_unique\_id](#output\_karpenter\_node\_iam\_role\_unique\_id) | Stable and unique string identifying the IAM role |
| <a name="output_karpenter_queue_arn"></a> [karpenter\_queue\_arn](#output\_karpenter\_queue\_arn) | The ARN of the SQS queue |
| <a name="output_karpenter_queue_name"></a> [karpenter\_queue\_name](#output\_karpenter\_queue\_name) | The name of the created Amazon SQS queue |
| <a name="output_karpenter_queue_url"></a> [karpenter\_queue\_url](#output\_karpenter\_queue\_url) | The URL for the created Amazon SQS queue |
| <a name="output_karpenter_role_arn"></a> [karpenter\_role\_arn](#output\_karpenter\_role\_arn) | The Amazon Resource Name (ARN) specifying the IAM role |
| <a name="output_karpenter_role_name"></a> [karpenter\_role\_name](#output\_karpenter\_role\_name) | The name of the IAM role |
| <a name="output_karpenter_role_unique_id"></a> [karpenter\_role\_unique\_id](#output\_karpenter\_role\_unique\_id) | Stable and unique string identifying the IAM role |
| <a name="output_node_security_group_arn"></a> [node\_security\_group\_arn](#output\_node\_security\_group\_arn) | Amazon Resource Name (ARN) of the node shared security group |
| <a name="output_node_security_group_id"></a> [node\_security\_group\_id](#output\_node\_security\_group\_id) | ID of the node shared security group |
| <a name="output_oidc_provider"></a> [oidc\_provider](#output\_oidc\_provider) | The OpenID Connect identity provider (issuer URL without leading `https://`) |

View File

@@ -7,18 +7,6 @@ provider "aws" {
alias = "virginia"
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
@@ -54,7 +42,7 @@ data "aws_ecrpublic_authorization_token" "token" {
locals {
name = "ex-${replace(basename(path.cwd), "_", "-")}"
cluster_version = "1.27"
cluster_version = "1.28"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
@@ -78,9 +66,11 @@ module "eks" {
cluster_version = local.cluster_version
cluster_endpoint_public_access = true
# Gives Terraform identity admin access to cluster which will
# allow deploying resources (Karpenter) into the cluster
enable_cluster_creator_admin_permissions = true
cluster_addons = {
kube-proxy = {}
vpc-cni = {}
coredns = {
configuration_values = jsonencode({
computeType = "Fargate"
@@ -106,6 +96,8 @@ module "eks" {
}
})
}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
@@ -116,19 +108,6 @@ module "eks" {
create_cluster_security_group = false
create_node_security_group = false
manage_aws_auth_configmap = true
aws_auth_roles = [
# We need to add in the Karpenter node IAM role for nodes launched by Karpenter
{
rolearn = module.karpenter.role_arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
"system:nodes",
]
},
]
fargate_profiles = {
karpenter = {
selectors = [
@@ -157,41 +136,51 @@ module "eks" {
module "karpenter" {
source = "../../modules/karpenter"
cluster_name = module.eks.cluster_name
cluster_name = module.eks.cluster_name
# EKS Fargate currently does not support Pod Identity
enable_irsa = true
irsa_oidc_provider_arn = module.eks.oidc_provider_arn
# In v0.32.0/v1beta1, Karpenter now creates the IAM instance profile
# so we disable the Terraform creation and add the necessary permissions for Karpenter IRSA
enable_karpenter_instance_profile_creation = true
# Used to attach additional IAM policies to the Karpenter node IAM role
iam_role_additional_policies = {
node_iam_role_additional_policies = {
AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
tags = local.tags
}
resource "helm_release" "karpenter" {
namespace = "karpenter"
create_namespace = true
module "karpenter_disabled" {
source = "../../modules/karpenter"
create = false
}
################################################################################
# Karpenter Helm chart & manifests
# Not required; just to demonstrate functionality of the sub-module
################################################################################
resource "helm_release" "karpenter" {
namespace = "karpenter"
create_namespace = true
name = "karpenter"
repository = "oci://public.ecr.aws/karpenter"
repository_username = data.aws_ecrpublic_authorization_token.token.user_name
repository_password = data.aws_ecrpublic_authorization_token.token.password
chart = "karpenter"
version = "v0.32.1"
version = "v0.33.1"
wait = false
values = [
<<-EOT
settings:
clusterName: ${module.eks.cluster_name}
clusterEndpoint: ${module.eks.cluster_endpoint}
interruptionQueueName: ${module.karpenter.queue_name}
interruptionQueue: ${module.karpenter.queue_name}
serviceAccount:
annotations:
eks.amazonaws.com/role-arn: ${module.karpenter.irsa_arn}
eks.amazonaws.com/role-arn: ${module.karpenter.iam_role_arn}
EOT
]
}
@@ -204,7 +193,7 @@ resource "kubectl_manifest" "karpenter_node_class" {
name: default
spec:
amiFamily: AL2
role: ${module.karpenter.role_name}
role: ${module.karpenter.node_iam_role_name}
subnetSelectorTerms:
- tags:
karpenter.sh/discovery: ${module.eks.cluster_name}

View File

@@ -47,6 +47,15 @@ output "cluster_primary_security_group_id" {
value = module.eks.cluster_primary_security_group_id
}
################################################################################
# Access Entry
################################################################################
output "access_entries" {
description = "Map of access entries created and their attributes"
value = module.eks.access_entries
}
################################################################################
# Security Group
################################################################################
@@ -183,31 +192,22 @@ output "self_managed_node_groups_autoscaling_group_names" {
}
################################################################################
# Additional
# Karpenter controller IAM Role
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
output "karpenter_iam_role_name" {
description = "The name of the controller IAM role"
value = module.karpenter.iam_role_name
}
################################################################################
# IAM Role for Service Account (IRSA)
################################################################################
output "karpenter_irsa_name" {
description = "The name of the IAM role for service accounts"
value = module.karpenter.irsa_name
output "karpenter_iam_role_arn" {
description = "The Amazon Resource Name (ARN) specifying the controller IAM role"
value = module.karpenter.iam_role_arn
}
output "karpenter_irsa_arn" {
description = "The Amazon Resource Name (ARN) specifying the IAM role for service accounts"
value = module.karpenter.irsa_arn
}
output "karpenter_irsa_unique_id" {
description = "Stable and unique string identifying the IAM role for service accounts"
value = module.karpenter.irsa_unique_id
output "karpenter_iam_role_unique_id" {
description = "Stable and unique string identifying the controller IAM role"
value = module.karpenter.iam_role_unique_id
}
################################################################################
@@ -242,19 +242,19 @@ output "karpenter_event_rules" {
# Node IAM Role
################################################################################
output "karpenter_role_name" {
output "karpenter_node_iam_role_name" {
description = "The name of the IAM role"
value = module.karpenter.role_name
value = module.karpenter.node_iam_role_name
}
output "karpenter_role_arn" {
output "karpenter_node_iam_role_arn" {
description = "The Amazon Resource Name (ARN) specifying the IAM role"
value = module.karpenter.role_arn
value = module.karpenter.node_iam_role_arn
}
output "karpenter_role_unique_id" {
output "karpenter_node_iam_role_unique_id" {
description = "Stable and unique string identifying the IAM role"
value = module.karpenter.role_unique_id
value = module.karpenter.node_iam_role_unique_id
}
################################################################################

View File

@@ -1,26 +1,18 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
version = ">= 5.34"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.7"
}
kubectl = {
source = "gavinbunney/kubectl"
version = ">= 1.14"
}
null = {
source = "hashicorp/null"
version = ">= 3.0"
source = "alekc/kubectl"
version = ">= 2.0"
}
}
}

View File

@@ -36,21 +36,28 @@ $ terraform apply
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
```bash
# Necessary to avoid removing Terraform's permissions too soon, before it's finished
# cleaning up the resources it deployed inside the cluster
terraform state rm 'module.eks.aws_eks_access_entry.this["cluster_creator_admin"]' || true
terraform destroy
```
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.34 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.20 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
| <a name="provider_kubernetes"></a> [kubernetes](#provider\_kubernetes) | >= 2.10 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.34 |
| <a name="provider_kubernetes"></a> [kubernetes](#provider\_kubernetes) | >= 2.20 |
## Modules
@@ -80,7 +87,7 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_access_entries"></a> [access\_entries](#output\_access\_entries) | Map of access entries created and their attributes |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |


@@ -2,21 +2,9 @@ provider "aws" {
region = var.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# Note: `cluster_id` is used with Outposts for auth
args = ["eks", "get-token", "--cluster-id", module.eks.cluster_id, "--region", var.region]
}
}
locals {
name = "ex-${basename(path.cwd)}"
cluster_version = "1.27" # Required by EKS on Outposts
cluster_version = "1.29"
outpost_arn = element(tolist(data.aws_outposts_outposts.this.arns), 0)
instance_type = element(tolist(data.aws_outposts_outpost_instance_types.this.instance_types), 0)
@@ -41,6 +29,10 @@ module "eks" {
cluster_endpoint_public_access = false # Not available on Outpost
cluster_endpoint_private_access = true
# Gives Terraform identity admin access to cluster which will
# allow deploying resources (EBS storage class) into the cluster
enable_cluster_creator_admin_permissions = true
vpc_id = data.aws_vpc.this.id
subnet_ids = data.aws_subnets.this.ids
@@ -49,9 +41,6 @@ module "eks" {
outpost_arns = [local.outpost_arn]
}
# Local clusters will automatically add the node group IAM role to the aws-auth configmap
manage_aws_auth_configmap = true
# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress_vpc_https = {


@@ -47,6 +47,15 @@ output "cluster_primary_security_group_id" {
value = module.eks.cluster_primary_security_group_id
}
################################################################################
# Access Entry
################################################################################
output "access_entries" {
description = "Map of access entries created and their attributes"
value = module.eks.access_entries
}
################################################################################
# KMS Key
################################################################################
@@ -200,12 +209,3 @@ output "self_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by self-managed node groups"
value = module.eks.self_managed_node_groups_autoscaling_group_names
}
################################################################################
# Additional
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
}


@@ -23,7 +23,7 @@ locals {
module "ssm_bastion_ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 4.2"
version = "~> 5.5"
name = "${local.name}-bastion"
@@ -56,7 +56,7 @@ module "ssm_bastion_ec2" {
rm terraform_${local.terraform_version}_linux_amd64.zip 2> /dev/null
# Install kubectl
curl -LO https://dl.k8s.io/release/v1.27.0/bin/linux/amd64/kubectl
curl -LO https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Remove default awscli which is v1 - we want latest v2
@@ -80,7 +80,7 @@ module "ssm_bastion_ec2" {
module "bastion_security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 4.0"
version = "~> 5.0"
name = "${local.name}-bastion"
description = "Security group to allow provisioning ${local.name} EKS local cluster on Outposts"


@@ -1,10 +1,10 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
version = ">= 5.34"
}
}
}


@@ -1,14 +1,14 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
version = ">= 5.34"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
version = ">= 2.20"
}
}
}


@@ -25,24 +25,25 @@ Note that this example may create resources which cost money. Run `terraform des
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.57 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 5.34 |
## Providers
| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.57 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 5.34 |
## Modules
| Name | Source | Version |
|------|--------|---------|
| <a name="module_ebs_kms_key"></a> [ebs\_kms\_key](#module\_ebs\_kms\_key) | terraform-aws-modules/kms/aws | ~> 1.5 |
| <a name="module_disabled_self_managed_node_group"></a> [disabled\_self\_managed\_node\_group](#module\_disabled\_self\_managed\_node\_group) | ../../modules/self-managed-node-group | n/a |
| <a name="module_ebs_kms_key"></a> [ebs\_kms\_key](#module\_ebs\_kms\_key) | terraform-aws-modules/kms/aws | ~> 2.0 |
| <a name="module_eks"></a> [eks](#module\_eks) | ../.. | n/a |
| <a name="module_key_pair"></a> [key\_pair](#module\_key\_pair) | terraform-aws-modules/key-pair/aws | ~> 2.0 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 4.0 |
| <a name="module_kms"></a> [kms](#module\_kms) | terraform-aws-modules/kms/aws | ~> 2.1 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 5.0 |
## Resources
@@ -62,7 +63,7 @@ No inputs.
| Name | Description |
|------|-------------|
| <a name="output_aws_auth_configmap_yaml"></a> [aws\_auth\_configmap\_yaml](#output\_aws\_auth\_configmap\_yaml) | Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles |
| <a name="output_access_entries"></a> [access\_entries](#output\_access\_entries) | Map of access entries created and their attributes |
| <a name="output_cloudwatch_log_group_arn"></a> [cloudwatch\_log\_group\_arn](#output\_cloudwatch\_log\_group\_arn) | Arn of cloudwatch log group created |
| <a name="output_cloudwatch_log_group_name"></a> [cloudwatch\_log\_group\_name](#output\_cloudwatch\_log\_group\_name) | Name of cloudwatch log group created |
| <a name="output_cluster_addons"></a> [cluster\_addons](#output\_cluster\_addons) | Map of attribute maps for all EKS cluster addons enabled |


@@ -2,24 +2,12 @@ provider "aws" {
region = local.region
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
}
}
data "aws_caller_identity" "current" {}
data "aws_availability_zones" "available" {}
locals {
name = "ex-${replace(basename(path.cwd), "_", "-")}"
cluster_version = "1.27"
cluster_version = "1.29"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
@@ -59,9 +47,12 @@ module "eks" {
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.intra_subnets
# Self managed node groups will not automatically create the aws-auth configmap so we need to
create_aws_auth_configmap = true
manage_aws_auth_configmap = true
# External encryption key
create_kms_key = false
cluster_encryption_config = {
resources = ["secrets"]
provider_key_arn = module.kms.key_arn
}
self_managed_node_group_defaults = {
# enable discovery of autoscaling groups by cluster-autoscaler
@@ -141,36 +132,6 @@ module "eks" {
}
}
efa = {
min_size = 1
max_size = 2
desired_size = 1
# aws ec2 describe-instance-types --region eu-west-1 --filters Name=network-info.efa-supported,Values=true --query "InstanceTypes[*].[InstanceType]" --output text | sort
instance_type = "c5n.9xlarge"
post_bootstrap_user_data = <<-EOT
# Install EFA
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
tar -xf aws-efa-installer-latest.tar.gz && cd aws-efa-installer
./efa_installer.sh -y --minimal
fi_info -p efa -t FI_EP_RDM
# Disable ptrace
sysctl -w kernel.yama.ptrace_scope=0
EOT
network_interfaces = [
{
description = "EFA interface example"
delete_on_termination = true
device_index = 0
associate_public_ip_address = false
interface_type = "efa"
}
]
}
# Complete
complete = {
name = "complete-self-mng"
@@ -287,12 +248,6 @@ module "eks" {
additional = aws_iam_policy.additional.arn
}
timeouts = {
create = "80m"
update = "80m"
delete = "80m"
}
tags = {
ExtraTag = "Self managed node group complete example"
}
@@ -302,13 +257,19 @@ module "eks" {
tags = local.tags
}
module "disabled_self_managed_node_group" {
source = "../../modules/self-managed-node-group"
create = false
}
################################################################################
# Supporting Resources
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 4.0"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
@@ -364,7 +325,7 @@ module "key_pair" {
module "ebs_kms_key" {
source = "terraform-aws-modules/kms/aws"
version = "~> 1.5"
version = "~> 2.0"
description = "Customer managed key to encrypt EKS managed node group volumes"
@@ -386,6 +347,18 @@ module "ebs_kms_key" {
tags = local.tags
}
module "kms" {
source = "terraform-aws-modules/kms/aws"
version = "~> 2.1"
aliases = ["eks/${local.name}"]
description = "${local.name} cluster encryption key"
enable_default_policy = true
key_owners = [data.aws_caller_identity.current.arn]
tags = local.tags
}
resource "aws_iam_policy" "additional" {
name = "${local.name}-additional"
description = "Example usage of node additional policy"


@@ -47,6 +47,15 @@ output "cluster_primary_security_group_id" {
value = module.eks.cluster_primary_security_group_id
}
################################################################################
# Access Entry
################################################################################
output "access_entries" {
description = "Map of access entries created and their attributes"
value = module.eks.access_entries
}
################################################################################
# KMS Key
################################################################################
@@ -200,12 +209,3 @@ output "self_managed_node_groups_autoscaling_group_names" {
description = "List of the autoscaling group names created by self-managed node groups"
value = module.eks.self_managed_node_groups_autoscaling_group_names
}
################################################################################
# Additional
################################################################################
output "aws_auth_configmap_yaml" {
description = "Formatted yaml output for base aws-auth configmap containing roles used in cluster node groups/fargate profiles"
value = module.eks.aws_auth_configmap_yaml
}


@@ -1,14 +1,10 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.57"
version = ">= 5.34"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
}
}


@@ -17,7 +17,7 @@ $ terraform apply
| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
## Providers
@@ -35,6 +35,10 @@ No providers.
| <a name="module_eks_mng_linux_custom_ami"></a> [eks\_mng\_linux\_custom\_ami](#module\_eks\_mng\_linux\_custom\_ami) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_linux_custom_template"></a> [eks\_mng\_linux\_custom\_template](#module\_eks\_mng\_linux\_custom\_template) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_linux_no_op"></a> [eks\_mng\_linux\_no\_op](#module\_eks\_mng\_linux\_no\_op) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_windows_additional"></a> [eks\_mng\_windows\_additional](#module\_eks\_mng\_windows\_additional) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_windows_custom_ami"></a> [eks\_mng\_windows\_custom\_ami](#module\_eks\_mng\_windows\_custom\_ami) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_windows_custom_template"></a> [eks\_mng\_windows\_custom\_template](#module\_eks\_mng\_windows\_custom\_template) | ../../modules/_user_data | n/a |
| <a name="module_eks_mng_windows_no_op"></a> [eks\_mng\_windows\_no\_op](#module\_eks\_mng\_windows\_no\_op) | ../../modules/_user_data | n/a |
| <a name="module_self_mng_bottlerocket_bootstrap"></a> [self\_mng\_bottlerocket\_bootstrap](#module\_self\_mng\_bottlerocket\_bootstrap) | ../../modules/_user_data | n/a |
| <a name="module_self_mng_bottlerocket_custom_template"></a> [self\_mng\_bottlerocket\_custom\_template](#module\_self\_mng\_bottlerocket\_custom\_template) | ../../modules/_user_data | n/a |
| <a name="module_self_mng_bottlerocket_no_op"></a> [self\_mng\_bottlerocket\_no\_op](#module\_self\_mng\_bottlerocket\_no\_op) | ../../modules/_user_data | n/a |
@@ -65,6 +69,10 @@ No inputs.
| <a name="output_eks_mng_linux_custom_ami"></a> [eks\_mng\_linux\_custom\_ami](#output\_eks\_mng\_linux\_custom\_ami) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_linux_custom_template"></a> [eks\_mng\_linux\_custom\_template](#output\_eks\_mng\_linux\_custom\_template) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_linux_no_op"></a> [eks\_mng\_linux\_no\_op](#output\_eks\_mng\_linux\_no\_op) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_windows_additional"></a> [eks\_mng\_windows\_additional](#output\_eks\_mng\_windows\_additional) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_windows_custom_ami"></a> [eks\_mng\_windows\_custom\_ami](#output\_eks\_mng\_windows\_custom\_ami) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_windows_custom_template"></a> [eks\_mng\_windows\_custom\_template](#output\_eks\_mng\_windows\_custom\_template) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_eks_mng_windows_no_op"></a> [eks\_mng\_windows\_no\_op](#output\_eks\_mng\_windows\_no\_op) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_self_mng_bottlerocket_bootstrap"></a> [self\_mng\_bottlerocket\_bootstrap](#output\_self\_mng\_bottlerocket\_bootstrap) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_self_mng_bottlerocket_custom_template"></a> [self\_mng\_bottlerocket\_custom\_template](#output\_self\_mng\_bottlerocket\_custom\_template) | Base64 decoded user data rendered for the provided inputs |
| <a name="output_self_mng_bottlerocket_no_op"></a> [self\_mng\_bottlerocket\_no\_op](#output\_self\_mng\_bottlerocket\_no\_op) | Base64 decoded user data rendered for the provided inputs |


@@ -121,6 +121,69 @@ module "eks_mng_bottlerocket_custom_template" {
EOT
}
# EKS managed node group - windows
module "eks_mng_windows_no_op" {
source = "../../modules/_user_data"
platform = "windows"
}
module "eks_mng_windows_additional" {
source = "../../modules/_user_data"
platform = "windows"
pre_bootstrap_user_data = <<-EOT
[string]$Something = 'IDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
}
module "eks_mng_windows_custom_ami" {
source = "../../modules/_user_data"
platform = "windows"
cluster_name = local.name
cluster_endpoint = local.cluster_endpoint
cluster_auth_base64 = local.cluster_auth_base64
enable_bootstrap_user_data = true
pre_bootstrap_user_data = <<-EOT
[string]$Something = 'IDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
# I don't know if this is the right way on Windows, but it's just a string check here anyway
bootstrap_extra_args = "-KubeletExtraArgs --node-labels=node.kubernetes.io/lifecycle=spot"
post_bootstrap_user_data = <<-EOT
[string]$Something = 'IStillDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
}
module "eks_mng_windows_custom_template" {
source = "../../modules/_user_data"
platform = "windows"
cluster_name = local.name
cluster_endpoint = local.cluster_endpoint
cluster_auth_base64 = local.cluster_auth_base64
enable_bootstrap_user_data = true
user_data_template_path = "${path.module}/templates/windows_custom.tpl"
pre_bootstrap_user_data = <<-EOT
[string]$Something = 'IDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
# I don't know if this is the right way on Windows, but it's just a string check here anyway
bootstrap_extra_args = "-KubeletExtraArgs --node-labels=node.kubernetes.io/lifecycle=spot"
post_bootstrap_user_data = <<-EOT
[string]$Something = 'IStillDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
}
# Self managed node group - linux
module "self_mng_linux_no_op" {
source = "../../modules/_user_data"
@@ -247,7 +310,7 @@ module "self_mng_windows_bootstrap" {
pre_bootstrap_user_data = <<-EOT
[string]$Something = 'IDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
# I don't know if this is the right way on WindowsOS, but its just a string check here anyways
# I don't know if this is the right way on Windows, but it's just a string check here anyway
bootstrap_extra_args = "-KubeletExtraArgs --node-labels=node.kubernetes.io/lifecycle=spot"
post_bootstrap_user_data = <<-EOT
@@ -272,7 +335,7 @@ module "self_mng_windows_custom_template" {
pre_bootstrap_user_data = <<-EOT
[string]$Something = 'IDoNotKnowAnyPowerShell ¯\_(ツ)_/¯'
EOT
# I don't know if this is the right way on WindowsOS, but its just a string check here anyways
# I don't know if this is the right way on Windows, but it's just a string check here anyway
bootstrap_extra_args = "-KubeletExtraArgs --node-labels=node.kubernetes.io/lifecycle=spot"
post_bootstrap_user_data = <<-EOT


@@ -40,6 +40,27 @@ output "eks_mng_bottlerocket_custom_template" {
value = base64decode(module.eks_mng_bottlerocket_custom_template.user_data)
}
# EKS managed node group - windows
output "eks_mng_windows_no_op" {
description = "Base64 decoded user data rendered for the provided inputs"
value = base64decode(module.eks_mng_windows_no_op.user_data)
}
output "eks_mng_windows_additional" {
description = "Base64 decoded user data rendered for the provided inputs"
value = base64decode(module.eks_mng_windows_additional.user_data)
}
output "eks_mng_windows_custom_ami" {
description = "Base64 decoded user data rendered for the provided inputs"
value = base64decode(module.eks_mng_windows_custom_ami.user_data)
}
output "eks_mng_windows_custom_template" {
description = "Base64 decoded user data rendered for the provided inputs"
value = base64decode(module.eks_mng_windows_custom_template.user_data)
}
# Self managed node group - linux
output "self_mng_linux_no_op" {
description = "Base64 decoded user data rendered for the provided inputs"


@@ -1,3 +1,3 @@
terraform {
required_version = ">= 1.0"
required_version = ">= 1.3"
}