docs: Move examples that are more like test cases to the new tests/ directory; add better example configurations (#3069)

* chore: Move examples that are more like test cases to the new `tests/` directory

* chore: Stash

* feat: Add better examples for EKS managed node groups

* chore: Add better examples for self-managed node groups

* chore: Update docs and correct `nodegroup` to `node group`
This commit is contained in:
Bryant Biggs
2024-06-13 10:51:40 -04:00
committed by GitHub
parent 73b752a1e3
commit 323fb759d7
85 changed files with 509 additions and 109 deletions


@@ -1,6 +1,6 @@
 # AWS EKS Terraform module
-Terraform module which creates AWS EKS (Kubernetes) resources
+Terraform module which creates Amazon EKS (Kubernetes) resources
 [![SWUbanner](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner2-direct.svg)](https://github.com/vshymanskyy/StandWithUkraine/blob/main/docs/README.md)
@@ -23,13 +23,6 @@ Please note that we strive to provide a comprehensive suite of documentation for
 - [AWS EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
 - [Kubernetes Documentation](https://kubernetes.io/docs/home/)
-#### Reference Architecture
-The examples provided under `examples/` provide a comprehensive suite of configurations that demonstrate nearly all of the possible different configurations and settings that can be used with this module. However, these examples are not representative of clusters that you would normally find in use for production workloads. For reference architectures that utilize this module, please see the following:
-- [EKS Reference Architecture](https://github.com/clowdhaus/eks-reference-architecture)
-- [EKS Blueprints](https://github.com/aws-ia/terraform-aws-eks-blueprints)
 ## Usage
 ```hcl
@@ -38,20 +31,15 @@ module "eks" {
   version = "~> 20.0"
   cluster_name    = "my-cluster"
-  cluster_version = "1.29"
+  cluster_version = "1.30"
   cluster_endpoint_public_access = true
   cluster_addons = {
-    coredns = {
-      most_recent = true
-    }
-    kube-proxy = {
-      most_recent = true
-    }
-    vpc-cni = {
-      most_recent = true
-    }
+    coredns                = {}
+    eks-pod-identity-agent = {}
+    kube-proxy             = {}
+    vpc-cni                = {}
   }
   vpc_id = "vpc-1234556abcdef"
@@ -65,12 +53,13 @@ module "eks" {
   eks_managed_node_groups = {
     example = {
-      min_size       = 1
-      max_size       = 10
-      desired_size   = 1
-      instance_types = ["t3.large"]
-      capacity_type  = "SPOT"
+      # Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
+      ami_type       = "AL2023_x86_64_STANDARD"
+      instance_types = ["m5.xlarge"]
+      min_size       = 2
+      max_size       = 10
+      desired_size   = 2
     }
   }
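In the updated snippet, the empty maps (`coredns = {}`) install each add-on's default version. A hedged sketch of the alternative, floating or pinning versions explicitly (`most_recent` appears in the pre-change snippet; the pinned version string below is hypothetical, real values can be listed with `aws eks describe-addon-versions`):

```hcl
cluster_addons = {
  coredns = {
    most_recent = true # float to the latest version compatible with the cluster
  }
  vpc-cni = {
    # hypothetical pinned version - query the EKS API for valid values
    addon_version = "v1.18.1-eksbuild.1"
  }
}
```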
@@ -169,12 +158,10 @@ module "eks" {
 ## Examples
-- [EKS Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks_managed_node_group): EKS Cluster using EKS managed node groups
-- [Fargate Profile](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/fargate_profile): EKS cluster using [Fargate Profiles](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html)
+- [EKS Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/eks-managed-node-group): EKS Cluster using EKS managed node groups
 - [Karpenter](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/karpenter): EKS Cluster with [Karpenter](https://karpenter.sh/) provisioned for intelligent data plane management
 - [Outposts](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/outposts): EKS local cluster provisioned on [AWS Outposts](https://docs.aws.amazon.com/eks/latest/userguide/eks-outposts.html)
-- [Self Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self_managed_node_group): EKS Cluster using self-managed node groups
-- [User Data](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/user_data): Various supported methods of providing necessary bootstrap scripts and configuration settings via user data
+- [Self Managed Node Group](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/self-managed-node-group): EKS Cluster using self-managed node groups
 ## Contributing


@@ -1,8 +1,5 @@
 # Examples
-Please note - the examples provided serve two primary means:
-1. Show users working examples of the various ways in which the module can be configured and features supported
-2. A means of testing/validating module changes
+The examples provided demonstrate different cluster configurations that users can create with the modules provided.
 Please do not mistake the examples provided as "best practices". It is up to users to consult the AWS service documentation for best practices, usage recommendations, etc.


@@ -0,0 +1,23 @@
# EKS Managed Node Group Examples
Configuration in this directory creates Amazon EKS clusters with EKS Managed Node Groups demonstrating different configurations:
- `eks-al2.tf` demonstrates an EKS cluster using an EKS managed node group that utilizes the EKS Amazon Linux 2 optimized AMI
- `eks-al2023.tf` demonstrates an EKS cluster using an EKS managed node group that utilizes the EKS Amazon Linux 2023 optimized AMI
- `eks-bottlerocket.tf` demonstrates an EKS cluster using an EKS managed node group that utilizes the Bottlerocket EKS optimized AMI
See the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for additional details on Amazon EKS managed node groups.
Each cluster configuration example is contained in its own file and is independent of the other cluster configurations.
## Usage
To provision the provided configurations you need to execute:
```bash
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -0,0 +1,34 @@
module "eks_al2" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-al2"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
example = {
# Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
ami_type = "AL2_x86_64"
instance_types = ["m6i.large"]
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
}
}
tags = local.tags
}
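The `desired_size` comment above refers to a standard Terraform pattern: after initial creation, the attribute is no longer reconciled so an autoscaler can own the live node count. A minimal sketch of that pattern on the raw resource (assumed to approximate what the module does internally; all values illustrative):

```hcl
resource "aws_eks_node_group" "sketch" {
  cluster_name  = "example"                                 # illustrative
  node_role_arn = "arn:aws:iam::111122223333:role/example"  # illustrative
  subnet_ids    = ["subnet-abc123"]                         # illustrative

  scaling_config {
    min_size     = 2
    max_size     = 5
    desired_size = 2 # honored at creation time only
  }

  lifecycle {
    # Ignore drift on desired_size so Terraform does not fight
    # the cluster autoscaler on subsequent applies
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```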


@@ -0,0 +1,52 @@
module "eks_al2023" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-al2023"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
example = {
# Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
instance_types = ["m6i.large"]
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
# This is not required - demonstrates how to pass additional configuration to nodeadm
# Ref https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/
cloudinit_pre_nodeadm = [
{
content_type = "application/node.eks.aws"
content = <<-EOT
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
kubelet:
config:
shutdownGracePeriod: 30s
featureGates:
DisableKubeletCloudCredentialProviders: true
EOT
}
]
}
}
tags = local.tags
}


@@ -0,0 +1,52 @@
module "eks_bottlerocket" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-bottlerocket"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
example = {
ami_type = "BOTTLEROCKET_x86_64"
instance_types = ["m6i.large"]
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
# This is not required - demonstrates how to pass additional configuration
# Ref https://bottlerocket.dev/en/os/1.19.x/api/settings/
bootstrap_extra_args = <<-EOT
      # The admin host container provides SSH access and runs with "superpowers".
      # It is disabled by default, but can be enabled explicitly.
[settings.host-containers.admin]
enabled = false
# The control host container provides out-of-band access via SSM.
# It is enabled by default, and can be disabled if you do not expect to use SSM.
# This could leave you with no way to access the API and change settings on an existing node!
[settings.host-containers.control]
enabled = true
# extra args added
[settings.kernel]
lockdown = "integrity"
EOT
}
}
tags = local.tags
}


@@ -0,0 +1,49 @@
provider "aws" {
region = local.region
}
data "aws_availability_zones" "available" {}
locals {
name = "ex-eks-mng"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
################################################################################
# VPC
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}
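The `cidrsubnet` expressions above carve child networks out of the /16 by adding `newbits` to the prefix length and selecting the `netnum`-th block. Spelling out the values they produce for the three AZs:

```hcl
locals {
  # cidrsubnet("10.0.0.0/16", 4, k) yields /20 blocks:
  #   k = 0 -> 10.0.0.0/20, k = 1 -> 10.0.16.0/20, k = 2 -> 10.0.32.0/20
  demo_private = [for k in range(3) : cidrsubnet("10.0.0.0/16", 4, k)]

  # cidrsubnet("10.0.0.0/16", 8, k + 48) yields /24 blocks starting at 10.0.48.0/24
  demo_public = [for k in range(3) : cidrsubnet("10.0.0.0/16", 8, k + 48)]

  # k + 52 starts the intra subnets at 10.0.52.0/24
  demo_intra = [for k in range(3) : cidrsubnet("10.0.0.0/16", 8, k + 52)]
}
```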


@@ -4,12 +4,12 @@ Configuration in this directory creates an AWS EKS cluster with [Karpenter](https://karpenter.sh/)
 ## Usage
-To run this example you need to execute:
+To provision the provided configurations you need to execute:
 ```bash
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 Once the cluster is up and running, you can check that Karpenter is functioning as intended with the following command:
@@ -78,7 +78,7 @@ kubectl delete node -l karpenter.sh/provisioner-name=default
 2. Remove the resources created by Terraform
 ```bash
-terraform destroy
+terraform destroy --auto-approve
 ```
 Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -62,7 +62,7 @@ module "eks" {
 source = "../.."
 cluster_name    = local.name
-cluster_version = "1.29"
+cluster_version = "1.30"
 # Gives Terraform identity admin access to cluster which will
 # allow deploying resources (Karpenter) into the cluster
@@ -82,6 +82,7 @@ module "eks" {
 eks_managed_node_groups = {
   karpenter = {
+    ami_type       = "AL2023_x86_64_STANDARD"
     instance_types = ["m5.large"]
     min_size = 2
@@ -146,7 +147,7 @@ resource "helm_release" "karpenter" {
 repository_username = data.aws_ecrpublic_authorization_token.token.user_name
 repository_password = data.aws_ecrpublic_authorization_token.token.password
 chart               = "karpenter"
-version             = "0.36.1"
+version             = "0.37.0"
 wait                = false
 values = [
@@ -168,7 +169,7 @@ resource "kubectl_manifest" "karpenter_node_class" {
 metadata:
   name: default
 spec:
-  amiFamily: AL2
+  amiFamily: AL2023
   role: ${module.karpenter.node_iam_role_name}
   subnetSelectorTerms:
     - tags:


@@ -1,4 +1,4 @@
-# EKS on Outposts
+# EKS on Outposts Example
 Configuration in this directory creates an AWS EKS local cluster on AWS Outposts
@@ -16,7 +16,7 @@ To run this example you need to:
 $ cd prerequisites
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 2. If provisioning using the remote host deployed in step 1, connect to the remote host using SSM. Note, you will need to have the [SSM plugin for the AWS CLI installed](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html). You can use the output generated by step 1 to connect:
@@ -31,13 +31,13 @@ $ aws ssm start-session --region <REGION> --target <INSTANCE_ID>
 $ cd $HOME/terraform-aws-eks/examples/outposts
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
 ```bash
-terraform destroy
+terraform destroy --auto-approve
 ```
 <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->


@@ -4,7 +4,7 @@ provider "aws" {
 locals {
   name            = "ex-${basename(path.cwd)}"
-  cluster_version = "1.29"
+  cluster_version = "1.30"
   outpost_arn   = element(tolist(data.aws_outposts_outposts.this.arns), 0)
   instance_type = element(tolist(data.aws_outposts_outpost_instance_types.this.instance_types), 0)


@@ -56,7 +56,7 @@ module "ssm_bastion_ec2" {
 rm terraform_${local.terraform_version}_linux_amd64.zip 2> /dev/null
 # Install kubectl
-curl -LO https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl
+curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl
 install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
 # Remove default awscli which is v1 - we want latest v2


@@ -0,0 +1,21 @@
# Self-managed Node Group Examples
Configuration in this directory creates Amazon EKS clusters with self-managed node groups demonstrating different configurations:
- `eks-al2.tf` demonstrates an EKS cluster using a self-managed node group that utilizes the EKS Amazon Linux 2 optimized AMI
- `eks-al2023.tf` demonstrates an EKS cluster using a self-managed node group that utilizes the EKS Amazon Linux 2023 optimized AMI
- `eks-bottlerocket.tf` demonstrates an EKS cluster using a self-managed node group that utilizes the Bottlerocket EKS optimized AMI
Each cluster configuration example is contained in its own file and is independent of the other cluster configurations.
## Usage
To provision the provided configurations you need to execute:
```bash
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -0,0 +1,33 @@
module "eks_al2" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-al2"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
self_managed_node_groups = {
example = {
ami_type = "AL2_x86_64"
instance_type = "m6i.large"
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
}
}
tags = local.tags
}


@@ -0,0 +1,52 @@
module "eks_al2023" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-al2023"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
self_managed_node_groups = {
example = {
ami_type = "AL2023_x86_64_STANDARD"
instance_type = "m6i.large"
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
# This is not required - demonstrates how to pass additional configuration to nodeadm
# Ref https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/
cloudinit_pre_nodeadm = [
{
content_type = "application/node.eks.aws"
content = <<-EOT
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
kubelet:
config:
shutdownGracePeriod: 30s
featureGates:
DisableKubeletCloudCredentialProviders: true
EOT
}
]
}
}
tags = local.tags
}


@@ -0,0 +1,52 @@
module "eks_bottlerocket" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "${local.name}-bottlerocket"
cluster_version = "1.30"
# EKS Addons
cluster_addons = {
coredns = {}
eks-pod-identity-agent = {}
kube-proxy = {}
vpc-cni = {}
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
self_managed_node_groups = {
example = {
ami_type = "BOTTLEROCKET_x86_64"
instance_type = "m6i.large"
min_size = 2
max_size = 5
# This value is ignored after the initial creation
# https://github.com/bryantbiggs/eks-desired-size-hack
desired_size = 2
# This is not required - demonstrates how to pass additional configuration
# Ref https://bottlerocket.dev/en/os/1.19.x/api/settings/
bootstrap_extra_args = <<-EOT
      # The admin host container provides SSH access and runs with "superpowers".
      # It is disabled by default, but can be enabled explicitly.
[settings.host-containers.admin]
enabled = false
# The control host container provides out-of-band access via SSM.
# It is enabled by default, and can be disabled if you do not expect to use SSM.
# This could leave you with no way to access the API and change settings on an existing node!
[settings.host-containers.control]
enabled = true
# extra args added
[settings.kernel]
lockdown = "integrity"
EOT
}
}
tags = local.tags
}


@@ -0,0 +1,49 @@
provider "aws" {
region = local.region
}
data "aws_availability_zones" "available" {}
locals {
name = "ex-self-mng"
region = "eu-west-1"
vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}
################################################################################
# VPC
################################################################################
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}


@@ -1,25 +1,13 @@
-# EKS Managed Node Group Example
+# EKS Managed Node Group
-Configuration in this directory creates an AWS EKS cluster with various EKS Managed Node Groups demonstrating the various methods of configuring/customizing:
-- A default, "out of the box" EKS managed node group as supplied by AWS EKS
-- A default, "out of the box" Bottlerocket EKS managed node group as supplied by AWS EKS
-- A Bottlerocket EKS managed node group that supplies additional bootstrap settings
-- A Bottlerocket EKS managed node group that demonstrates many of the configuration/customizations offered by the `eks-managed-node-group` sub-module for the Bottlerocket OS
-- An EKS managed node group created from a launch template created outside of the module
-- An EKS managed node group that utilizes a custom AMI that is an EKS optimized AMI derivative
-- An EKS managed node group that demonstrates nearly all of the configurations/customizations offered by the `eks-managed-node-group` sub-module
-See the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for further details.
 ## Usage
-To run this example you need to execute:
+To provision the provided configurations you need to execute:
 ```bash
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -14,7 +14,7 @@ locals {
 azs = slice(data.aws_availability_zones.available.names, 0, 3)
 tags = {
-  Example    = local.name
+  Test       = local.name
   GithubRepo = "terraform-aws-eks"
   GithubOrg  = "terraform-aws-modules"
 }


@@ -1,15 +1,13 @@
-# AWS EKS Cluster with Fargate profiles
+# Fargate Profile
 Configuration in this directory creates an AWS EKS cluster utilizing Fargate profiles.
 ## Usage
-To run this example you need to execute:
+To provision the provided configurations you need to execute:
 ```bash
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -5,15 +5,15 @@ provider "aws" {
 data "aws_availability_zones" "available" {}
 locals {
-  name            = "ex-${replace(basename(path.cwd), "_", "-")}"
-  cluster_version = "1.29"
+  name            = "ex-${basename(path.cwd)}"
+  cluster_version = "1.30"
   region = "eu-west-1"
   vpc_cidr = "10.0.0.0/16"
   azs      = slice(data.aws_availability_zones.available.names, 0, 3)
   tags = {
-    Example    = local.name
+    Test       = local.name
     GithubRepo = "terraform-aws-eks"
     GithubOrg  = "terraform-aws-modules"
   }


@@ -0,0 +1,10 @@
terraform {
required_version = ">= 1.3.2"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.40"
}
}
}


@@ -1,21 +1,13 @@
-# Self Managed Node Groups Example
+# Self-managed Node Group
-Configuration in this directory creates an AWS EKS cluster with various Self Managed Node Groups (AutoScaling Groups) demonstrating the various methods of configuring/customizing:
-- A default, "out of the box" self managed node group as supplied by the `self-managed-node-group` sub-module
-- A Bottlerocket self managed node group that demonstrates many of the configuration/customizations offered by the `self-managed-node-group` sub-module for the Bottlerocket OS
-- A self managed node group that demonstrates nearly all of the configurations/customizations offered by the `self-managed-node-group` sub-module
-See the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) for further details.
 ## Usage
-To run this example you need to execute:
+To provision the provided configurations you need to execute:
 ```bash
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.


@@ -14,7 +14,7 @@ locals {
 azs = slice(data.aws_availability_zones.available.names, 0, 3)
 tags = {
-  Example    = local.name
+  Test       = local.name
   GithubRepo = "terraform-aws-eks"
   GithubOrg  = "terraform-aws-modules"
 }


@@ -0,0 +1,10 @@
terraform {
required_version = ">= 1.3.2"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.40"
}
}
}


@@ -4,12 +4,12 @@ Configuration in this directory render various user data outputs used for testing
 ## Usage
-To run this example you need to execute:
+To provision the provided configurations you need to execute:
 ```bash
 $ terraform init
 $ terraform plan
-$ terraform apply
+$ terraform apply --auto-approve
 ```
 <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
