# Karpenter Example

Configuration in this directory creates an AWS EKS cluster with Karpenter provisioned for managing compute resource scaling. In the example provided, Karpenter is provisioned on top of an EKS Managed Node Group.
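
The wiring between the root `eks` module, the `karpenter` sub-module, and the Karpenter Helm chart lives in `main.tf`. The snippet below is a minimal sketch of that shape; the inputs and chart values shown are abbreviated and illustrative (see `main.tf` for the complete, working configuration):

```hcl
# Sketch only - see main.tf for the full configuration
module "eks" {
  source = "../.."

  cluster_name    = "ex-karpenter"
  cluster_version = "1.29"
  # VPC, EKS managed node group, and add-on configuration omitted
}

# Creates the IAM roles, SQS interruption queue, and EventBridge rules
# that the Karpenter controller relies on
module "karpenter" {
  source = "../../modules/karpenter"

  cluster_name = module.eks.cluster_name
  # Remaining inputs omitted
}

# Installs the Karpenter controller chart from ECR Public. The aws.virginia
# provider alias exists because ECR Public authorization tokens can only be
# issued from us-east-1 (see data.aws_ecrpublic_authorization_token.token)
resource "helm_release" "karpenter" {
  namespace  = "kube-system"
  name       = "karpenter"
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  # Chart values (cluster name, interruption queue, IAM role, etc.) omitted
}
```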
## Usage

To run this example you need to execute:

```bash
$ terraform init
$ terraform plan
$ terraform apply
```
Once the cluster is up and running, you can check that Karpenter is functioning as intended with the following commands:
```bash
# First, make sure you have updated your local kubeconfig
aws eks --region eu-west-1 update-kubeconfig --name ex-karpenter

# Second, scale the example deployment
kubectl scale deployment inflate --replicas 5

# You can watch Karpenter's controller logs with
kubectl logs -f -n kube-system -l app.kubernetes.io/name=karpenter -c controller
```
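
The `inflate` deployment scaled above is created by Terraform via the `kubectl_manifest.karpenter_example_deployment` resource. The sketch below shows its general shape, with illustrative values (pause image tag, CPU request, and an assumed initial replica count of zero): a do-nothing workload whose CPU requests cannot all be satisfied by the managed node group, which forces Karpenter to launch additional nodes when it is scaled up.

```hcl
# Sketch of the example workload (illustrative values); the real manifest
# lives in main.tf as kubectl_manifest.karpenter_example_deployment
resource "kubectl_manifest" "karpenter_example_deployment" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inflate
    spec:
      replicas: 0  # assumed initial replica count
      selector:
        matchLabels:
          app: inflate
      template:
        metadata:
          labels:
            app: inflate
        spec:
          containers:
            # The pause container does no work; its CPU request is what
            # drives Karpenter to provision capacity when replicas scale up
            - name: inflate
              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
              resources:
                requests:
                  cpu: 1
  YAML
}
```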
Validate that the Amazon EKS add-on Pods are running in the Managed Node Group and that the `inflate` application Pods are running on nodes provisioned by Karpenter:
```bash
kubectl get nodes -L karpenter.sh/registered
```

```text
NAME                                        STATUS   ROLES    AGE    VERSION               REGISTERED
ip-10-0-16-155.eu-west-1.compute.internal   Ready    <none>   100s   v1.29.3-eks-ae9a62a   true
ip-10-0-3-23.eu-west-1.compute.internal     Ready    <none>   6m1s   v1.29.3-eks-ae9a62a
ip-10-0-41-2.eu-west-1.compute.internal     Ready    <none>   6m3s   v1.29.3-eks-ae9a62a
```
```bash
kubectl get pods -A -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
```

```text
NAME                           NODE
inflate-75d744d4c6-nqwz8       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-nrqnn       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-sp4dx       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-xqzd9       ip-10-0-16-155.eu-west-1.compute.internal
inflate-75d744d4c6-xr6p5       ip-10-0-16-155.eu-west-1.compute.internal
aws-node-mnn7r                 ip-10-0-3-23.eu-west-1.compute.internal
aws-node-rkmvm                 ip-10-0-16-155.eu-west-1.compute.internal
aws-node-s4slh                 ip-10-0-41-2.eu-west-1.compute.internal
coredns-68bd859788-7rcfq       ip-10-0-3-23.eu-west-1.compute.internal
coredns-68bd859788-l78hw       ip-10-0-41-2.eu-west-1.compute.internal
eks-pod-identity-agent-gbx8l   ip-10-0-41-2.eu-west-1.compute.internal
eks-pod-identity-agent-s7vt7   ip-10-0-16-155.eu-west-1.compute.internal
eks-pod-identity-agent-xwgqw   ip-10-0-3-23.eu-west-1.compute.internal
karpenter-79f59bdfdc-9q5ff     ip-10-0-41-2.eu-west-1.compute.internal
karpenter-79f59bdfdc-cxvhr     ip-10-0-3-23.eu-west-1.compute.internal
kube-proxy-7crbl               ip-10-0-41-2.eu-west-1.compute.internal
kube-proxy-jtzds               ip-10-0-16-155.eu-west-1.compute.internal
kube-proxy-sm42c               ip-10-0-3-23.eu-west-1.compute.internal
```

## Tear Down & Clean-Up

Because Karpenter manages the state of node resources outside of Terraform, Karpenter-created resources need to be de-provisioned first, before the remaining resources are removed with Terraform.
- Remove the example deployment created above and any nodes created by Karpenter

  ```bash
  kubectl delete deployment inflate
  kubectl delete node -l karpenter.sh/nodepool=default
  ```

- Remove the resources created by Terraform

  ```bash
  terraform destroy
  ```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.

## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.3.2 |
| aws | >= 5.40 |
| helm | >= 2.7 |
| kubectl | >= 2.0 |

## Providers

| Name | Version |
|---|---|
| aws | >= 5.40 |
| aws.virginia | >= 5.40 |
| helm | >= 2.7 |
| kubectl | >= 2.0 |

## Modules

| Name | Source | Version |
|---|---|---|
| eks | ../.. | n/a |
| karpenter | ../../modules/karpenter | n/a |
| karpenter_disabled | ../../modules/karpenter | n/a |
| vpc | terraform-aws-modules/vpc/aws | ~> 5.0 |

## Resources

| Name | Type |
|---|---|
| helm_release.karpenter | resource |
| kubectl_manifest.karpenter_example_deployment | resource |
| kubectl_manifest.karpenter_node_class | resource |
| kubectl_manifest.karpenter_node_pool | resource |
| aws_availability_zones.available | data source |
| aws_ecrpublic_authorization_token.token | data source |

## Inputs

No inputs.

## Outputs

| Name | Description |
|---|---|
| access_entries | Map of access entries created and their attributes |
| cloudwatch_log_group_arn | Arn of cloudwatch log group created |
| cloudwatch_log_group_name | Name of cloudwatch log group created |
| cluster_addons | Map of attribute maps for all EKS cluster addons enabled |
| cluster_arn | The Amazon Resource Name (ARN) of the cluster |
| cluster_certificate_authority_data | Base64 encoded certificate data required to communicate with the cluster |
| cluster_endpoint | Endpoint for your Kubernetes API server |
| cluster_iam_role_arn | IAM role ARN of the EKS cluster |
| cluster_iam_role_name | IAM role name of the EKS cluster |
| cluster_iam_role_unique_id | Stable and unique string identifying the IAM role |
| cluster_id | The ID of the EKS cluster. Note: currently a value is returned only for local EKS clusters created on Outposts |
| cluster_identity_providers | Map of attribute maps for all EKS identity providers enabled |
| cluster_ip_family | The IP family used by the cluster (e.g. ipv4 or ipv6) |
| cluster_name | The name of the EKS cluster |
| cluster_oidc_issuer_url | The URL on the EKS cluster for the OpenID Connect identity provider |
| cluster_platform_version | Platform version for the cluster |
| cluster_primary_security_group_id | Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console |
| cluster_security_group_arn | Amazon Resource Name (ARN) of the cluster security group |
| cluster_security_group_id | ID of the cluster security group |
| cluster_service_cidr | The CIDR block where Kubernetes pod and service IP addresses are assigned from |
| cluster_status | Status of the EKS cluster. One of CREATING, ACTIVE, DELETING, FAILED |
| cluster_tls_certificate_sha1_fingerprint | The SHA1 fingerprint of the public key of the cluster's certificate |
| eks_managed_node_groups | Map of attribute maps for all EKS managed node groups created |
| eks_managed_node_groups_autoscaling_group_names | List of the autoscaling group names created by EKS managed node groups |
| fargate_profiles | Map of attribute maps for all EKS Fargate Profiles created |
| karpenter_event_rules | Map of the event rules created and their attributes |
| karpenter_iam_role_arn | The Amazon Resource Name (ARN) specifying the controller IAM role |
| karpenter_iam_role_name | The name of the controller IAM role |
| karpenter_iam_role_unique_id | Stable and unique string identifying the controller IAM role |
| karpenter_instance_profile_arn | ARN assigned by AWS to the instance profile |
| karpenter_instance_profile_id | Instance profile's ID |
| karpenter_instance_profile_name | Name of the instance profile |
| karpenter_instance_profile_unique | Stable and unique string identifying the IAM instance profile |
| karpenter_node_iam_role_arn | The Amazon Resource Name (ARN) specifying the IAM role |
| karpenter_node_iam_role_name | The name of the IAM role |
| karpenter_node_iam_role_unique_id | Stable and unique string identifying the IAM role |
| karpenter_queue_arn | The ARN of the SQS queue |
| karpenter_queue_name | The name of the created Amazon SQS queue |
| karpenter_queue_url | The URL for the created Amazon SQS queue |
| node_security_group_arn | Amazon Resource Name (ARN) of the node shared security group |
| node_security_group_id | ID of the node shared security group |
| oidc_provider | The OpenID Connect identity provider (issuer URL without leading https://) |
| oidc_provider_arn | The ARN of the OIDC Provider if enable_irsa = true |
| self_managed_node_groups | Map of attribute maps for all self managed node groups created |
| self_managed_node_groups_autoscaling_group_names | List of the autoscaling group names created by self-managed node groups |