Getting started
New for 2024
eksctl now supports new region Kuala Lumpur (ap-southeast-5)
EKS Add-ons now support receiving IAM permissions via EKS Pod Identity Associations
eksctl now supports AMIs based on AmazonLinux2023
eksctl main features in 2023
eksctl now supports configuring cluster access management via AWS EKS Access Entries.
eksctl now supports configuring fine-grained permissions for apps running on EKS via EKS Pod Identity Associations.
eksctl now supports updating the subnets and security groups associated with the EKS control plane.
eksctl now supports creating fully private clusters on AWS Outposts.
eksctl now supports new ISO regions us-iso-east-1 and us-isob-east-1.
eksctl now supports new regions: Calgary (ca-west-1), Tel Aviv (il-central-1), Melbourne (ap-southeast-4), Hyderabad (ap-south-2), Spain (eu-south-2) and Zurich (eu-central-2).
eksctl is a simple CLI tool for creating and managing clusters on EKS, Amazon's managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, was created by Weaveworks and welcomes contributions from the community.
Create a basic cluster in minutes with just one command
A cluster will be created with default parameters:
an auto-generated name, e.g. fabulous-mushroom-1527688624
two m5.large worker nodes (this instance type suits most common use-cases, and is good value for money)
the us-west-2 region
$ eksctl create cluster
[ℹ] using region us-west-2
[ℹ] set availability zones to [us-west-2a us-west-2c us-west-2b]
[ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-98b3b83a" will use "ami-05ecac759c81e0b0c" [AmazonLinux2/1.11]
[ℹ] creating EKS cluster "floral-unicorn-1540567338" in "us-west-2" region
[ℹ] will create 2 separate cloudformation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check cloudformation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=floral-unicorn-1540567338'
[ℹ] 2 sequential tasks: { create cluster control plane "floral-unicorn-1540567338", create nodegroup "ng-98b3b83a" }
[ℹ] build cluster stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] deploy stack "eksctl-floral-unicorn-1540567338-cluster"
[ℹ] build nodegroup stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[ℹ] --nodes-min=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] --nodes-max=2 was set automatically for nodegroup ng-98b3b83a
[ℹ] deploy stack "eksctl-floral-unicorn-1540567338-nodegroup-ng-98b3b83a"
[✔] all EKS cluster resource for "floral-unicorn-1540567338" had been created
[✔] saved kubeconfig as "~/.kube/config"
[ℹ] adding role "arn:aws:iam::376248598259:role/eksctl-ridiculous-sculpture-15547-NodeInstanceRole-1F3IHNVD03Z74" to auth ConfigMap
[ℹ] nodegroup "ng-98b3b83a" has 1 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is not ready
[ℹ] waiting for at least 2 node(s) to become ready in "ng-98b3b83a"
[ℹ] nodegroup "ng-98b3b83a" has 2 node(s)
[ℹ] node "ip-192-168-64-220.us-west-2.compute.internal" is ready
[ℹ] node "ip-192-168-8-135.us-west-2.compute.internal" is ready
[ℹ] kubectl command should work with "~/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "floral-unicorn-1540567338" in "us-west-2" region is ready
Customize your cluster by using a config file. Just run
eksctl create cluster -f cluster.yaml
to apply a cluster.yaml file:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: basic-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 10
  - name: ng-2
    instanceType: m5.xlarge
    desiredCapacity: 2
Once you have created a cluster, you will find that cluster credentials were added in ~/.kube/config. If you have kubectl v1.10.x as well as aws-iam-authenticator commands in your PATH, you should be able to use kubectl. You will need to make sure to use the same AWS API credentials for this also; check the EKS docs for instructions. If you installed eksctl via Homebrew, you should have all of these dependencies installed already.
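For example, a quick way to sanity-check the setup (a minimal sketch; it assumes the kubeconfig written by eksctl is in the default location):
# confirm the client tools are on your PATH
kubectl version --client
aws-iam-authenticator version
# confirm the new context from ~/.kube/config works
kubectl get nodes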
To learn more about how to create clusters and other features continue reading the Creating and Managing Clusters section.
To list the details of a cluster or all of the clusters, use:
eksctl get cluster [--name=<name>] [--region=<region>]
To create a basic cluster, but with a different name, run:
eksctl create cluster --name=cluster-1 --nodes=4
EKS supports versions 1.23 (extended), 1.24 (extended), 1.25, 1.26, 1.27, 1.28, 1.29, 1.30 (default) and 1.31. With eksctl you can deploy any of the supported versions by passing --version.
eksctl create cluster --version=1.28
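The version can also be pinned in a config file via metadata.version; a minimal sketch (the cluster name and region below are placeholders):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: versioned-cluster   # placeholder name
  region: us-west-2         # placeholder region
  version: "1.30"           # any version from the supported list above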
You can also create a cluster passing all configuration information in a file using --config-file:
eksctl create cluster --config-file=<path>
To create a cluster using a configuration file and skip creating nodegroups until later:
eksctl create cluster --config-file=<path> --without-nodegroup
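The nodegroups declared in that same file can then be created later; for example (a sketch, with <path> standing in for your config file):
# create the nodegroups defined in the config file once the cluster exists
eksctl create nodegroup --config-file=<path>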
To write cluster credentials to a file other than the default, run:
eksctl create cluster --name=cluster-2 --nodes=4 --kubeconfig=./kubeconfig.cluster-2.yaml
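To use that file afterwards, point kubectl at it explicitly; for example (a sketch assuming the path above):
# pass the kubeconfig explicitly ...
kubectl --kubeconfig=./kubeconfig.cluster-2.yaml get nodes
# ... or export it for the current shell session
export KUBECONFIG=./kubeconfig.cluster-2.yaml
kubectl get nodes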
To prevent storing cluster credentials locally, run:
eksctl create cluster --name=cluster-3 --nodes=4 --write-kubeconfig=false
To let eksctl manage cluster credentials under the ~/.kube/eksctl/clusters directory, run:
eksctl create cluster --name=cluster-3 --nodes=4 --auto-kubeconfig
To obtain cluster credentials at any point in time, run:
eksctl utils write-kubeconfig --cluster=<name> [--kubeconfig=<path>] [--set-kubeconfig-context=<bool>]
eksctl supports caching credentials. This is useful when using MFA and not wanting to continuously enter the MFA token on each eksctl command run.
To enable credential caching, set the following environment property eksctl_enable_credential_cache as such:
export eksctl_enable_credential_cache=1
By default, this will result in a cache file under ~/.eksctl/cache/credentials.yaml which will contain creds per profile that is being used. To clear the cache, delete this file.
It is also possible to configure the location of this cache file using EKSCTL_CREDENTIAL_CACHE_FILENAME, which should be the full path to a file in which to store the cached credentials. These are credentials, so make sure access to this file is restricted to the current user and that it is kept in a secure location.
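For example, a sketch of enabling the cache with a custom location that only the current user can read (the file path and region are arbitrary):
export eksctl_enable_credential_cache=1
export EKSCTL_CREDENTIAL_CACHE_FILENAME="$HOME/.eksctl/cache/creds-profile-a.yaml"   # arbitrary path
eksctl get cluster --region=us-west-2            # any eksctl command populates the cache
chmod 600 "$EKSCTL_CREDENTIAL_CACHE_FILENAME"    # restrict the cached credentials to the current user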
To use a 3-5 node Auto Scaling Group, run:
eksctl create cluster --name=cluster-5 --nodes-min=3 --nodes-max=5
You will still need to install and configure Auto Scaling. See the "Enable Auto Scaling" section. Also note that, depending on your workload, you might need to use a separate nodegroup for each AZ. See Zone-aware Auto Scaling for more info.
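The same sizing can be expressed in a config file via minSize and maxSize on a nodegroup; a minimal sketch (name and region are placeholders):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-5           # placeholder name
  region: us-west-2         # placeholder region

nodeGroups:
  - name: ng-autoscaled
    instanceType: m5.large
    minSize: 3
    maxSize: 5
    desiredCapacity: 3      # initial size within the 3-5 range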
In order to allow SSH access to nodes, eksctl imports ~/.ssh/id_rsa.pub by default. To use a different SSH public key, e.g. my_eks_node_id.pub, run:
eksctl create cluster --ssh-access --ssh-public-key=my_eks_node_id.pub
To use a pre-existing EC2 key pair in the us-east-1 region, you can specify the key pair name (which must not resolve to a local file path), e.g. to use my_kubernetes_key run:
eksctl create cluster --ssh-access --ssh-public-key=my_kubernetes_key --region=us-east-1
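In a config file, the equivalent SSH settings live under a nodegroup's ssh block; a sketch of a single nodegroup entry within a ClusterConfig (key path and names are placeholders):
nodeGroups:
  - name: ng-ssh
    instanceType: m5.large
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/my_eks_node_id.pub   # or use publicKeyName for an existing EC2 key pair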
AWS Systems Manager (SSM) is enabled by default,so it can be used to SSH onto nodes.
eksctl create cluster --enable-ssm
If you are creating managed nodes with a custom launch template, the --enable-ssm flag is disallowed.
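Once nodes are up, you can open a shell on one of them with the AWS CLI and the Session Manager plugin; for example (the instance id is a placeholder):
# open an interactive session on a worker node via SSM
aws ssm start-session --target i-0123456789abcdef0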
To add custom tags for all resources, use --tags.
eksctl create cluster --tags environment=stage --region=us-east-1
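In a config file the same tags go under metadata.tags; a sketch of the metadata section (the cluster name is a placeholder):
metadata:
  name: tagged-cluster      # placeholder name
  region: us-east-1
  tags:
    environment: stage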
To configure the node root volume, use the --node-volume-size (and optionally --node-volume-type), e.g.:
eksctl create cluster --node-volume-size=50 --node-volume-type=io1
The default volume size is 80G.
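The config-file equivalent sets volumeSize and volumeType on the nodegroup; a sketch of a single nodegroup entry within a ClusterConfig:
nodeGroups:
  - name: ng-io
    instanceType: m5.large
    volumeSize: 50          # GiB
    volumeType: io1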
To delete a cluster,run:
eksctl delete cluster --name=<name> [--region=<region>]
Cluster info will be cleaned up in the kubernetes config file. Please run kubectl config get-contexts to select the right context.
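For example, after deleting a cluster you might inspect and switch contexts like this (the context name is a placeholder):
# list the contexts that remain in your kubeconfig
kubectl config get-contexts
# switch to the one you want to use
kubectl config use-context my-other-context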