
Bootstrap EKS Auto Mode Cluster with ACK and kro using Kind

Use a Kind cluster with AWS Controllers for Kubernetes (ACK) and Kube Resource Orchestrator (kro) to bootstrap an EKS Auto Mode cluster that ends up managing itself


This post demonstrates how to use a temporary Kind cluster with AWS Controllers for Kubernetes (ACK) and Kube Resource Orchestrator (kro) to bootstrap an EKS Auto Mode Cluster that manages itself. The process involves creating AWS resources, including an S3 bucket and an EKS Auto Mode Cluster, using native Kubernetes APIs, backing up those resources with Velero, and restoring them to the new EKS Auto Mode Cluster — effectively making it self-managed.


Architecture Overview

The bootstrap process follows these steps:

  1. Deploy Kind cluster locally
  2. Install kro and ACK controllers on Kind cluster
  3. Use ACK + kro to provision EKS Auto Mode Cluster and S3 bucket
  4. Install Velero on Kind cluster and backup kro + ACK resources to S3 bucket
  5. Install kro, ACK controllers, and Velero on EKS Auto Mode Cluster
  6. Restore kro and ACK resources to EKS Auto Mode Cluster
  7. Delete Kind cluster - EKS Auto Mode Cluster now manages itself
flowchart TD
    subgraph Local["Local Machine"]
        A[1. Deploy Kind Cluster]
    end

    subgraph Kind["Kind Cluster"]
        B[2. Install kro + ACK]
        C[3. Provision EKS Cluster + S3 Bucket]
        D[4. Install Velero + Backup Resources]
    end

    subgraph AWS["AWS Cloud"]
        S3[(S3 Bucket)]
        subgraph EKS["EKS Auto Mode Cluster"]
            E[5. Install kro + ACK + Velero]
            F[6. Restore Resources]
        end
    end

    A --> B
    B --> C
    C --> S3
    C --> EKS
    D -->|Backup| S3
    E --> F
    S3 -->|Restore| F

    style Local fill:#326ce5,stroke:#fff,color:#fff
    style Kind fill:#326ce5,stroke:#fff,color:#fff
    style AWS fill:#ff9900,stroke:#fff,color:#fff
    style EKS fill:#ff9900,stroke:#fff,color:#fff

ACK provides Kubernetes CRDs for AWS services, while kro orchestrates complex resource dependencies, creating a powerful infrastructure management platform.
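
For illustration, an ACK-managed AWS resource is just a Kubernetes manifest. A minimal sketch of an S3 bucket (the hardened, policy-carrying version is defined later through a kro ResourceGraphDefinition) could look like this:

apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: example-bucket
spec:
  name: example-bucket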

Prerequisites

You will need to configure the AWS CLI and set up other necessary secrets and variables:

# AWS Credentials
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SESSION_TOKEN="xxxxxxxx"
export AWS_ROLE_TO_ASSUME="arn:aws:iam::7xxxxxxxxxx7:role/Gixxxxxxxxxxxxxxxxxxxxle"

If you plan to follow this document and its tasks, you will need to set up a few environment variables, such as:

# AWS Region
export AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-us-east-1}"
# Hostname / FQDN definitions
export CLUSTER_FQDN="k02.k8s.mylabs.dev"
# Cluster Name: k02
export CLUSTER_NAME="${CLUSTER_FQDN%%.*}"
export MY_EMAIL="petr.ruzicka@gmail.com"
export TMP_DIR="${TMP_DIR:-${PWD}/tmp}"
# Tags used to tag the AWS resources
export TAGS="${TAGS:-Owner=${MY_EMAIL},Environment=dev,Cluster=${CLUSTER_FQDN}}"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) && export AWS_ACCOUNT_ID
mkdir -pv "${TMP_DIR}"/{"${CLUSTER_FQDN}","kind-${CLUSTER_NAME}-bootstrap"}

Bootstrap Kind Cluster and Provision EKS Auto Mode with ACK and kro

This section covers creating a temporary Kind cluster, installing kro and ACK controllers, defining ResourceGraphDefinitions, and using them to provision the EKS Auto Mode Cluster along with all supporting AWS resources.

Create Kind Cluster

Kind logo

Create the Kind cluster:

kind create cluster --name "kind-${CLUSTER_NAME}-bootstrap" --kubeconfig "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kubeconfig-kind-${CLUSTER_NAME}-bootstrap.yaml"
export KUBECONFIG="${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kubeconfig-kind-${CLUSTER_NAME}-bootstrap.yaml"
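
As a quick sanity check (optional), confirm the bootstrap cluster is reachable before installing anything on it:

kubectl cluster-info
kubectl get nodes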

Install kro on Kind Cluster

Install kro using Helm:

# renovate: datasource=docker depName=registry.k8s.io/kro/charts/kro
KRO_HELM_CHART_VERSION="0.8.5"
helm upgrade --install --version=${KRO_HELM_CHART_VERSION} --namespace kro-system --create-namespace kro oci://registry.k8s.io/kro/charts/kro
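
Optionally verify that the kro controller is running and its ResourceGraphDefinition CRD is registered:

kubectl -n kro-system get pods
kubectl get crd resourcegraphdefinitions.kro.run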

Install ACK Controllers on Kind Cluster

ACK logo

Create namespace and configure AWS credentials for ACK:

kubectl create namespace ack-system
set +x
kubectl -n ack-system create secret generic aws-credentials --from-literal=credentials="[default]
aws_access_key_id=${AWS_ACCESS_KEY_ID}
aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
aws_session_token=${AWS_SESSION_TOKEN}
aws_role_to_assume=${AWS_ROLE_TO_ASSUME}"
set -x

Install ACK controllers (S3, IAM, EKS, EC2, KMS, CloudWatch Logs):

# renovate: datasource=github-tags depName=aws-controllers-k8s/ack-chart
ACK_HELM_CHART_VERSION="46.75.1"

cat > "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/helm_values-ack.yml" << EOF
eks:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
ec2:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
iam:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
kms:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
cloudwatchlogs:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
s3:
  enabled: true
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
EOF
helm upgrade --install --version=${ACK_HELM_CHART_VERSION} --namespace ack-system --values "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/helm_values-ack.yml" ack oci://public.ecr.aws/aws-controllers-k8s/ack-chart
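
Before moving on, it helps to confirm that all six ACK controller deployments are up (an optional check):

kubectl -n ack-system get deployments
kubectl -n ack-system get pods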

Create EKS Auto Mode Cluster with ACK and kro

Create the EKS Auto Mode Cluster using kro ResourceGraphDefinitions (RGDs). Each supporting resource (KMS key, S3 bucket, CloudWatch log group, VPC, and Pod Identity associations) gets its own RGD, and a top-level RGD composes them together with the cluster itself.

Add KMS Key ResourceGraphDefinition

Define a KMS key used for encrypting EKS Auto Mode Cluster and S3 data:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-kmskey-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: kmskey
spec:
  schema:
    apiVersion: v1alpha1
    kind: KmsKey
    spec:
      name: string
      accountId: string
      region: string | default="us-east-1"
    status:
      keyARN: ${kmsKey.status.ackResourceMetadata.arn}
      keyID: ${kmsKey.status.keyID}
  resources:
  - id: kmsKey
    template:
      apiVersion: kms.services.k8s.aws/v1alpha1
      kind: Key
      metadata:
        name: "${schema.spec.name}-kms-key"
      spec:
        description: "KMS key for ${schema.spec.name} EKS Auto Mode Cluster"
        enableKeyRotation: true
        policy: |
          {
            "Version": "2012-10-17",
            "Id": "eks-key-policy-${schema.spec.name}",
            "Statement": [
              {
                "Sid": "Allow full access to the account root",
                "Effect": "Allow",
                "Principal": {
                  "AWS": "arn:aws:iam::${schema.spec.accountId}:root"
                },
                "Action": "kms:*",
                "Resource": "*"
              },
              {
                "Sid": "Allow AWS services to use the key",
                "Effect": "Allow",
                "Principal": {
                  "Service": [
                    "eks.amazonaws.com",
                    "logs.${schema.spec.region}.amazonaws.com"
                  ]
                },
                "Action": [
                  "kms:Encrypt",
                  "kms:Decrypt",
                  "kms:ReEncrypt*",
                  "kms:GenerateDataKey*",
                  "kms:CreateGrant",
                  "kms:DescribeKey"
                ],
                "Resource": "*",
                "Condition": {
                  "StringEquals": {
                    "aws:SourceAccount": "${schema.spec.accountId}"
                  }
                }
              },
              {
                "Sid": "Allow S3 access for Velero and EKS Auto Mode Cluster node volumes",
                "Effect": "Allow",
                "Principal": {
                  "AWS": "*"
                },
                "Action": [
                  "kms:Encrypt",
                  "kms:Decrypt",
                  "kms:ReEncrypt*",
                  "kms:GenerateDataKey*",
                  "kms:CreateGrant",
                  "kms:DescribeKey"
                ],
                "Resource": "*",
                "Condition": {
                  "StringEquals": {
                    "kms:ViaService": [
                      "s3.${schema.spec.region}.amazonaws.com",
                      "ec2.${schema.spec.region}.amazonaws.com"
                    ],
                    "kms:CallerAccount": "${schema.spec.accountId}"
                  }
                }
              }
            ]
          }
  - id: kmsKeyAlias
    template:
      apiVersion: kms.services.k8s.aws/v1alpha1
      kind: Alias
      metadata:
        name: "${schema.spec.name}-kms-alias"
      spec:
        name: "alias/${schema.spec.name}-eks-auto-mode-cluster"
        targetKeyID: ${kmsKey.status.keyID}
EOF
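
After applying this (and each of the following) ResourceGraphDefinition, kro generates a new CRD from the schema and the RGD should report state Active. An optional quick check:

kubectl get resourcegraphdefinitions
kubectl get crd | grep kro.run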

Create S3 Bucket with ACK and kro

Next, create an RGD that defines how to create an S3 bucket with the proper encryption settings and bucket policy:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-s3bucket-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: s3-velero-bucket
spec:
  schema:
    apiVersion: v1alpha1
    kind: S3Bucket
    spec:
      bucketName: string
      region: string
      kmsKeyARN: string
      tags:
        owner: "string | default=\"\""
        environment: "string | default=\"dev\""
        cluster: "string | default=\"\""
    status:
      bucketARN: ${s3bucket.status.ackResourceMetadata.arn}
      bucketName: ${s3bucket.spec.name}
  resources:
    - id: s3bucket
      template:
        apiVersion: s3.services.k8s.aws/v1alpha1
        kind: Bucket
        metadata:
          name: ${schema.spec.bucketName}
        spec:
          name: ${schema.spec.bucketName}
          publicAccessBlock:
            blockPublicACLs: true
            blockPublicPolicy: true
            ignorePublicACLs: true
            restrictPublicBuckets: true
          encryption:
            rules:
              - applyServerSideEncryptionByDefault:
                  sseAlgorithm: aws:kms
                  kmsMasterKeyID: ${schema.spec.kmsKeyARN}
          tagging:
            tagSet:
              - key: "Name"
                value: "${schema.spec.bucketName}"
              - key: "Owner"
                value: "${schema.spec.tags.owner}"
              - key: "Environment"
                value: "${schema.spec.tags.environment}"
              - key: "Cluster"
                value: "${schema.spec.tags.cluster}"
          policy: |
            {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Sid": "ForceSSLOnlyAccess",
                  "Effect": "Deny",
                  "Principal": "*",
                  "Action": "s3:*",
                  "Resource": [
                    "arn:aws:s3:::${schema.spec.bucketName}",
                    "arn:aws:s3:::${schema.spec.bucketName}/*"
                  ],
                  "Condition": {
                    "Bool": {
                      "aws:SecureTransport": "false"
                    }
                  }
                }
              ]
            }
EOF

Add CloudWatch LogGroup ResourceGraphDefinition

Create a CloudWatch LogGroup for EKS Auto Mode Cluster logs:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-ekscloudwatchloggroup-loggroup-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: ekscloudwatchloggroup
spec:
  schema:
    apiVersion: v1alpha1
    kind: EksCloudWatchLogGroup
    spec:
      name: string
      retentionDays: "integer | default=1"
      kmsKeyARN: "string | default=\"\""
      tags:
        owner: "string | default=\"\""
        environment: "string | default=\"dev\""
        cluster: "string | default=\"\""
    status:
      logGroupName: ${cloudWatchLogGroup.spec.name}
  resources:
  - id: cloudWatchLogGroup
    template:
      apiVersion: cloudwatchlogs.services.k8s.aws/v1alpha1
      kind: LogGroup
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-logs"
      spec:
        name: "/aws/eks/${schema.spec.name}/cluster"
        retentionDays: ${schema.spec.retentionDays}
        kmsKeyID: ${schema.spec.kmsKeyARN}
        tags:
          Name: "/aws/eks/${schema.spec.name}/cluster"
          Owner: "${schema.spec.tags.owner}"
          Environment: "${schema.spec.tags.environment}"
          Cluster: "${schema.spec.tags.cluster}"
EOF

Add VPC ResourceGraphDefinition

Create a VPC with networking resources for EKS Auto Mode Cluster:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-eksvpc-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: eksvpc
spec:
  schema:
    apiVersion: v1alpha1
    kind: EksVpc
    spec:
      name: string
      region: string | default="us-east-1"
      tags:
        owner: "string | default=\"\""
        environment: "string | default=\"dev\""
        cluster: "string | default=\"\""
      cidr:
        vpcCidr: "string | default=\"192.168.0.0/16\""
        publicSubnet1Cidr: "string | default=\"192.168.0.0/19\""
        publicSubnet2Cidr: "string | default=\"192.168.32.0/19\""
        privateSubnet1Cidr: "string | default=\"192.168.64.0/19\""
        privateSubnet2Cidr: "string | default=\"192.168.96.0/19\""
    status:
      vpcID: ${vpc.status.vpcID}
      publicSubnet1ID: ${publicSubnet1.status.subnetID}
      publicSubnet2ID: ${publicSubnet2.status.subnetID}
      privateSubnet1ID: ${privateSubnet1.status.subnetID}
      privateSubnet2ID: ${privateSubnet2.status.subnetID}
  resources:
  - id: vpc
    readyWhen:
      - ${vpc.status.state == "available"}
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: VPC
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-vpc"
      spec:
        cidrBlocks:
          - ${schema.spec.cidr.vpcCidr}
        enableDNSSupport: true
        enableDNSHostnames: true
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-vpc"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  - id: eip
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: ElasticIPAddress
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-eip"
      spec:
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-eip"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  - id: internetGateway
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: InternetGateway
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-igw"
      spec:
        vpc: ${vpc.status.vpcID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-igw"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  - id: natGateway
    readyWhen:
      - '${natGateway.status.state == "available"}'
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: NATGateway
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-nat-gateway"
      spec:
        subnetID: ${publicSubnet1.status.subnetID}
        allocationID: ${eip.status.allocationID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-nat-gateway"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  - id: publicRoutetable
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: RouteTable
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-public-routetable"
      spec:
        vpcID: ${vpc.status.vpcID}
        routes:
        - destinationCIDRBlock: 0.0.0.0/0
          gatewayID: ${internetGateway.status.internetGatewayID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-public-routetable"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  - id: privateRoutetable
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: RouteTable
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-private-routetable"
      spec:
        vpcID: ${vpc.status.vpcID}
        routes:
        - destinationCIDRBlock: 0.0.0.0/0
          natGatewayID: ${natGateway.status.natGatewayID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-private-routetable"
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  # Public Subnet 1 (us-east-1a)
  - id: publicSubnet1
    readyWhen:
      - ${publicSubnet1.status.state == "available"}
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: Subnet
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-public-subnet1-${schema.spec.region}a"
      spec:
        availabilityZone: ${schema.spec.region}a
        cidrBlock: ${schema.spec.cidr.publicSubnet1Cidr}
        mapPublicIPOnLaunch: true
        vpcID: ${vpc.status.vpcID}
        routeTables:
        - ${publicRoutetable.status.routeTableID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-public-subnet1-${schema.spec.region}a"
          - key: kubernetes.io/role/elb
            value: '1'
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  # Public Subnet 2 (us-east-1b)
  - id: publicSubnet2
    readyWhen:
      - ${publicSubnet2.status.state == "available"}
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: Subnet
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-public-subnet2-${schema.spec.region}b"
      spec:
        availabilityZone: ${schema.spec.region}b
        cidrBlock: ${schema.spec.cidr.publicSubnet2Cidr}
        mapPublicIPOnLaunch: true
        vpcID: ${vpc.status.vpcID}
        routeTables:
        - ${publicRoutetable.status.routeTableID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-public-subnet2-${schema.spec.region}b"
          - key: kubernetes.io/role/elb
            value: '1'
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  # Private Subnet 1 (us-east-1a)
  - id: privateSubnet1
    readyWhen:
      - ${privateSubnet1.status.state == "available"}
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: Subnet
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-private-subnet1-${schema.spec.region}a"
      spec:
        availabilityZone: ${schema.spec.region}a
        cidrBlock: ${schema.spec.cidr.privateSubnet1Cidr}
        vpcID: ${vpc.status.vpcID}
        routeTables:
        - ${privateRoutetable.status.routeTableID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-private-subnet1-${schema.spec.region}a"
          - key: kubernetes.io/role/internal-elb
            value: '1'
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
  # Private Subnet 2 (us-east-1b)
  - id: privateSubnet2
    readyWhen:
      - ${privateSubnet2.status.state == "available"}
    template:
      apiVersion: ec2.services.k8s.aws/v1alpha1
      kind: Subnet
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-private-subnet2-${schema.spec.region}b"
      spec:
        availabilityZone: ${schema.spec.region}b
        cidrBlock: ${schema.spec.cidr.privateSubnet2Cidr}
        vpcID: ${vpc.status.vpcID}
        routeTables:
        - ${privateRoutetable.status.routeTableID}
        tags:
          - key: "Name"
            value: "${schema.spec.name}-eks-auto-mode-cluster-private-subnet2-${schema.spec.region}b"
          - key: kubernetes.io/role/internal-elb
            value: '1'
          - key: "Owner"
            value: "${schema.spec.tags.owner}"
          - key: "Environment"
            value: "${schema.spec.tags.environment}"
          - key: "Cluster"
            value: "${schema.spec.tags.cluster}"
EOF

Add Pod Identity Associations ResourceGraphDefinition

Create an RGD for Pod Identity Associations that sets up Velero and ACK controller permissions:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-podidentityassociations-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: podidentityassociations
spec:
  schema:
    apiVersion: v1alpha1
    kind: PodIdentityAssociations
    spec:
      name: string
      clusterName: string
      accountId: string
      s3BucketName: string
      tags:
        owner: "string | default=\"\""
        environment: "string | default=\"dev\""
        cluster: "string | default=\"\""
  resources:
  - id: veleroPolicy
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Policy
      metadata:
        name: ${schema.spec.name}-velero-policy
      spec:
        name: ${schema.spec.name}-velero-policy
        description: "Velero S3 backup and snapshot permissions"
        policyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Sid": "S3ObjectAccess",
                "Effect": "Allow",
                "Action": [
                  "s3:GetObject",
                  "s3:DeleteObject",
                  "s3:PutObject",
                  "s3:AbortMultipartUpload",
                  "s3:ListMultipartUploadParts"
                ],
                "Resource": "arn:aws:s3:::${schema.spec.s3BucketName}/*"
              },
              {
                "Sid": "S3BucketAccess",
                "Effect": "Allow",
                "Action": [
                  "s3:ListBucket"
                ],
                "Resource": "arn:aws:s3:::${schema.spec.s3BucketName}"
              }
            ]
          }
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: veleroRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-velero-velero"
      spec:
        name: "${schema.spec.name}-velero-velero"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - ${veleroPolicy.status.ackResourceMetadata.arn}
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: veleroPodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-velero-velero"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: velero
        serviceAccount: velero-server
        roleARN: ${veleroRole.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackCloudwatchlogsRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-cloudwatchlogs-controller"
      spec:
        name: "${schema.spec.name}-ack-cloudwatchlogs-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - "arn:aws:iam::aws:policy/CloudWatchFullAccessV2"
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackCloudwatchlogsPodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-cloudwatchlogs-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-cloudwatchlogs-controller
        roleARN: ${ackCloudwatchlogsRole.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackEc2Role
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-ec2-controller"
      spec:
        name: "${schema.spec.name}-ack-ec2-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackEc2PodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-ec2-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-ec2-controller
        roleARN: ${ackEc2Role.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackEksRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-eks-controller"
      spec:
        name: "${schema.spec.name}-ack-eks-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        inlinePolicies:
          eks-controller-policy: |
            {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": [
                    "eks:*",
                    "iam:GetRole",
                    "iam:PassRole",
                    "iam:ListAttachedRolePolicies",
                    "ec2:DescribeSubnets",
                    "kms:DescribeKey",
                    "kms:CreateGrant"
                  ],
                  "Resource": "*"
                }
              ]
            }
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackEksPodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-eks-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-eks-controller
        roleARN: ${ackEksRole.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackIamRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-iam-controller"
      spec:
        name: "${schema.spec.name}-ack-iam-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - "arn:aws:iam::aws:policy/IAMFullAccess"
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackIamPodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-iam-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-iam-controller
        roleARN: ${ackIamRole.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackKmsRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-kms-controller"
      spec:
        name: "${schema.spec.name}-ack-kms-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        inlinePolicies:
          kms-controller-policy: |
            {
              "Version": "2012-10-17",
              "Statement": [
                {
                  "Effect": "Allow",
                  "Action": "kms:*",
                  "Resource": "*"
                }
              ]
            }
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackKmsPodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-kms-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-kms-controller
        roleARN: ${ackKmsRole.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: ackS3Role
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ack-s3-controller"
      spec:
        name: "${schema.spec.name}-ack-s3-controller"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "pods.eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - "arn:aws:iam::aws:policy/AmazonS3FullAccess"
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: ackS3PodIdentityAssociation
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: PodIdentityAssociation
      metadata:
        name: "${schema.spec.name}-ack-system-ack-s3-controller"
      spec:
        clusterName: ${schema.spec.clusterName}
        namespace: ack-system
        serviceAccount: ack-s3-controller
        roleARN: ${ackS3Role.status.ackResourceMetadata.arn}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
EOF

Add EKS Auto Mode Cluster ResourceGraphDefinition

Create the EKS Auto Mode Cluster RGD:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-eks-auto-mode-cluster-rgd.yaml" << 'EOF' | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: eks-auto-mode-cluster
spec:
  schema:
    apiVersion: v1alpha1
    kind: EksAutoModeCluster
    spec:
      name: string
      region: string | default="us-east-1"
      k8sVersion: "string | default=\"\""
      accountId: string
      adminRoleARN: string
      tags:
        owner: "string | default=\"\""
        environment: "string | default=\"dev\""
        cluster: "string | default=\"\""
      s3BucketName: string
      vpcConfig:
        endpointPrivateAccess: "boolean | default=true"
        endpointPublicAccess: "boolean | default=true"
      nodeGroupConfig:
        desiredSize: "integer | default=2"
        minSize: "integer | default=1"
        maxSize: "integer | default=3"
        instanceType: "string | default=\"t4g.medium\""
        volumeSize: "integer | default=20"
      cidr:
        vpcCidr: "string | default=\"192.168.0.0/16\""
        publicSubnet1Cidr: "string | default=\"192.168.0.0/19\""
        publicSubnet2Cidr: "string | default=\"192.168.32.0/19\""
        privateSubnet1Cidr: "string | default=\"192.168.64.0/19\""
        privateSubnet2Cidr: "string | default=\"192.168.96.0/19\""
    status:
      clusterARN: ${cluster.status.ackResourceMetadata.arn}
      clusterStatus: ${cluster.status.status}
      vpcID: ${eksVpc.status.vpcID}
      privateSubnet1ID: ${eksVpc.status.privateSubnet1ID}
      privateSubnet2ID: ${eksVpc.status.privateSubnet2ID}
      kmsKeyARN: ${kmsKey.status.keyARN}
      s3BucketARN: ${s3Bucket.status.bucketARN}
  resources:
  - id: kmsKey
    template:
      apiVersion: kro.run/v1alpha1
      kind: KmsKey
      metadata:
        name: "${schema.spec.name}-kms"
      spec:
        name: "${schema.spec.name}"
        accountId: "${schema.spec.accountId}"
        region: ${schema.spec.region}
  - id: s3Bucket
    template:
      apiVersion: kro.run/v1alpha1
      kind: S3Bucket
      metadata:
        name: "${schema.spec.name}-s3"
      spec:
        bucketName: ${schema.spec.s3BucketName}
        region: ${schema.spec.region}
        kmsKeyARN: ${kmsKey.status.keyARN}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: cloudWatchLogGroup
    template:
      apiVersion: kro.run/v1alpha1
      kind: EksCloudWatchLogGroup
      metadata:
        name: "${schema.spec.name}-logs"
      spec:
        name: "${schema.spec.name}"
        kmsKeyARN: ${kmsKey.status.keyARN}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: eksVpc
    template:
      apiVersion: kro.run/v1alpha1
      kind: EksVpc
      metadata:
        name: "${schema.spec.name}-vpc"
      spec:
        name: "${schema.spec.name}"
        region: ${schema.spec.region}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
        cidr:
          vpcCidr: ${schema.spec.cidr.vpcCidr}
          publicSubnet1Cidr: ${schema.spec.cidr.publicSubnet1Cidr}
          publicSubnet2Cidr: ${schema.spec.cidr.publicSubnet2Cidr}
          privateSubnet1Cidr: ${schema.spec.cidr.privateSubnet1Cidr}
          privateSubnet2Cidr: ${schema.spec.cidr.privateSubnet2Cidr}
  - id: clusterRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-eks-auto-mode-cluster-role"
      spec:
        name: "${schema.spec.name}-eks-auto-mode-cluster-role"
        description: "EKS Auto Mode Cluster IAM role"
        policies:
          - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
          - arn:aws:iam::aws:policy/AmazonEKSComputePolicy
          - arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy
          - arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy
          - arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "eks.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: nodeRole
    template:
      apiVersion: iam.services.k8s.aws/v1alpha1
      kind: Role
      metadata:
        name: "${schema.spec.name}-ng-role"
      spec:
        name: "${schema.spec.name}-nodegroup-${schema.spec.name}-ng-NodeRole"
        assumeRolePolicyDocument: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": [
                  "sts:AssumeRole",
                  "sts:TagSession"
                ]
              }
            ]
          }
        policies:
          - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
          - "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
        tags:
          - key: owner
            value: "${schema.spec.tags.owner}"
          - key: environment
            value: "${schema.spec.tags.environment}"
          - key: cluster
            value: "${schema.spec.tags.cluster}"
  - id: cluster
    readyWhen:
      - '${cluster.status.status == "ACTIVE"}'
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: Cluster
      metadata:
        name: ${schema.spec.name}
        annotations:
          clusterRoleArn: ${clusterRole.status.ackResourceMetadata.arn}
      spec:
        name: ${schema.spec.name}
        roleARN: ${clusterRole.status.ackResourceMetadata.arn}
        version: ${schema.spec.k8sVersion}
        accessConfig:
          authenticationMode: API_AND_CONFIG_MAP
          bootstrapClusterCreatorAdminPermissions: true
        computeConfig:
          enabled: true
          nodeRoleARN: ${nodeRole.status.ackResourceMetadata.arn}
          nodePools:
            - system
            - general-purpose
        kubernetesNetworkConfig:
          ipFamily: ipv4
          elasticLoadBalancing:
            enabled: true
        logging:
          clusterLogging:
            - enabled: true
              types:
                - api
                - audit
                - authenticator
                - controllerManager
                - scheduler
        storageConfig:
          blockStorage:
            enabled: true
        resourcesVPCConfig:
          endpointPrivateAccess: ${schema.spec.vpcConfig.endpointPrivateAccess}
          endpointPublicAccess: ${schema.spec.vpcConfig.endpointPublicAccess}
          subnetIDs:
            - ${eksVpc.status.privateSubnet1ID}
            - ${eksVpc.status.privateSubnet2ID}
        encryptionConfig:
          - provider:
              keyARN: ${kmsKey.status.keyARN}
            resources:
              - secrets
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
  - id: addonPodIdentity
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: Addon
      metadata:
        name: ${schema.spec.name}-pod-identity
        annotations:
          cluster-arn: ${cluster.status.ackResourceMetadata.arn}
      spec:
        name: eks-pod-identity-agent
        clusterName: ${cluster.spec.name}
  - id: accessEntry
    template:
      apiVersion: eks.services.k8s.aws/v1alpha1
      kind: AccessEntry
      metadata:
        name: ${schema.spec.name}-admin-access
        # Reference cluster.status to ensure kro waits for cluster to be ACTIVE
        annotations:
          cluster-arn: ${cluster.status.ackResourceMetadata.arn}
      spec:
        clusterName: ${cluster.spec.name}
        principalARN: ${schema.spec.adminRoleARN}
        type: STANDARD
        accessPolicies:
          - policyARN: "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
            accessScope:
              type: cluster
  - id: podIdentityAssociations
    template:
      apiVersion: kro.run/v1alpha1
      kind: PodIdentityAssociations
      metadata:
        name: "${schema.spec.name}-pod-identity-associations"
      spec:
        name: ${schema.spec.name}
        clusterName: ${schema.spec.name}
        accountId: "${schema.spec.accountId}"
        s3BucketName: ${schema.spec.s3BucketName}
        tags:
          owner: "${schema.spec.tags.owner}"
          environment: "${schema.spec.tags.environment}"
          cluster: "${schema.spec.tags.cluster}"
EOF
kubectl wait --for=jsonpath='{.status.state}'=Active resourcegraphdefinition/eks-auto-mode-cluster -n kro-system --timeout=5m

Create EKS Auto Mode Cluster Instance

Now create a single instance of the top-level RGD; it provisions the EKS Auto Mode Cluster together with all of its supporting resources:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/kro-eks-auto-mode-cluster.yaml" << EOF | kubectl apply -f -
apiVersion: kro.run/v1alpha1
kind: EksAutoModeCluster
metadata:
  name: ${CLUSTER_NAME}
  namespace: kro-system
spec:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  accountId: "${AWS_ACCOUNT_ID}"
  adminRoleARN: "${AWS_ROLE_TO_ASSUME%/*}/admin"
  s3BucketName: ${CLUSTER_FQDN}
  tags:
    owner: ${MY_EMAIL}
    environment: dev
    cluster: ${CLUSTER_FQDN}
EOF
kubectl wait --for=condition=Ready "eksautomodecluster/${CLUSTER_NAME}" -n kro-system --timeout=30m
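
Once the instance reports Ready, the cluster should also be ACTIVE from the AWS side; an optional cross-check with the AWS CLI:

aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.status" --output text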

Install Velero

Velero logo

Install the Velero Helm chart with customized default values:

# renovate: datasource=helm depName=velero registryUrl=https://vmware-tanzu.github.io/helm-charts
VELERO_HELM_CHART_VERSION="11.3.2"

helm repo add --force-update vmware-tanzu https://vmware-tanzu.github.io/helm-charts
cat > "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/helm_values-velero.yml" << EOF
initContainers:
  - name: velero-plugin-for-aws
    # renovate: datasource=docker depName=velero/velero-plugin-for-aws extractVersion=^(?<version>.+)$
    image: velero/velero-plugin-for-aws:v1.13.2
    volumeMounts:
      - mountPath: /target
        name: plugins
upgradeCRDs: false
configuration:
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: ${CLUSTER_FQDN}
      prefix: velero
      config:
        region: ${AWS_DEFAULT_REGION}
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=${AWS_ACCESS_KEY_ID}
      aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
      aws_session_token=${AWS_SESSION_TOKEN}
snapshotsEnabled: false
EOF
helm upgrade --install --version "${VELERO_HELM_CHART_VERSION}" --namespace velero --create-namespace --wait --values "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/helm_values-velero.yml" velero vmware-tanzu/velero

Create a Velero backup for kro and ACK resources. Use resource filtering with API group wildcards to capture kro.run objects (cluster-scoped RGDs and namespaced instances) and services.k8s.aws objects (ACK-managed AWS resources), all scoped to the kro-system namespace:

tee "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap/velero-kro-ack-backup.yaml" << EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: kro-ack-backup
  namespace: velero
spec:
  # Include kro-system namespace where kro instances are created
  includedNamespaces:
    - kro-system
  # Include cluster-scoped kro resources (ResourceGraphDefinitions)
  includedClusterScopedResources:
    - "*.kro.run"
  # Include namespaced kro instances and ACK resources (with status)
  includedNamespaceScopedResources:
    - "*.kro.run"
    - "*.services.k8s.aws"
EOF
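
The backup runs asynchronously. Before switching away from the Kind cluster, it is safer to wait until it reports Completed (an optional addition, following the same kubectl wait pattern used elsewhere in this post):

kubectl wait --for=jsonpath='{.status.phase}'=Completed backup/kro-ack-backup -n velero --timeout=10m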

Migrate Bootstrap Resources to EKS Auto Mode Cluster

EKS logo

At this point the Kind cluster has done its job: the EKS Auto Mode Cluster is running in AWS, the S3 bucket exists, and a Velero backup of all kro and ACK resources is stored in S3. The remaining steps switch context to the new EKS cluster and make it self-managing:

  1. Configure kubectl access to the EKS Auto Mode Cluster
  2. Install kro, ACK controllers, and Velero on the EKS cluster (all with zero replicas to prevent premature reconciliation)
  3. Restore the Velero backup so that kro and ACK resources appear with their existing AWS resource ARNs intact
  4. Scale controllers back up — they adopt existing AWS resources instead of creating duplicates
  5. Delete the Kind bootstrap cluster

Configure Access to EKS Auto Mode Cluster

Update kubeconfig for the new EKS Auto Mode cluster:

export KUBECONFIG="${TMP_DIR}/${CLUSTER_FQDN}/kubeconfig-${CLUSTER_NAME}.conf"
aws eks update-kubeconfig --region "${AWS_DEFAULT_REGION}" --name "${CLUSTER_NAME}" --kubeconfig "${KUBECONFIG}"
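
A quick connectivity check; note that with EKS Auto Mode the node list may be empty until workloads are scheduled, so listing namespaces is the more reliable smoke test:

kubectl get ns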

Install kro on EKS Auto Mode Cluster

Install kro on the EKS Auto Mode Cluster with zero replicas — the same approach used for ACK below. kro’s CRDs are registered but the controller does not reconcile until after the Velero restore completes:

# renovate: datasource=docker depName=registry.k8s.io/kro/charts/kro
KRO_HELM_CHART_VERSION="0.8.5"
helm upgrade --install --namespace kro-system --create-namespace --set deployment.replicaCount=0 --version=${KRO_HELM_CHART_VERSION} kro oci://registry.k8s.io/kro/charts/kro

Install ACK Controllers on EKS Auto Mode Cluster

Install ACK controllers with deployment.replicas: 0 so the controllers install their CRDs but do not start reconciling. This prevents a race condition during the Velero restore: Velero restores CRs in two steps (create without status, then patch /status). If ACK controllers are running during the create step, they see a CR with no ARN in .status.ackResourceMetadata and attempt to create new AWS resources, duplicating ones that already exist. Deploying with zero replicas eliminates this window; the controllers are scaled back up after the restore completes and all status fields are in place:

# renovate: datasource=github-tags depName=aws-controllers-k8s/ack-chart
ACK_HELM_CHART_VERSION="46.75.1"

cat > "${TMP_DIR}/${CLUSTER_FQDN}/helm_values-ack.yml" << EOF
eks:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
ec2:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
iam:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
kms:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
cloudwatchlogs:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
s3:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
EOF
helm upgrade --install --version=${ACK_HELM_CHART_VERSION} --namespace ack-system --create-namespace --values "${TMP_DIR}/${CLUSTER_FQDN}/helm_values-ack.yml" ack oci://public.ecr.aws/aws-controllers-k8s/ack-chart

Install Velero on EKS Auto Mode Cluster

Install the Velero Helm chart with customized default values:

# renovate: datasource=helm depName=velero registryUrl=https://vmware-tanzu.github.io/helm-charts
VELERO_HELM_CHART_VERSION="11.3.2"

helm repo add --force-update vmware-tanzu https://vmware-tanzu.github.io/helm-charts
cat > "${TMP_DIR}/${CLUSTER_FQDN}/helm_values-velero.yml" << EOF
initContainers:
  - name: velero-plugin-for-aws
    # renovate: datasource=docker depName=velero/velero-plugin-for-aws extractVersion=^(?<version>.+)$
    image: velero/velero-plugin-for-aws:v1.13.2
    volumeMounts:
      - mountPath: /target
        name: plugins
upgradeCRDs: false
configuration:
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: ${CLUSTER_FQDN}
      prefix: velero
      config:
        region: ${AWS_DEFAULT_REGION}
snapshotsEnabled: false
EOF
helm upgrade --install --version "${VELERO_HELM_CHART_VERSION}" --namespace velero --create-namespace --wait --values "${TMP_DIR}/${CLUSTER_FQDN}/helm_values-velero.yml" velero vmware-tanzu/velero

Wait for the kro-ack-backup to appear in the Velero backup list (synced from the S3 bucket):

while ! kubectl get backup -n velero kro-ack-backup 2> /dev/null; do
  echo "Waiting for kro-ack-backup to appear..."
  sleep 5
done

Restore kro and ACK Resources to EKS Auto Mode Cluster

Create a Velero restore from the kro-ack-backup:

tee "${TMP_DIR}/${CLUSTER_FQDN}/velero-kro-ack-restore.yaml" << EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: kro-ack-restore
  namespace: velero
spec:
  backupName: kro-ack-backup
  restoreStatus:
    includedResources:
      - "*"
EOF
kubectl wait --for=jsonpath='{.status.phase}'=Completed restore/kro-ack-restore -n velero
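Before scaling the controllers back up, you can spot-check that the restored ACK resources really carry their AWS identifiers in .status, using the S3 Bucket CRs as an example:

# Every restored Bucket CR should already have its ARN in .status.ackResourceMetadata
kubectl get buckets.s3.services.k8s.aws -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,ARN:.status.ackResourceMetadata.arn'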

Scale kro and ACK controllers back up. When the controllers start, every CR already has its ARN in .status.ackResourceMetadata, so they reconcile with existing AWS resources instead of creating duplicates:

kubectl scale deploy -n kro-system kro --replicas=1
for DEPLOY in $(kubectl get deploy -n ack-system -o name); do
  kubectl scale "${DEPLOY}" -n ack-system --replicas=1
done
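Optionally wait for the controllers to finish rolling out before checking the restored resources:

# Block until the kro and ACK controller Deployments report ready replicas
kubectl rollout status deployment/kro -n kro-system
for DEPLOY in $(kubectl get deploy -n ack-system -o name); do
  kubectl rollout status "${DEPLOY}" -n ack-system
done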

Verify the restore. ACK resources have their .status fields intact (containing AWS resource IDs), and kro resources recognize their managed ACK resources:

kubectl get resourcegraphdefinition
for RESOURCE in $(kubectl api-resources --api-group kro.run --no-headers | awk '!/resourcegraphdefinition/{print $1}'); do
  echo -e "\n=== ${RESOURCE} ==="
  kubectl get "${RESOURCE}" -A
done
NAME                      APIVERSION   KIND                      STATE    AGE
eks-auto-mode-cluster     v1alpha1     EksAutoModeCluster        Active   9s
ekscloudwatchloggroup     v1alpha1     EksCloudWatchLogGroup     Active   9s
eksvpc                    v1alpha1     EksVpc                    Active   9s
kmskey                    v1alpha1     KmsKey                    Active   9s
podidentityassociations   v1alpha1     PodIdentityAssociations   Active   9s
s3-velero-bucket          v1alpha1     S3Bucket                  Active   9s

=== eksautomodeclusters ===
NAMESPACE    NAME   STATE    READY   AGE
kro-system   k02    ACTIVE   True    12s

=== ekscloudwatchloggroups ===
NAMESPACE    NAME       STATE    READY   AGE
kro-system   k02-logs   ACTIVE   True    13s

=== eksvpcs ===
NAMESPACE    NAME      STATE    READY   AGE
kro-system   k02-vpc   ACTIVE   True    14s

=== kmskeys ===
NAMESPACE    NAME      STATE    READY   AGE
kro-system   k02-kms   ACTIVE   True    14s

=== podidentityassociations ===
NAMESPACE    NAME                                           CLUSTER   NAMESPACE    SERVICEACCOUNT                  SYNCED   AGE
kro-system   k02-ack-system-ack-cloudwatchlogs-controller   k02       ack-system   ack-cloudwatchlogs-controller   True     15s
kro-system   k02-ack-system-ack-ec2-controller              k02       ack-system   ack-ec2-controller              True     14s
kro-system   k02-ack-system-ack-eks-controller              k02       ack-system   ack-eks-controller              True     14s
kro-system   k02-ack-system-ack-iam-controller              k02       ack-system   ack-iam-controller              True     14s
kro-system   k02-ack-system-ack-kms-controller              k02       ack-system   ack-kms-controller              True     14s
kro-system   k02-ack-system-ack-s3-controller               k02       ack-system   ack-s3-controller               True     14s
kro-system   k02-velero-velero                              k02       velero       velero-server                   True     14s

=== s3buckets ===
NAMESPACE    NAME     STATE    READY   AGE
kro-system   k02-s3   ACTIVE   True    14s

Delete the restore:

kubectl delete restore kro-ack-restore -n velero

The EKS Auto Mode cluster is now managing its own infrastructure through kro and ACK resources that were migrated from the Kind cluster.

Remove the bootstrap Kind cluster:

kind delete cluster --name "kind-${CLUSTER_NAME}-bootstrap"

Cleanup

Define environment variables and workspace paths for cleanup tasks:

export AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-us-east-1}"
export CLUSTER_FQDN="k02.k8s.mylabs.dev"
export CLUSTER_NAME="${CLUSTER_FQDN%%.*}"
export TMP_DIR="${TMP_DIR:-${PWD}/tmp}"
mkdir -pv "${TMP_DIR}/${CLUSTER_FQDN}" "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup"

Create a temporary Kind cluster for the cleanup:

kind create cluster --name "kind-${CLUSTER_NAME}-cleanup" --kubeconfig "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/kubeconfig-kind-${CLUSTER_NAME}-cleanup.yaml"
export KUBECONFIG="${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/kubeconfig-kind-${CLUSTER_NAME}-cleanup.yaml"
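A short check that kubectl now targets the cleanup Kind cluster:

# Should list the single control-plane node of the cleanup Kind cluster
kubectl get nodes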

Install kro using Helm:

# renovate: datasource=docker depName=registry.k8s.io/kro/charts/kro
KRO_HELM_CHART_VERSION="0.8.5"
helm upgrade --install --version=${KRO_HELM_CHART_VERSION} --namespace kro-system --create-namespace --set deployment.replicas=0 kro oci://registry.k8s.io/kro/charts/kro

Create the ack-system namespace and configure AWS credentials for the ACK controllers:

kubectl create namespace ack-system
set +x
kubectl -n ack-system create secret generic aws-credentials --from-literal=credentials="[default]
aws_access_key_id=${AWS_ACCESS_KEY_ID}
aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
aws_session_token=${AWS_SESSION_TOKEN}
aws_role_to_assume=${AWS_ROLE_TO_ASSUME}"
set -x
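To verify the credentials secret was created without printing its contents:

# The secret should exist with a single credentials key
kubectl -n ack-system get secret aws-credentials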

Install ACK controllers with deployment.replicas: 0 — CRDs are registered but controllers stay idle until the restore populates .status fields (same race-condition guard as the main cluster):

# renovate: datasource=github-tags depName=aws-controllers-k8s/ack-chart
ACK_HELM_CHART_VERSION="46.75.1"

cat > "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/helm_values-ack.yml" << EOF
eks:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
ec2:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
iam:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
kms:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
cloudwatchlogs:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
s3:
  enabled: true
  deployment:
    replicas: 0
  aws:
    region: ${AWS_DEFAULT_REGION}
    credentials:
      secretName: aws-credentials
EOF
helm upgrade --install --version=${ACK_HELM_CHART_VERSION} --namespace ack-system --values "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/helm_values-ack.yml" ack oci://public.ecr.aws/aws-controllers-k8s/ack-chart

Install the velero Helm chart, overriding its default values; unlike on the EKS Auto Mode Cluster, Velero on Kind needs static AWS credentials:

# renovate: datasource=helm depName=velero registryUrl=https://vmware-tanzu.github.io/helm-charts
VELERO_HELM_CHART_VERSION="11.3.2"

helm repo add --force-update vmware-tanzu https://vmware-tanzu.github.io/helm-charts
cat > "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/helm_values-velero.yml" << EOF
initContainers:
  - name: velero-plugin-for-aws
    # renovate: datasource=docker depName=velero/velero-plugin-for-aws extractVersion=^(?<version>.+)$
    image: velero/velero-plugin-for-aws:v1.13.2
    volumeMounts:
      - mountPath: /target
        name: plugins
upgradeCRDs: false
configuration:
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: ${CLUSTER_FQDN}
      prefix: velero
      config:
        region: ${AWS_DEFAULT_REGION}
credentials:
  useSecret: true
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=${AWS_ACCESS_KEY_ID}
      aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
      aws_session_token=${AWS_SESSION_TOKEN}
snapshotsEnabled: false
EOF
helm upgrade --install --version "${VELERO_HELM_CHART_VERSION}" --namespace velero --create-namespace --wait --values "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup/helm_values-velero.yml" velero vmware-tanzu/velero

while ! kubectl get backup -n velero kro-ack-backup 2> /dev/null; do
  echo "Waiting for kro-ack-backup to appear..."
  sleep 5
done

Restore kro and ACK resources from the Velero backup:

tee "${TMP_DIR}/${CLUSTER_FQDN}/velero-kro-ack-restore.yaml" << EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: kro-ack-restore
  namespace: velero
spec:
  backupName: kro-ack-backup
  existingResourcePolicy: update
  restoreStatus:
    includedResources:
      - "*"
EOF
kubectl wait --for=jsonpath='{.status.phase}'=Completed restore/kro-ack-restore -n velero
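Because kro and the ACK controllers were already installed by Helm, some restored objects collide with existing ones; existingResourcePolicy: update tells Velero to update them instead of skipping them. The warning and error counters on the finished restore show how that went:

# A Completed restore records how many resources produced warnings or errors
kubectl -n velero get restore kro-ack-restore \
  -o jsonpath='phase={.status.phase} warnings={.status.warnings} errors={.status.errors}{"\n"}'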

Scale kro and ACK controllers back up so they can reconcile the restored resources:

kubectl scale deploy -n kro-system kro --replicas=1
for DEPLOY in $(kubectl get deploy -n ack-system -o name); do
  kubectl scale "${DEPLOY}" -n ack-system --replicas=1
done

Delete the Velero backup and remove the restore:

kubectl apply -n velero -f - << EOF || true
apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: kro-ack-backup-delete
  namespace: velero
spec:
  backupName: kro-ack-backup
EOF

kubectl delete restore kro-ack-restore -n velero
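Velero processes the DeleteBackupRequest asynchronously, removing the backup and its data from the S3 bucket. Its progress can be followed with:

# The request reaches the Processed phase once Velero has deleted the backup data
kubectl get deletebackuprequests -n velero -o custom-columns='NAME:.metadata.name,PHASE:.status.phase'
# Afterwards the backup disappears from the list
kubectl get backups -n velero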

Delete the EKS Auto Mode Cluster kro instance and all its kro-managed AWS resources. First, patch the S3Bucket CR to remove its finalizer — this is needed because a field-ownership conflict between Velero’s restore and kro’s Server-Side Apply prevents kro from cleaning it up automatically, which would cause the delete to hang indefinitely:

# Workaround: after Velero restore, Server-Side Apply field ownership prevents
# KRO from removing its own finalizer from the S3Bucket CR. The finalizer is
# owned by Velero's field manager, so KRO's SSA patch silently fails to remove
# it, causing deletion to hang indefinitely.
kubectl patch s3buckets k02-s3 -n kro-system --type=json -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'

kubectl delete eksautomodeclusters.kro.run -n kro-system "${CLUSTER_NAME}" --timeout=10m || true
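Once the delete completes, it may be worth confirming on the AWS side (and in the cluster) that nothing is left behind:

# The cluster name should no longer be listed
aws eks list-clusters --query "clusters" --output table
# No EksAutoModeCluster kro instances should remain
kubectl get eksautomodeclusters.kro.run -A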

Delete all the Kind clusters:

kind delete cluster --name "kind-${CLUSTER_NAME}-bootstrap"
kind delete cluster --name "kind-${CLUSTER_NAME}-cleanup"

Remove the temporary directories (${TMP_DIR}/${CLUSTER_FQDN} and the Kind bootstrap/cleanup workspaces) and the files generated along the way:

set +e
if [[ -d "${TMP_DIR}/${CLUSTER_FQDN}" ]]; then
  for FILE in \
    "${TMP_DIR}/${CLUSTER_FQDN}"/{helm_values-ack.yml,helm_values-velero.yml,kubeconfig-${CLUSTER_NAME}.conf,velero-kro-ack-restore.yaml} \
    "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap"/{helm_values-ack.yml,helm_values-velero.yml,kro-eks-auto-mode-cluster-rgd.yaml,kro-eks-auto-mode-cluster.yaml,kro-ekscloudwatchloggroup-loggroup-rgd.yaml,kro-eksvpc-rgd.yaml,kro-kmskey-rgd.yaml,kro-podidentityassociations-rgd.yaml,kro-s3bucket-rgd.yaml,"kubeconfig-kind-${CLUSTER_NAME}-bootstrap.yaml",velero-kro-ack-backup.yaml} \
    "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup"/{kubeconfig-kind-${CLUSTER_NAME}-cleanup.yaml,helm_values-ack.yml,helm_values-velero.yml}; do
    if [[ -f "${FILE}" ]]; then
      rm -v "${FILE}"
    else
      echo "❌ File not found: ${FILE}"
    fi
  done
  rmdir "${TMP_DIR}/${CLUSTER_FQDN}" "${TMP_DIR}/kind-${CLUSTER_NAME}-bootstrap" "${TMP_DIR}/kind-${CLUSTER_NAME}-cleanup"
fi
set -e

Enjoy your self-managed EKS cluster with ACK and kro… 😉

This post is licensed under CC BY 4.0 by the author.