Create k8s cluster

Before diving into the main content, you need to provision a Kubernetes cluster in AWS.

Set the MY_DOMAIN variable to your domain and the LETSENCRYPT_ENVIRONMENT variable to one of:

  • staging - Let’s Encrypt will issue a testing certificate (not trusted by browsers)

  • production - Let’s Encrypt will issue a valid certificate (use with care)

export MY_DOMAIN=${MY_DOMAIN:-mylabs.dev}  # replace mylabs.dev with your own domain
export LETSENCRYPT_ENVIRONMENT=${LETSENCRYPT_ENVIRONMENT:-staging}
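Everything below assumes both variables are set. A quick sanity check like this (a sketch, not part of the original steps) catches an unset or misspelled LETSENCRYPT_ENVIRONMENT early:

```shell
# Verify LETSENCRYPT_ENVIRONMENT holds one of the two supported values
# (sketch; not part of the original steps)
case "${LETSENCRYPT_ENVIRONMENT}" in
  staging|production) echo "LETSENCRYPT_ENVIRONMENT=${LETSENCRYPT_ENVIRONMENT}" ;;
  *) echo "Error: LETSENCRYPT_ENVIRONMENT must be 'staging' or 'production'" >&2 ;;
esac
```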

Prepare the local working environment


You can skip these steps if you have all the required software already installed.

Install necessary software:

if [ -x /usr/bin/apt ]; then
  apt update -qq
  DEBIAN_FRONTEND=noninteractive apt-get install -y -qq awscli curl git jq openssh-client sudo wget > /dev/null
fi

Install kubectl binary:

if [ ! -x /usr/local/bin/kubectl ]; then
  sudo curl -s -Lo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  sudo chmod a+x /usr/local/bin/kubectl
fi

Install kops:

if [ ! -x /usr/local/bin/kops ]; then
  curl -s -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | jq -r '.tag_name')/kops-linux-amd64
  chmod +x kops-linux-amd64
  sudo mv kops-linux-amd64 /usr/local/bin/kops
fi

Install kn client for Knative:

if [ ! -x /usr/local/bin/kn ]; then
  sudo curl -s -L https://github.com/knative/client/releases/latest/download/kn-linux-amd64 -o /usr/local/bin/kn
  sudo chmod a+x /usr/local/bin/kn
fi

Install hub:

if [ ! -x /usr/local/bin/hub ]; then
  curl -s -L https://github.com/github/hub/releases/download/v2.13.0/hub-linux-amd64-2.13.0.tgz | tar xzf - -C /tmp/
  sudo mv /tmp/hub-linux-amd64-2.13.0/bin/hub /usr/local/bin/
fi
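With all the tools in place, a quick PATH check (a sketch; the tool list is taken from the installation steps above) confirms nothing was skipped:

```shell
# Report which of the CLI tools used in this guide are available on PATH
for TOOL in aws kubectl kops kn hub jq git; do
  if command -v "${TOOL}" > /dev/null; then
    echo "${TOOL}: OK"
  else
    echo "${TOOL}: missing"
  fi
done
```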

Configure AWS

Authenticate to AWS using the AWS CLI:

aws configure

Create DNS zone:

aws route53 create-hosted-zone --name ${MY_DOMAIN} --caller-reference ${MY_DOMAIN}

Use your domain registrar to point the nameservers for your zone to the Amazon Route 53 nameservers. This is how you can find out the Route 53 nameservers for your zone:

aws route53 get-hosted-zone --id $(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${MY_DOMAIN}.\`].Id" --output text) --query "DelegationSet.NameServers"

Create a policy allowing cert-manager to change Route 53 settings. This will let cert-manager generate wildcard SSL certificates issued by the Let's Encrypt certificate authority.

test -d tmp || mkdir tmp
envsubst < files/user_policy.json > tmp/user_policy.json
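The repository's files/user_policy.json is not reproduced here; for orientation, a minimal policy covering the Route 53 permissions that cert-manager's DNS-01 solver needs looks roughly like this (a sketch based on cert-manager's documented example policy, not the repository's exact file):

```shell
# Rough shape of a Route 53 policy for cert-manager DNS-01 challenges
# (based on cert-manager's documented example, not the repository's exact file)
test -d tmp || mkdir tmp
cat > tmp/user_policy_example.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "route53:GetChange", "Resource": "arn:aws:route53:::change/*" },
    { "Effect": "Allow", "Action": "route53:ChangeResourceRecordSets", "Resource": "arn:aws:route53:::hostedzone/*" },
    { "Effect": "Allow", "Action": ["route53:ListHostedZones", "route53:ListHostedZonesByName"], "Resource": "*" }
  ]
}
EOF
```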

aws iam create-policy \
  --policy-name ${USER}-k8s-${MY_DOMAIN} \
  --description "Policy for ${USER}-k8s-${MY_DOMAIN}" \
  --policy-document file://tmp/user_policy.json \
| jq


Create a user that will use the policy above:

aws iam create-user --user-name ${USER}-k8s-${MY_DOMAIN} | jq && \
POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName==\`${USER}-k8s-${MY_DOMAIN}\`].{ARN:Arn}" --output text) && \
aws iam attach-user-policy --user-name "${USER}-k8s-${MY_DOMAIN}" --policy-arn $POLICY_ARN && \
aws iam create-access-key --user-name ${USER}-k8s-${MY_DOMAIN} > $HOME/.aws/${USER}-k8s-${MY_DOMAIN} && \
export USER_AWS_ACCESS_KEY_ID=$(awk -F\" "/AccessKeyId/ { print \$4 }" $HOME/.aws/${USER}-k8s-${MY_DOMAIN}) && \
export USER_AWS_SECRET_ACCESS_KEY=$(awk -F\" "/SecretAccessKey/ { print \$4 }" $HOME/.aws/${USER}-k8s-${MY_DOMAIN})
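The awk commands above split each line on double quotes (-F\"), so field 4 is the JSON value. Run against a sample shaped like the output of `aws iam create-access-key` (the key values here are made up), the extraction works like this:

```shell
# Sample shaped like `aws iam create-access-key` output (values are made up)
cat > /tmp/access-key-sample.json << 'EOF'
{
    "AccessKey": {
        "UserName": "demo-user",
        "AccessKeyId": "AKIAEXAMPLEKEY",
        "SecretAccessKey": "example/secret+value"
    }
}
EOF

# -F\" splits on double quotes: $2 is the key name, $4 is the value
awk -F\" '/AccessKeyId/ { print $4 }' /tmp/access-key-sample.json       # → AKIAEXAMPLEKEY
awk -F\" '/SecretAccessKey/ { print $4 }' /tmp/access-key-sample.json   # → example/secret+value
```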


The AccessKeyId and SecretAccessKey are needed for creating the ClusterIssuer definition for cert-manager.

Create K8s in AWS


Generate SSH keys if they do not exist yet:

test -f $HOME/.ssh/id_rsa || ( install -m 0700 -d $HOME/.ssh && ssh-keygen -b 2048 -t rsa -f $HOME/.ssh/id_rsa -q -N "" )

Clone the k8s-knative-gitlab-harbor Git repository if it wasn't done already:

if [ ! -d .git ]; then
  # adjust the URL if your copy of the repository lives elsewhere
  git clone --quiet https://github.com/ruzickap/k8s-knative-gitlab-harbor && cd k8s-knative-gitlab-harbor
fi

Create the S3 bucket where kops will store the cluster state:

aws s3api create-bucket --bucket ${USER}-kops-k8s --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1 | jq

Create Kubernetes cluster in AWS by using kops:

kops create cluster \
  --name=${USER}-k8s.${MY_DOMAIN} \
  --state=s3://${USER}-kops-k8s \
  --zones=eu-central-1a \
  --networking=amazon-vpc-routed-eni \
  --node-count=5 \
  --node-size=t3.large \
  --node-volume-size=20 \
  --master-count=1 \
  --master-size=t3.small \
  --master-volume-size=10 \
  --dns-zone=${MY_DOMAIN} \
  --cloud-labels "Owner=${USER},Environment=Test,Division=Services" \
  --ssh-public-key $HOME/.ssh/id_rsa.pub \
  --yes

Wait for cluster to be up and running:

sleep 200
while kops validate cluster --state=s3://${USER}-kops-k8s -o yaml 2>&1 | grep -q failures; do sleep 5; echo -n .; done

Store kubeconfig in current directory:

kops export kubecfg ${USER}-k8s.${MY_DOMAIN} --state=s3://${USER}-kops-k8s --kubeconfig kubeconfig.conf

Check if the new Kubernetes cluster is available:

export KUBECONFIG=$PWD/kubeconfig.conf
kubectl get nodes -o wide


If you use the Let's Encrypt staging environment, download its fake root CA certificate and install it on your machine and on all Kubernetes nodes, so Docker will trust the Harbor certificate:

if [ "${LETSENCRYPT_ENVIRONMENT}" = "staging" ]; then
  wget -q -O tmp/fakelerootx1.pem https://letsencrypt.org/certs/fakelerootx1.pem
  sudo mkdir -pv /etc/docker/certs.d/harbor.${MY_DOMAIN}/
  sudo cp tmp/fakelerootx1.pem /etc/docker/certs.d/harbor.${MY_DOMAIN}/ca.crt
  for EXTERNAL_IP in $(kubectl get nodes --output=jsonpath="{.items[*].status.addresses[?(@.type==\"ExternalIP\")].address}"); do
    ssh -q -o StrictHostKeyChecking=no -l admin ${EXTERNAL_IP} \
      "sudo mkdir -p /etc/docker/certs.d/harbor.${MY_DOMAIN}/ && sudo wget -q -O /etc/docker/certs.d/harbor.${MY_DOMAIN}/ca.crt https://letsencrypt.org/certs/fakelerootx1.pem"
  done
  echo "*** Done"
fi