Overview

The AWS ELB add-on configures the AWS Load Balancer Controller for use by the cluster.

This add-on is required for routing external traffic to the cluster with Kubernetes Services of type LoadBalancer, such as the sample application created in Step 4.

Supported Providers

This add-on is supported on Managed Kubernetes clusters running on AWS.

Prerequisites

The AWS Workload Identity Add-On must be enabled for the cluster; the load balancer controller relies on it for access to the AWS API.

Step 1: Enable the AWS ELB Add-On

The awsELB add-on has one optional parameter, roleArn. If provided, the controller uses this role to access the AWS API; if no role is provided, the recommended role from AWS is used. The AWS Workload Identity Add-On is leveraged in the configuration of the load balancer controller to set up access for this role.

Using Manifest

Enable AWS ELB with a custom role

YAML
spec:
  ...
  addOns:
    awsELB:
      roleArn: 'arn:aws:iam::999999999999:role/custom-elb-addon-role'
  ...

Enable AWS ELB with the built-in role

YAML
spec:
  ...
  addOns:
    awsELB: {}
  ...

This add-on can be enabled at cluster creation or afterwards.
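
For an existing cluster, one way to apply the updated manifest is with the cpln CLI. A minimal sketch, assuming the cluster manifest is saved locally as cluster.yaml, the CLI is installed and authenticated, and your CLI version accepts a manifest file via -f:

cpln apply -f cluster.yaml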

Using the UI

  1. Navigate to the Control Plane Console: Open Control Plane Console.
  2. Select Your Kubernetes Cluster: In the Control Plane Console, go to Kubernetes in the left sidebar, and click on the cluster you wish to configure.
  3. Enable the Add-on: Choose Add-ons, find the AWS ELB add-on in the list, and toggle it on.
  4. Optional: Enter Role ARN: Select the AWS ELB menu under Add-ons in the center pane and provide the Role ARN the AWS Load Balancer Controller will use to access your AWS account.

Step 2: Subnet Configuration

The AWS Load Balancer Controller needs to know which subnets it is allowed to use for internet-facing and internal load balancer resources for this cluster.

The following tags must be added to the AWS subnets to designate how they can be used:

Private subnet tag:

Key: kubernetes.io/role/internal-elb
Value: 1

Public subnet tag:

Key: kubernetes.io/role/elb
Value: 1
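
As an example, the tags can be applied with the AWS CLI (the subnet IDs below are placeholders for your cluster's private and public subnets):

aws ec2 create-tags --resources subnet-0aaa1111 --tags Key=kubernetes.io/role/internal-elb,Value=1
aws ec2 create-tags --resources subnet-0bbb2222 --tags Key=kubernetes.io/role/elb,Value=1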

Step 3: Verify Controller Logs

After enabling the AWS ELB add-on, check to make sure that the controller is running and that it can access the AWS API using AWS Workload Identity. Connect to the MK8s Cluster using the Kubernetes Dashboard or the kubectl CLI.

Dashboard UI

  1. Select the kube-system namespace from the drop-down menu at the top of the page.
  2. Select Pods in the left menu.
  3. In the center pane, find the aws-load-balancer-controller pod.
  4. Click the three-dot menu on the right side of the pod and choose Logs.

A live log view will open. Inspect the output for any error messages.

kubectl

kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller -f
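
You can also confirm the controller pod is present and Running using the same label selector:

kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller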

If the controller cannot access the AWS API, verify that you have completed Step 1 and that the AWS Workload Identity Add-On is enabled for the cluster.

Additional troubleshooting steps are available in this AWS Knowledge Center article.

Step 4: Create Sample App

Create the following Service and Deployment in your Managed Kubernetes cluster. This example demonstrates creating a workload that is exposed externally.

YAML
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    # Use the AWS Load Balancer Controller rather than the legacy in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # Register pod IPs directly as NLB targets
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # Provision a public (internet-facing) load balancer
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - name: tcp
              containerPort: 80
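
Once applied, you can wait for the Deployment to finish rolling out before checking the Service:

kubectl rollout status deployment/httpbin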

Collect the endpoint

Once the objects are created, check the status of the Service. An endpoint should be created for it automatically and listed under the EXTERNAL-IP column of the output.

kubectl get svc httpbin
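
To capture the endpoint in a shell variable for the test below (assuming the load balancer publishes a hostname; use .ip in the jsonpath instead if an IP address is listed):

endpoint=$(kubectl get svc httpbin -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')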

Test the endpoint

curl http://${endpoint}/headers

Troubleshooting

If the endpoint is not created or is not working correctly, check the describe output of the Service and the logs of the load balancer controller.

kubectl describe svc httpbin
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller -f

Cleanup

kubectl delete svc,deployment httpbin