The AWS Load Balancer Controller is a critical component for running Kubernetes workloads on AWS, enabling seamless integration with AWS Elastic Load Balancers (ELB). This comprehensive tutorial will guide you from fundamental concepts to advanced configurations, ensuring you can effectively manage external traffic to your Kubernetes applications.
- Introduction to AWS Load Balancer Controller
  - What is it?
  - How it works
  - Key Concepts (Ingress, Service, ALB, NLB, Target Groups, Annotations)
- Prerequisites
- Installation
  - IAM Permissions (IAM Roles for Service Accounts - IRSA)
  - Using Helm
  - Using YAML (Manual Installation)
- Exposing Applications with ALB (Ingress)
  - Basic Ingress Configuration
  - Understanding ALB Target Types (Instance vs. IP)
  - Configuring SSL/HTTPS (ACM Integration)
  - Advanced Routing (Path-based, Host-based)
  - Weighted Target Groups and Canary Deployments
  - External DNS Integration (Route 53)
- Exposing Applications with NLB (Service Type LoadBalancer)
  - Basic NLB Configuration
  - NLB Target Types
  - NLB with Security Groups
- Advanced Topics
  - TargetGroupBinding
  - Load Balancer Attributes
  - WAF Integration
  - Authentication (Cognito, OIDC)
  - Inbound CIDRs and Security Groups
  - Ingress Groups
- Security Best Practices
- Monitoring and Logging
- Troubleshooting Tips
- Real-World Use Cases
- Summary
- FAQ
The AWS Load Balancer Controller (formerly AWS ALB Ingress Controller) is a Kubernetes controller that manages AWS Elastic Load Balancers (Application Load Balancers - ALB and Network Load Balancers - NLB) for a Kubernetes cluster. It watches for Kubernetes Ingress and Service resources and provisions/configures the corresponding AWS load balancers, target groups, and listeners to route external traffic to your Kubernetes pods.
The controller acts as a bridge between your Kubernetes cluster and AWS ELB. Here's a simplified flow:
- Watch: The controller continuously watches the Kubernetes API server for `Ingress` resources (for ALBs) and `Service` resources of `type: LoadBalancer` (for NLBs).
- Translate: When it detects a new or updated resource, it translates the Kubernetes object's configuration (including annotations) into AWS ELB API calls.
- Provision/Configure: It then provisions new ALBs/NLBs, creates target groups, registers Kubernetes nodes or pod IPs as targets, and sets up listeners and rules on the load balancer, all according to the specifications in your Kubernetes resources.
- Update DNS (optional): If integrated with ExternalDNS, it can also create/update Route 53 records to point to the newly created load balancer.
- Synchronize: The controller ensures that the state of your AWS load balancers remains synchronized with your Kubernetes resources. If you update or delete an Ingress or Service, the controller updates or deletes the corresponding AWS resources.
- Ingress: A Kubernetes API object that manages external access to services in a cluster, typically HTTP. The AWS Load Balancer Controller uses Ingress resources to provision and configure AWS Application Load Balancers (ALBs).
- Service (Type LoadBalancer): A Kubernetes Service of `type: LoadBalancer` creates an external load balancer (in this case, an AWS Network Load Balancer) that exposes your service externally. The AWS Load Balancer Controller provisions and manages NLBs for these Service types.
- Application Load Balancer (ALB): An AWS Layer 7 (HTTP/HTTPS) load balancer. Ideal for complex routing rules, SSL termination, and content-based routing.
- Network Load Balancer (NLB): An AWS Layer 4 (TCP/UDP) load balancer. Provides ultra-high performance and static IP addresses.
- Target Groups: AWS constructs used by ALBs/NLBs to route traffic to registered targets (e.g., EC2 instances or IP addresses). The AWS Load Balancer Controller creates and manages these, registering your Kubernetes nodes or pod IPs.
- Annotations: Key-value pairs attached to Kubernetes resources (Ingress, Service, Pods). The AWS Load Balancer Controller heavily relies on annotations to provide granular control over the AWS resources it provisions. These annotations dictate everything from load balancer scheme (internal/internet-facing) to SSL certificates and security groups.
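As a preview, the snippet below shows the kind of annotated metadata you'll see throughout this tutorial; the annotation keys are real controller annotations, while the resource name and chosen values are illustrative:

```yaml
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: alb                   # hand this Ingress to the controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # or "internal"
    alb.ingress.kubernetes.io/target-type: ip          # register pod IPs directly
```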
Before you begin, ensure you have the following:
- An active AWS account.
- An Amazon EKS cluster (recommended) or a self-managed Kubernetes cluster running on EC2 instances. This tutorial assumes an EKS cluster for simplicity.
- `kubectl` configured to connect to your Kubernetes cluster.
- `helm` v3 installed (for the Helm installation method).
- AWS CLI installed and configured with appropriate credentials.
- IAM OIDC Provider associated with your EKS cluster: This is crucial for using IAM Roles for Service Accounts (IRSA), the recommended way to grant permissions to the controller. If you don't have one, create it with the command below (a verification sketch follows this list):

  ```bash
  # Replace <cluster-name> and <region>
  eksctl utils associate-iam-oidc-provider --cluster=<cluster-name> --region=<region> --approve
  ```

- VPC CNI Plugin: Ensure your EKS cluster has the Amazon VPC CNI plugin installed and configured. This is essential, especially for the `ip` target type.
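To confirm the OIDC provider association referenced above, compare the cluster's issuer URL against the providers registered in your account (both are standard AWS CLI calls):

```bash
# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name <cluster-name> \
  --query "cluster.identity.oidc.issuer" --output text

# List registered OIDC providers; one should match the issuer's ID
aws iam list-open-id-connect-providers
```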
The AWS Load Balancer Controller requires specific IAM permissions to interact with AWS APIs. The recommended way to grant these permissions is using IAM Roles for Service Accounts (IRSA).
1. Download the IAM Policy: The AWS Load Balancer Controller requires a specific IAM policy with permissions to create and manage ALBs, NLBs, Target Groups, Listeners, etc. Download the latest policy from the official GitHub repository:

   ```bash
   # For a general AWS region (e.g., us-east-1, ap-northeast-1)
   curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
   ```

2. Create an IAM Policy in AWS:

   ```bash
   aws iam create-policy \
     --policy-name AWSLoadBalancerControllerIAMPolicy \
     --policy-document file://iam_policy.json
   ```

   Note down the `Policy ARN` from the output; you'll need it in the next step.

3. Create an IAM Service Account for the Controller: This step creates a Kubernetes Service Account and links it to an IAM Role, allowing the pods running the controller to assume this role.

   ```bash
   # Replace the placeholder values with your own
   export CLUSTER_NAME="your-eks-cluster-name"
   export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   export AWS_REGION="your-aws-region"  # e.g., ap-northeast-1

   eksctl create iamserviceaccount \
     --cluster=$CLUSTER_NAME \
     --namespace=kube-system \
     --name=aws-load-balancer-controller \
     --attach-policy-arn=arn:aws:iam::$AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
     --override-existing-serviceaccounts \
     --region $AWS_REGION \
     --approve
   ```

   This command uses `eksctl` to simplify the creation of the Service Account and the associated IAM Role, and attaches the policy.
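To check that the Service Account exists and is linked to the role (the `eks.amazonaws.com/role-arn` annotation is what IRSA uses):

```bash
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
# Expect an annotation like:
#   eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/eksctl-...
```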
Helm is the easiest way to install and manage the AWS Load Balancer Controller.
1. Add the EKS Helm Repository:

   ```bash
   helm repo add eks https://aws.github.io/eks-charts
   helm repo update
   ```

2. Install the Controller: Use the `clusterName` of your cluster and the Service Account created in the previous step.

   ```bash
   helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
     -n kube-system \
     --set clusterName=$CLUSTER_NAME \
     --set serviceAccount.create=false \
     --set serviceAccount.name=aws-load-balancer-controller
   ```
3. Verify the Installation:

   ```bash
   kubectl get deployment -n kube-system aws-load-balancer-controller
   kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
   ```

   You should see the deployment and pods running.
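Helm's standard `--version` flag lets you pin the chart version at install time, and `helm upgrade` applies updates later (the version number below is only an example):

```bash
# Pin a specific chart version at install time
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system --version 1.7.2 \
  --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# Later, pick up a newer chart while keeping your values
helm repo update
helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system --reuse-values
```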
While Helm is preferred, you can also install the controller manually using YAML manifests. This method is more involved and generally only recommended if you have specific customization needs not met by Helm.
1. Download the manifests: Go to the official GitHub repository's releases page (e.g., https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases) and download the `v2.x.x_full.yaml` file for the desired version.

2. Modify the Manifests: Manually edit the `Deployment` manifest within the downloaded YAML to specify your `cluster-name` and the Service Account created earlier. Look for the `Deployment` resource (`kind: Deployment`) and modify the `args` section under the `containers` field to include `--cluster-name=<your-cluster-name>`. Also, ensure the `serviceAccountName` field is set to `aws-load-balancer-controller`.

   ```yaml
   # Example snippet from the deployment.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: aws-load-balancer-controller
     namespace: kube-system
     labels:
       app.kubernetes.io/name: aws-load-balancer-controller
   spec:
     replicas: 1
     selector:
       matchLabels:
         app.kubernetes.io/name: aws-load-balancer-controller
     template:
       metadata:
         labels:
           app.kubernetes.io/name: aws-load-balancer-controller
       spec:
         serviceAccountName: aws-load-balancer-controller  # Ensure this matches your SA
         containers:
           - name: controller
             image: public.ecr.aws/eks/aws-load-balancer-controller:v2.x.x  # Use the correct version
             args:
               - --cluster-name=<your-cluster-name>  # IMPORTANT: Replace with your cluster name
               - --enable-endpoint-slices
               # ... other arguments
   ```
3. Apply the Manifests:

   ```bash
   kubectl apply -f v2.x.x_full.yaml
   ```
4. Verify Installation: Same as with Helm, check the deployment and pods.
The AWS Load Balancer Controller primarily uses Kubernetes `Ingress` resources to provision Application Load Balancers.
Let's deploy a simple Nginx application and expose it via an ALB.
1. Deploy a Sample Application (Nginx):

   ```yaml
   # deployment.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     labels:
       app: nginx
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
           - name: nginx
             image: nginx:latest
             ports:
               - containerPort: 80
   ---
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-service
     labels:
       app: nginx
   spec:
     selector:
       app: nginx
     ports:
       - protocol: TCP
         port: 80
         targetPort: 80
     type: NodePort  # Or ClusterIP if using the IP target type
   ```

   Apply these:

   ```bash
   kubectl apply -f deployment.yaml -f service.yaml
   ```
2. Create an Ingress Resource:

   ```yaml
   # ingress-alb-basic.yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: basic-nginx-ingress
     annotations:
       kubernetes.io/ingress.class: alb
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/target-type: ip  # Recommended for EKS/Fargate
     labels:
       app: nginx
   spec:
     rules:
       - http:
           paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: nginx-service
                   port:
                     number: 80
   ```

   Apply this:

   ```bash
   kubectl apply -f ingress-alb-basic.yaml
   ```
Explanation of Annotations:

- `kubernetes.io/ingress.class: alb`: The crucial annotation that tells the AWS Load Balancer Controller to handle this Ingress.
- `alb.ingress.kubernetes.io/scheme: internet-facing`: Specifies that the ALB should be accessible from the internet. Use `internal` for an internal ALB.
- `alb.ingress.kubernetes.io/target-type: ip`: Tells the ALB to register individual pod IPs as targets. This is generally recommended for EKS, especially with the Amazon VPC CNI. The alternative is `instance`, which registers the EC2 instances (nodes) as targets and requires NodePort services.
3. Verify ALB Creation: Wait a few minutes for the ALB to provision, then check the Ingress status:

   ```bash
   kubectl get ingress basic-nginx-ingress
   ```

   You should see an `ADDRESS` populated with the ALB DNS name. You can also verify in the AWS console under EC2 -> Load Balancers.
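A quick smoke test; the jsonpath expression reads the hostname the controller writes back to the Ingress status:

```bash
ALB_DNS=$(kubectl get ingress basic-nginx-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://${ALB_DNS}/"  # Expect HTTP 200 from nginx once targets are healthy
```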
- `instance` (Default):
  - How it works: The ALB registers the Kubernetes worker nodes as targets. Traffic hits the node, and `kube-proxy` then routes it to the correct pod via NodePort.
  - Pros: Simpler network configuration; works with any CNI.
  - Cons: Extra hop (NodePort), potential for higher latency, less granular health checks (the health check hits the NodePort, not the pod directly).
  - When to use: If your CNI doesn't support direct pod IP routing, or for legacy reasons. Your Kubernetes Service must be of `type: NodePort` or `LoadBalancer`.
- `ip` (Recommended for EKS):
  - How it works: The ALB registers the individual pod IPs as targets, routing traffic directly to the pod. Requires the Amazon VPC CNI plugin.
  - Pros: Direct routing to pods (single hop), lower latency, more granular health checks (directly on the pod), required for Fargate profiles.
  - Cons: Requires a specific CNI plugin (Amazon VPC CNI).
  - When to use: Almost always for EKS, especially with Fargate. Your Kubernetes Service can be `type: ClusterIP`.
The AWS Load Balancer Controller seamlessly integrates with AWS Certificate Manager (ACM) for SSL/TLS termination.
1. Request or Import an ACM Certificate: Ensure you have a valid SSL certificate in AWS Certificate Manager (ACM) for your domain (e.g., yourdomain.com). Note its ARN.

2. Update Ingress for HTTPS:

   ```yaml
   # ingress-alb-https.yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: https-nginx-ingress
     annotations:
       kubernetes.io/ingress.class: alb
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/target-type: ip
       alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>  # REPLACE with your ACM ARN
       alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
       alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08  # Recommended SSL policy
       alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
   spec:
     rules:
       - host: yourdomain.com  # REPLACE with your domain
         http:
           paths:
             # Wire the ssl-redirect action into the listener; the backend service
             # name must match the actions.<name> annotation suffix.
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: ssl-redirect
                   port:
                     name: use-annotation
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: nginx-service
                   port:
                     number: 80
   ```

   Apply this and update your DNS to point yourdomain.com to the ALB DNS name. Requests to `http://yourdomain.com` will now be redirected to `https://yourdomain.com`, and HTTPS traffic will be terminated at the ALB using your ACM certificate. (On controller v2.2+, the single annotation `alb.ingress.kubernetes.io/ssl-redirect: '443'` achieves the same redirect without a custom action.)
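To confirm the redirect and TLS termination, standard curl/openssl checks work:

```bash
curl -sSI http://yourdomain.com/ | head -n 3    # Expect a 301 redirect to HTTPS
curl -sSI https://yourdomain.com/ | head -n 3   # Expect a 200 once targets are healthy
# Inspect the certificate the ALB presents
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```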
ALBs excel at advanced routing.
1. Path-based Routing: Route traffic to different services based on URL paths. (Note: with `pathType: Prefix`, paths must not contain `*`; `/app1` already matches `/app1/...`.)

   ```yaml
   # ingress-alb-path-based.yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: path-based-ingress
     annotations:
       kubernetes.io/ingress.class: alb
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/target-type: ip
   spec:
     rules:
       - http:
           paths:
             - path: /app1
               pathType: Prefix
               backend:
                 service:
                   name: service-app1  # Your service for app1
                   port:
                     number: 80
             - path: /app2
               pathType: Prefix
               backend:
                 service:
                   name: service-app2  # Your service for app2
                   port:
                     number: 80
             - path: /  # Default path
               pathType: Prefix
               backend:
                 service:
                   name: nginx-service
                   port:
                     number: 80
   ```
2. Host-based Routing: Route traffic to different services based on the hostname.

   ```yaml
   # ingress-alb-host-based.yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: host-based-ingress
     annotations:
       kubernetes.io/ingress.class: alb
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/target-type: ip
   spec:
     rules:
       - host: app1.yourdomain.com  # REPLACE with your domain
         http:
           paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: service-app1
                   port:
                     number: 80
       - host: app2.yourdomain.com  # REPLACE with your domain
         http:
           paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: service-app2
                   port:
                     number: 80
   ```
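Before DNS records exist, you can exercise host-based rules directly against the ALB by overriding the Host header:

```bash
ALB_DNS=$(kubectl get ingress host-based-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -sI -H "Host: app1.yourdomain.com" "http://${ALB_DNS}/"  # Routed to service-app1
curl -sI -H "Host: app2.yourdomain.com" "http://${ALB_DNS}/"  # Routed to service-app2
```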
ALBs support weighted target groups, allowing you to split traffic between different backends. This is excellent for canary deployments or A/B testing.
```yaml
# ingress-alb-canary.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-deployment-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/actions.canary-rule: '{"Type":"forward","ForwardConfig":{"TargetGroups":[{"ServiceName":"nginx-service-v1","ServicePort":"80","Weight":90},{"ServiceName":"nginx-service-v2","ServicePort":"80","Weight":10}]}}'
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: canary-rule  # Must match the actions.<name> annotation suffix
                port:
                  name: use-annotation
```
In this example, `nginx-service-v1` gets 90% of the traffic, and `nginx-service-v2` (your canary) gets 10%. You would define `nginx-service-v1` and `nginx-service-v2` as separate Kubernetes Services pointing to different Deployments, as sketched below.
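A minimal sketch of those two Services, assuming the stable and canary Deployments label their pods with a `version` label (the label scheme is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-v1
spec:
  selector:
    app: nginx
    version: v1  # assumed label on the stable deployment's pods
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-v2
spec:
  selector:
    app: nginx
    version: v2  # assumed label on the canary deployment's pods
  ports:
    - port: 80
      targetPort: 80
```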
While not part of the AWS Load Balancer Controller itself, `external-dns` is often used alongside it to automatically create and manage Route 53 records for your load balancers.
1. Install ExternalDNS: Refer to the `external-dns` documentation for installation.

2. Annotate your Ingress: Add the `external-dns.alpha.kubernetes.io/hostname` annotation to your Ingress.

   ```yaml
   # ingress-with-external-dns.yaml
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: nginx-ingress-dns
     annotations:
       kubernetes.io/ingress.class: alb
       alb.ingress.kubernetes.io/scheme: internet-facing
       alb.ingress.kubernetes.io/target-type: ip
       external-dns.alpha.kubernetes.io/hostname: yourdomain.com  # REPLACE with your domain
   spec:
     rules:
       - host: yourdomain.com
         http:
           paths:
             - path: /
               pathType: Prefix
               backend:
                 service:
                   name: nginx-service
                   port:
                     number: 80
   ```

   `external-dns` will detect this annotation and create an A record in Route 53 pointing yourdomain.com to the ALB's DNS name.
The AWS Load Balancer Controller also supports provisioning Network Load Balancers (NLBs) for Kubernetes Services of `type: LoadBalancer`.
1. Create a Service of Type LoadBalancer:

   ```yaml
   # service-nlb-basic.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-nlb-service
     annotations:
       service.beta.kubernetes.io/aws-load-balancer-type: external  # explicitly hand the NLB to the controller
       # service.beta.kubernetes.io/aws-load-balancer-internal: "true"  # Uncomment for an internal NLB
       # service.beta.kubernetes.io/aws-load-balancer-target-type: ip   # Uncomment for the IP target type
     labels:
       app: nginx
   spec:
     selector:
       app: nginx
     ports:
       - protocol: TCP
         port: 80
         targetPort: 80
     type: LoadBalancer
   ```

   Apply this:

   ```bash
   kubectl apply -f service-nlb-basic.yaml
   ```
2. Verify NLB Creation: After a few moments, check the service status:

   ```bash
   kubectl get svc nginx-nlb-service
   ```

   The `EXTERNAL-IP` column will be populated with the NLB's DNS name (NLBs provide static IPs per availability zone, but kubectl reports the DNS name). You can also verify in the AWS console under EC2 -> Load Balancers.
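Because an NLB exposes a static IP per enabled availability zone, resolving its DNS name shows them:

```bash
NLB_DNS=$(kubectl get svc nginx-nlb-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
dig +short "$NLB_DNS"         # One static IP per enabled subnet/AZ
curl -I "http://${NLB_DNS}/"
```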
Similar to ALBs, NLBs support `instance` and `ip` target types.

- `instance` (Default for NLB Services):
  - Registers nodes as targets.
  - Traffic goes to the node's NodePort and then to the pod.
- `ip`:
  - Registers pod IPs as targets.
  - Requires the `service.beta.kubernetes.io/aws-load-balancer-target-type: ip` annotation.
  - Essential for Fargate.
As of AWS Load Balancer Controller v2.6+, NLBs support security groups.
```yaml
# service-nlb-sg.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb-sg-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0xxxxxxxxxxxxxxx,sg-0yyyyyyyyyyyyyyy  # REPLACE with your Security Group IDs
    # Automatically manage backend security group rules (allowing the NLB to reach pods/nodes)
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
The AWS Load Balancer Controller offers a rich set of annotations for fine-grained control.
`TargetGroupBinding` is a custom resource (CRD) that allows you to register Kubernetes pods with an existing, pre-created AWS Target Group. This is useful when you need to integrate existing AWS infrastructure or have external services that must point to your Kubernetes pods.
1. Define a TargetGroupBinding:

   ```yaml
   # targetgroupbinding.yaml
   apiVersion: elbv2.k8s.aws/v1beta1
   kind: TargetGroupBinding
   metadata:
     name: my-tg-binding
   spec:
     serviceRef:
       name: nginx-service  # The Kubernetes Service whose pods will be registered
       port: 80
     targetGroupARN: arn:aws:elasticloadbalancing:<region>:<account-id>:targetgroup/my-existing-tg/xxxxxxxxxxxxxx  # REPLACE with your existing Target Group ARN
     # Optional: specify the target type
     # targetType: ip
   ```

   Apply this. The controller will then register the pods backing `nginx-service` with the specified `targetGroupARN`. (Health checks are configured on the Target Group itself, not on the binding.)
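You can confirm the binding and watch targets register; `describe-target-health` is a standard elbv2 API call (substitute your Target Group ARN):

```bash
kubectl get targetgroupbindings -A
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:<region>:<account-id>:targetgroup/my-existing-tg/xxxxxxxxxxxxxx
```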
You can configure various ALB/NLB attributes using annotations; multiple attributes can be combined in a single annotation value, as shown after this list.

- Idle Timeout: `alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300`
- Deletion Protection: `alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true`
- Access Logs: `alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-alb-logs-bucket,access_logs.s3.prefix=nginx`
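Because these settings all share one annotation key, they are comma-separated in a single value; a sketch (the bucket name is a placeholder):

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300,deletion_protection.enabled=true,access_logs.s3.enabled=true,access_logs.s3.bucket=my-alb-logs-bucket,access_logs.s3.prefix=nginx
```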
Integrate your ALB with AWS WAF for web application firewall protection.
```yaml
alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:<region>:<account-id>:webacl/my-web-acl/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # REPLACE with your WAF ACL ARN
```
ALBs can authenticate users before forwarding requests to your services using Amazon Cognito or OpenID Connect (OIDC). On controller v2 this is configured with the `auth-*` annotations and requires an HTTPS listener; the user pool values below are placeholders to replace with your own.

```yaml
# Ingress with Cognito Authentication
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>  # REPLACE
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/auth-type: cognito
    alb.ingress.kubernetes.io/auth-idp-cognito: '{"userPoolARN":"arn:aws:cognito-idp:<region>:<account-id>:userpool/...","userPoolClientID":"...","userPoolDomain":"..."}'  # REPLACE
    alb.ingress.kubernetes.io/auth-scope: 'openid profile email'
    alb.ingress.kubernetes.io/auth-session-cookie: AWSCognitoAuthCookie
    alb.ingress.kubernetes.io/auth-session-timeout: '3600'
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```
Control network access to your load balancer.
- Restrict by CIDR: `alb.ingress.kubernetes.io/inbound-cidrs: 192.168.1.0/24,10.0.0.0/16`
- Specify existing Security Groups: `alb.ingress.kubernetes.io/security-groups: sg-0abcdef1234567890,my-existing-sg-name` (use IDs or names; names are resolved via tags)
Group multiple Ingress resources under a single ALB. This is useful for managing multiple applications that share the same external hostname or need to be exposed through a single ALB.
```yaml
# Ingress 1
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-group-app1
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: my-shared-alb-group  # All Ingresses with this name share an ALB
    alb.ingress.kubernetes.io/group.order: "10"  # Order matters for rule precedence
  labels:
    app: app1
spec:
  rules:
    - host: myapp.yourdomain.com
      http:
        paths:
          - path: /app1  # Prefix paths must not contain "*"
            pathType: Prefix
            backend:
              service:
                name: service-app1
                port:
                  number: 80
---
# Ingress 2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-group-app2
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: my-shared-alb-group
    alb.ingress.kubernetes.io/group.order: "20"
  labels:
    app: app2
spec:
  rules:
    - host: myapp.yourdomain.com
      http:
        paths:
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: service-app2
                port:
                  number: 80
```
This creates a single ALB for `myapp.yourdomain.com`, with rules merged from both `ingress-group-app1` and `ingress-group-app2`.
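Once reconciled, both Ingresses should report the same address:

```bash
kubectl get ingress ingress-group-app1 ingress-group-app2
# Both rows should show the same ADDRESS (the shared ALB's DNS name)
```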
- IAM Roles for Service Accounts (IRSA): Always use IRSA instead of attaching IAM policies directly to EC2 instance roles. This limits the blast radius if a pod is compromised.
- Least Privilege: Customize the IAM policy for the controller to grant only the necessary permissions. The default policy is quite broad; for example, scope down `ec2:AuthorizeSecurityGroupIngress` and `ec2:RevokeSecurityGroupIngress` based on VPC ID or cluster-name resource tags.
- Private Load Balancers: Use `alb.ingress.kubernetes.io/scheme: internal` for internal applications to prevent exposure to the public internet.
- Security Groups: Explicitly define `alb.ingress.kubernetes.io/security-groups` and `alb.ingress.kubernetes.io/inbound-cidrs` to restrict access to your load balancers.
- SSL Policy: Always specify a strong SSL policy (e.g., `ELBSecurityPolicy-TLS13-1-2-2021-06` or `ELBSecurityPolicy-2016-08`) using `alb.ingress.kubernetes.io/ssl-policy`.
. - WAF Integration: For public-facing applications, enable AWS WAF integration.
- Regular Updates: Keep the AWS Load Balancer Controller updated to the latest stable version to benefit from security patches and new features.
- Controller Logs: Check the controller's logs for errors or events:
  ```bash
  kubectl logs -f -n kube-system deploy/aws-load-balancer-controller
  ```
- CloudWatch Metrics: ALBs and NLBs automatically push metrics to Amazon CloudWatch. You can monitor request counts, latency, healthy/unhealthy host counts, and more.
- Access Logs: Enable ALB access logs to S3 (using `alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,...`) for detailed HTTP request information.
- CloudTrail: All API calls made by the AWS Load Balancer Controller to AWS (e.g., creating ALBs, modifying security groups) are logged in AWS CloudTrail, providing an audit trail.
- Prometheus: The controller exposes Prometheus metrics. You can scrape these metrics and visualize them in Grafana for insights into controller operations (see the sketch after this list).
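A quick way to eyeball those metrics locally, assuming the controller's default metrics bind address of `:8080` (configurable via its `--metrics-bind-addr` flag):

```bash
kubectl port-forward -n kube-system deploy/aws-load-balancer-controller 8080:8080 &
curl -s http://localhost:8080/metrics | head
```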
- Check Controller Logs: This is the first place to look; error messages usually pinpoint the issue (e.g., insufficient permissions, invalid annotations, AWS API errors). A combined diagnostic sketch follows this list.
- Verify IAM Permissions: Ensure the `aws-load-balancer-controller` Service Account has the correct IAM policy attached and that the IAM OIDC provider is correctly configured.
- Inspect Ingress/Service Events: Look for "Events" at the bottom of the output for messages from the controller:
  ```bash
  kubectl describe ingress <ingress-name>
  kubectl describe svc <service-name>
  ```
- Check AWS Console:
  - Verify that the ALB/NLB, Target Groups, Listeners, and Security Groups were created as expected.
  - Check the target group health status. Are your nodes/pods registered and healthy?
- Pod Network Connectivity: If using `target-type: ip`, ensure your CNI (Amazon VPC CNI) is working correctly and pods have routable IP addresses.
- Annotations Syntax: Double-check your annotations for typos or incorrect values. Refer to the official AWS Load Balancer Controller documentation for the complete list and syntax.
- Security Group Rules: Ensure the security groups associated with your ALB/NLB allow inbound traffic on the expected ports, and that your worker node/pod security groups allow inbound traffic from the load balancer's security group.
- Resource Limits: If your cluster is under heavy load or resource constraints, the controller pods might not have enough resources to operate correctly.
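A small diagnostic bundle combining the checks above (names in angle brackets are placeholders):

```bash
# Recent controller activity
kubectl logs -n kube-system deploy/aws-load-balancer-controller --tail=50
# Controller events recorded on the Ingress
kubectl describe ingress <ingress-name> | sed -n '/Events:/,$p'
# Target groups present in this account/region
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].{Name:TargetGroupName,Arn:TargetGroupArn}' --output table
```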
- Microservices Exposure: Exposing individual microservices with dedicated ALBs or shared ALBs using path/host-based routing.
- Canary Deployments: Gradually rolling out new versions of applications by shifting a small percentage of traffic to the new version using weighted target groups.
- A/B Testing: Directing specific user segments or percentages of traffic to different application versions for testing.
- Blue/Green Deployments: Creating a completely new environment (Blue) and switching traffic over from the old (Green) once validated.
- Internal Application Access: Provisioning internal ALBs/NLBs for applications only accessible within your VPC.
- Hybrid Cloud Scenarios: Using NLBs with PrivateLink to expose Kubernetes services to on-premises data centers.
- Secure Public Applications: Integrating with AWS WAF, ACM for SSL/TLS, and fine-grained security group controls for robust security.
The AWS Load Balancer Controller is an indispensable tool for running Kubernetes on AWS. It automates the provisioning and management of Application Load Balancers and Network Load Balancers, greatly simplifying the process of exposing your applications to external traffic. By leveraging Kubernetes Ingress and Service resources, combined with powerful annotations, you gain fine-grained control over your load balancing infrastructure, enabling advanced routing, SSL management, security, and integration with other AWS services like ACM, WAF, and Route 53. Understanding its core concepts, installation methods, and extensive annotation capabilities is key to building scalable, resilient, and secure Kubernetes applications on AWS.
Q1: Can I use the AWS Load Balancer Controller with self-managed Kubernetes on EC2, not just EKS?
A1: Yes, absolutely. While commonly used with EKS, the controller can be deployed on any Kubernetes cluster running on AWS EC2 instances, provided you have the necessary IAM permissions configured and a compatible CNI plugin (like the Amazon VPC CNI for the `ip` target type).
Q2: What's the difference between `alb.ingress.kubernetes.io/target-type: instance` and `ip`? Which should I use?
A2: `instance` registers your Kubernetes nodes as targets, and traffic routes through NodePorts. `ip` registers individual pod IPs as targets, routing traffic directly to the pod. For EKS, `ip` is generally recommended: it provides direct routing, lower latency, and is required for Fargate. Use `instance` if your CNI doesn't support direct pod IP routing or if you have specific network requirements.
Q3: How do I manage SSL certificates with the AWS Load Balancer Controller?
A3: The controller integrates with AWS Certificate Manager (ACM). You simply provide the ARN of your ACM certificate in the `alb.ingress.kubernetes.io/certificate-arn` annotation on your Ingress resource. The controller handles SSL termination at the ALB.
Q4: Can I use one ALB for multiple Kubernetes Ingresses?
A4: Yes. Using the `alb.ingress.kubernetes.io/group.name` annotation, you can group multiple Ingress resources. The controller creates a single ALB for all Ingresses within that group and merges their rules onto it. This is useful for sharing a single hostname across multiple microservices.
Q5: What happens if I delete an Ingress or Service that was managed by the controller?
A5: The AWS Load Balancer Controller automatically de-provisions the associated AWS resources it created (ALB/NLB, Target Groups, Listeners, Rules, Security Groups), ensuring proper cleanup of your AWS environment.
Q6: How do I troubleshoot if my ALB isn't provisioning or routing traffic correctly?
A6: Start by checking the controller's logs (`kubectl logs -f -n kube-system deploy/aws-load-balancer-controller`). Then examine the events on your Ingress or Service (`kubectl describe ingress <name>`). Finally, verify the state of resources directly in the AWS EC2 console (Load Balancers, Target Groups, Security Groups). Incorrect IAM permissions or annotation syntax are common culprits.
Q7: Can I use existing AWS Load Balancers with the controller?
A7: The AWS Load Balancer Controller primarily provisions new load balancers. However, with `TargetGroupBinding`, you can register Kubernetes pods with existing AWS Target Groups. This lets you integrate your Kubernetes services into pre-existing load balancing infrastructure or use external load balancers not managed by the controller.
Q8: Does the AWS Load Balancer Controller support IPv6?
A8: Yes, the controller supports IPv6 for ALBs and NLBs. You would typically enable dual-stack (or IPv6-only) VPCs and configure the load balancer accordingly via annotations; for ALBs, use `alb.ingress.kubernetes.io/ip-address-type: dualstack`.
Q9: How do I handle sticky sessions with the AWS Load Balancer Controller?
A9: For ALBs, sticky sessions (session affinity) can be enabled using the `alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.type=lb_cookie,stickiness.duration_seconds=600` annotation. Remember that `target-type: ip` is required for sticky sessions to work reliably with ALBs, since with `instance` targets kube-proxy can still spread requests across the pods behind a node.