Kubernetes: Istio Locality Based Load Balancing

Cluster

Create a regional GKE cluster with three zones (one node per zone).

List the nodes along with their region and zone labels. We need this to understand which node belongs to which zone. (Note: on Kubernetes 1.17+ the failure-domain.beta.kubernetes.io labels are deprecated in favor of topology.kubernetes.io/region and topology.kubernetes.io/zone.)

kubectl get nodes --label-columns failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone
NAME STATUS ROLES AGE VERSION REGION ZONE
gke-itsmetommy-default-pool-5ba92622-dbgg Ready <none> 11m v1.16.8-gke.15 us-west1 us-west1-c
gke-itsmetommy-default-pool-65c4a9e0-g4l0 Ready <none> 11m v1.16.8-gke.15 us-west1 us-west1-b
gke-itsmetommy-default-pool-fee8f05c-3ghq Ready <none> 11m v1.16.8-gke.15 us-west1 us-west1-a

We can see that each node is located in a separate zone.

Istio

Note: I’ve observed that you must install 1.5.2 or above.

Install Istio.

curl -L https://istio.io/downloadIstio | sh -
cd istio-1.6.0
export PATH=$PWD/bin:$PATH
istioctl verify-install
istioctl manifest apply --set profile=default

Create namespace locality.

kubectl create ns locality

Enable Istio sidecar injection on namespace locality. Pods created after this point will automatically get the Envoy sidecar.

kubectl label namespace locality istio-injection=enabled

Backend

Create backend:v1, backend:v2, backend:v3 folders.

{
  mkdir -p backend-v1/public-html
  mkdir -p backend-v2/public-html
  mkdir -p backend-v3/public-html
}

Create backend:v1, backend:v2, backend:v3 Dockerfiles.

cat <<EOF > backend-v1/Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/
EXPOSE 80
EOF
cat <<EOF > backend-v2/Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/
EXPOSE 80
EOF
cat <<EOF > backend-v3/Dockerfile
FROM httpd
COPY ./public-html/ /usr/local/apache2/htdocs/
EXPOSE 80
EOF

Create backend:v1, backend:v2, backend:v3 index.html.

{
  echo "Backend Version 1 us-west1-a" > backend-v1/public-html/index.html
  echo "Backend Version 2 us-west1-b" > backend-v2/public-html/index.html
  echo "Backend Version 3 us-west1-c" > backend-v3/public-html/index.html
}

Build backend:v1, backend:v2, backend:v3 images.

{
  docker build -t itsmetommy/backend:v1 backend-v1
  docker build -t itsmetommy/backend:v2 backend-v2
  docker build -t itsmetommy/backend:v3 backend-v3
}

Push backend:v1, backend:v2, backend:v3 images to Docker Hub.

{
  docker push itsmetommy/backend:v1
  docker push itsmetommy/backend:v2
  docker push itsmetommy/backend:v3
}

Deploy backend:v1, backend:v2, backend:v3 per zone.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v1
  namespace: locality
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      containers:
      - image: itsmetommy/backend:v1
        imagePullPolicy: IfNotPresent
        name: backend
        ports:
        - containerPort: 80
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-west1-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v2
  namespace: locality
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v2
  template:
    metadata:
      labels:
        app: backend
        version: v2
    spec:
      containers:
      - image: itsmetommy/backend:v2
        imagePullPolicy: IfNotPresent
        name: backend
        ports:
        - containerPort: 80
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-west1-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v3
  namespace: locality
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v3
  template:
    metadata:
      labels:
        app: backend
        version: v3
    spec:
      containers:
      - image: itsmetommy/backend:v3
        imagePullPolicy: IfNotPresent
        name: backend
        ports:
        - containerPort: 80
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-west1-c
EOF

Create backend service.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: locality
  labels:
    app: backend
spec:
  ports:
  - name: http-web
    port: 80
    targetPort: 80
  selector:
    app: backend
EOF

Verify that each pod is running on a separate node, which at this point means that they are running in different zones.

kubectl get pods -l app=backend -n locality -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend-v1-699dcf4487-qmkqq 2/2 Running 0 24s 10.1.16.4 gke-itsmetommy-default-pool-0e8a1c3c-f5w4 <none> <none>
backend-v2-5879564fcb-8mwxg 2/2 Running 0 24s 10.1.18.6 gke-itsmetommy-default-pool-43f27618-czfr <none> <none>
backend-v3-64d6bcd449-dgd24 2/2 Running 0 23s 10.1.17.9 gke-itsmetommy-default-pool-64cb61f4-jgpc <none> <none>

Frontend

Create frontend folder.

mkdir frontend

Create frontend Dockerfile.

cat <<EOF > frontend/Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y curl
EOF

Build frontend image.

docker build -t itsmetommy/frontend frontend

Push frontend image to Docker Hub.

docker push itsmetommy/frontend

Create frontend deployment.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: locality
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - image: itsmetommy/frontend
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
        name: frontend
EOF

List the frontend pods. They should be spread across all three nodes.

kubectl get pods -l app=frontend -n locality -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-c59d5b9c5-cb49g 2/2 Running 0 50s 10.1.18.7 gke-itsmetommy-default-pool-43f27618-czfr <none> <none>
frontend-c59d5b9c5-q4bld 2/2 Running 0 50s 10.1.17.10 gke-itsmetommy-default-pool-64cb61f4-jgpc <none> <none>
frontend-c59d5b9c5-rsh55 2/2 Running 0 50s 10.1.16.5 gke-itsmetommy-default-pool-0e8a1c3c-f5w4 <none> <none>

Create a shell variable for each frontend pod.

{
  POD_ZONE_1=$(kubectl get pods -n locality -l app=frontend -o jsonpath="{.items[0].metadata.name}")
  POD_ZONE_2=$(kubectl get pods -n locality -l app=frontend -o jsonpath="{.items[1].metadata.name}")
  POD_ZONE_3=$(kubectl get pods -n locality -l app=frontend -o jsonpath="{.items[2].metadata.name}")
}

Curl the backend from each frontend pod to see which backend responds. You will notice that requests are load balanced across all three backends in a round-robin fashion.

for i in {1..10}
do
  kubectl exec -it $POD_ZONE_1 -c frontend -n locality -- sh -c 'curl http://backend'
done
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_2 -c frontend -n locality -- sh -c 'curl http://backend'
done
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_3 -c frontend -n locality -- sh -c 'curl http://backend'
done
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
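Rather than eyeballing the spread, you can tally responses per zone with a small awk one-liner. This is a sketch over a captured sample; against the cluster you would first redirect the output of the kubectl exec loop into responses.txt (a hypothetical file name).

```shell
# Sample of captured curl responses; the zone is the last field of each line.
cat > responses.txt <<'RESP'
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
RESP

# Count how many responses each zone served.
awk '{count[$NF]++} END {for (z in count) print z, count[z]}' responses.txt | sort
# prints:
# us-west1-a 2
# us-west1-b 1
# us-west1-c 1
```

With round-robin load balancing across three healthy backends, a larger sample should converge toward an even three-way split.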

Locality-Prioritized Load Balancing

Locality-prioritized load balancing is the default behavior for locality load balancing. In this mode, Istio tells Envoy to prioritize traffic to the workload instances most closely matching the locality of the Envoy sending the request. When all instances are healthy, requests remain within the same locality. When instances become unhealthy, traffic spills over to instances in the next prioritized locality. This behavior continues until all localities are receiving traffic.
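The spillover behavior described above can be sketched as a toy model (pure shell, no cluster required; the zone names and the healthy-* marker files are illustrative only, not anything Istio actually creates):

```shell
# Toy model of locality-prioritized failover: prefer the caller's own zone
# (the first argument) while it has a healthy endpoint, otherwise spill over
# to the next locality in priority order.
pick_zone() {
  for z in "$@"; do
    # A marker file stands in for "this zone has healthy endpoints".
    if [ -e "healthy-$z" ]; then
      echo "$z"
      return 0
    fi
  done
  echo "no healthy endpoints"
}

touch healthy-us-west1-a healthy-us-west1-b healthy-us-west1-c
pick_zone us-west1-a us-west1-b us-west1-c   # all healthy -> prints us-west1-a

rm healthy-us-west1-a
pick_zone us-west1-a us-west1-b us-west1-c   # local zone down -> prints us-west1-b

rm healthy-us-west1-b healthy-us-west1-c     # clean up marker files
```

This is only a mental model; in the real system Envoy makes this decision per request based on outlier detection, which is why the DestinationRule below configures it.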

https://istio.io/docs/ops/configuration/traffic-management/locality-load-balancing/#locality-prioritized-load-balancing

Enable locality-prioritized load balancing by creating a VirtualService and a DestinationRule. Note that the DestinationRule must configure outlier detection; Istio disables locality load balancing without it.

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: locality
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
  namespace: locality
spec:
  host: backend
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 7
      interval: 30s
      baseEjectionTime: 30s
EOF

Test each frontend response.

Run a curl test from each pod. You should see that all requests from a given frontend pod go to the single backend in its own zone.

for i in {1..10}
do
  kubectl exec -it $POD_ZONE_1 -c frontend -n locality -- sh -c 'curl  http://backend'
done
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_2 -c frontend -n locality -- sh -c 'curl http://backend'
done
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_3 -c frontend -n locality -- sh -c 'curl  http://backend'
done
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a

This shows that locality-prioritized load balancing is working correctly.

Locality-Weighted Load Balancing

Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form.

https://istio.io/docs/reference/config/networking/destination-rule/#LocalityLoadBalancerSetting

We will apply the following:

  • Route 80% of us-west1-a traffic to us-west1-a and 20% to us-west1-b
  • Route 80% of us-west1-b traffic to us-west1-b and 20% to us-west1-c
  • Route 80% of us-west1-c traffic to us-west1-c and 20% to us-west1-a

Update the DestinationRule.

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
  namespace: locality
spec:
  host: backend
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 7
      interval: 30s
      baseEjectionTime: 30s
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: us-west1/us-west1-a/*
          to:
            "us-west1/us-west1-a/*": 80
            "us-west1/us-west1-b/*": 20
        - from: us-west1/us-west1-b/*
          to:
            "us-west1/us-west1-b/*": 80
            "us-west1/us-west1-c/*": 20
        - from: us-west1/us-west1-c/*
          to:
            "us-west1/us-west1-c/*": 80
            "us-west1/us-west1-a/*": 20
EOF

Test each frontend response.

Run a curl test from each pod. You should see that roughly 80% of requests go to one backend zone and 20% to another (with only 10 samples, the exact split will vary).

for i in {1..10}
do
  kubectl exec -it $POD_ZONE_1 -n locality -c frontend -- sh -c 'curl  http://backend'
done
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 2 us-west1-b
Backend Version 3 us-west1-c
Backend Version 2 us-west1-b
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_2 -n locality -c frontend -- sh -c 'curl  http://backend'
done
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 3 us-west1-c
Backend Version 3 us-west1-c
for i in {1..10}
do
  kubectl exec -it $POD_ZONE_3 -n locality -c frontend -- sh -c 'curl  http://backend'
done
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 2 us-west1-b
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a
Backend Version 1 us-west1-a

This shows that locality-weighted load balancing is working correctly.

Errors

The DestinationRule "backend" is invalid:
* : Invalid value: "": "spec.trafficPolicy.loadBalancer" must validate one and only one schema (oneOf). Found none valid
* spec.trafficPolicy.loadBalancer.simple: Required value

Fix

Upgrade Istio to at least 1.5.4.

Clean up

kubectl delete ns locality

By Tommy Elmesewdy

DevOps Engineer