This cluster addon is composed of a Replication Controller running the GCE L7 Loadbalancer Controller (GLBC) pod, and the `default-http-backend` Service it routes unmatched traffic to. It relies on the Ingress resource, which is only available in Kubernetes 1.1 and beyond.
Before you can receive traffic through the GCE L7 Loadbalancer Controller you need:

* At least 1 Kubernetes NodePort Service (this is the endpoint for your Ingress)
* Firewall rules that allow traffic to that Service, as indicated by `kubectl` at Service creation time
* A single instance of the L7 Loadbalancer Controller pod (on the default GCE setup it should already be running in the `kube-system` namespace)

GLBC is not aware of your GCE quota. As of this writing users get 3 GCE Backend Services by default. If you plan on creating Ingresses for multiple Kubernetes Services, remember that each one requires a backend service, so request quota ahead of time. Should you fail to do so, the controller will poll periodically and grab the first free backend service slot it finds. You can view your quota:
```console
$ gcloud compute project-info describe --project myproject
```
See GCE documentation for how to request more.
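Ingress backends must be Services of type `NodePort`, since GLBC directs the GCE loadbalancer at the node port allocated to the Service. A minimal sketch of such a Service (the name, labels, and ports are illustrative, not from this document):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-service        # illustrative name
spec:
  type: NodePort            # required: GLBC routes traffic to the allocated node port
  selector:
    app: echo               # illustrative pod label
  ports:
  - port: 80                # Service port referenced by the Ingress
    targetPort: 8080        # illustrative container port
```

`kubectl` prints the allocated node port (and a firewall reminder) when the Service is created.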
It takes ~1m to spin up a loadbalancer (this includes acquiring the public IP), and ~5-6m before the GCE API starts healthchecking backends. So, as far as latency goes, here's what to expect:
Assume one creates the following simple Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    # This will just loopback to the default backend of GLBC
    serviceName: default-http-backend
    servicePort: 80
```
time, t=0
```console
$ kubectl get ing
NAME           RULE      BACKEND                   ADDRESS
test-ingress   -         default-http-backend:80
$ kubectl describe ing
No events.
```
time, t=1m
```console
$ kubectl get ing
NAME           RULE      BACKEND                   ADDRESS
test-ingress   -         default-http-backend:80   130.211.5.27
$ kubectl describe ing
target-proxy:      k8s-tp-default-test-ingress
url-map:           k8s-um-default-test-ingress
backends:          {"k8s-be-32342":"UNKNOWN"}
forwarding-rule:   k8s-fw-default-test-ingress
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Reason    Message
  ─────────   ────────   ─────   ────                        ─────────────   ──────    ───────
  46s         46s        1       {loadbalancer-controller }                  Success   Created loadbalancer 130.211.5.27
```
time, t=5m
```console
$ kubectl describe ing
target-proxy:      k8s-tp-default-test-ingress
url-map:           k8s-um-default-test-ingress
backends:          {"k8s-be-32342":"HEALTHY"}
forwarding-rule:   k8s-fw-default-test-ingress
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Reason    Message
  ─────────   ────────   ─────   ────                        ─────────────   ──────    ───────
  46s         46s        1       {loadbalancer-controller }                  Success   Created loadbalancer 130.211.5.27
```
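Remember from the quota discussion that each distinct Kubernetes Service referenced by an Ingress consumes one GCE Backend Service. A fan-out Ingress like the following sketch (hostname, paths, and service names are hypothetical) would therefore consume two quota slots:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress            # hypothetical name
spec:
  rules:
  - host: foo.example.com         # hypothetical host
    http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service   # first GCE Backend Service
          servicePort: 80
      - path: /bar
        backend:
          serviceName: bar-service   # second GCE Backend Service
          servicePort: 80
```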
Since GLBC runs as a cluster addon, you cannot simply delete the RC. The easiest way to disable it is to do as follows:
IFF you want to tear down existing L7 loadbalancers, hit the /delete-all-and-quit endpoint on the pod:
```console
$ kubectl get pods --namespace=kube-system
NAME                     READY     STATUS    RESTARTS   AGE
l7-lb-controller-7bb21   1/1       Running   0          1h
$ kubectl exec l7-lb-controller-7bb21 -c l7-lb-controller --namespace=kube-system curl http://localhost:8081/delete-all-and-quit
$ kubectl logs l7-lb-controller-7bb21 -c l7-lb-controller --namespace=kube-system --follow
...
I1007 00:30:00.322528       1 main.go:160] Handled quit, awaiting pod deletion.
```
Nullify the RC (but don't delete it or the addon controller will “fix” it for you)
```console
$ kubectl scale rc l7-lb-controller --replicas=0 --namespace=kube-system
```
By default, backends are health checked on `/`; if your Service serves its health status on a different path, you can change it through the controller's `--health-check-path` argument.
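As a sketch, the flag is passed to the controller container in its Replication Controller manifest. Everything here except `--health-check-path` itself (image tag, the other flag, field values) is illustrative, not taken from this document:

```yaml
# Fragment of the l7-lb-controller pod template (illustrative).
containers:
- name: l7-lb-controller
  image: gcr.io/google_containers/glbc:0.6.0                      # hypothetical image tag
  args:
  - --default-backend-service=kube-system/default-http-backend    # hypothetical flag value
  - --health-check-path=/healthz    # health checks now GET /healthz instead of /
```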