Apache Traffic Server Ingress Controller for Kubernetes

Introduction

Apache Traffic Server (ATS) is a high-performance, open-source caching proxy server that is scalable and configurable. This project uses ATS as a Kubernetes (K8s) ingress controller.

Abstract

At a high level, the ingress controller talks to the K8s API and sets up watchers on the resources that are of interest to ATS. The controller then controls ATS by either (1) relaying information from the K8s API to ATS, or (2) configuring ATS directly.

How

Versions of Software Used

  • Alpine 3.12
  • Apache Traffic Server 8.1.0
  • LuaJIT 2.0.4
  • Lua 5.1.4
  • Go 1.12.8
  • Other Packages
    • luasocket 3.0rc1
    • redis-lua 2.0.4

How to use

Requirements

  • Docker
  • Kubernetes 1.18 (Minikube 1.11)

To install Docker, visit its official page and install the correct version for your system.

The walkthrough uses Minikube to guide you through the setup process. Visit the official Minikube page to install Minikube.

Download project

If you are cloning this project for development, visit Setting up Go-Lang for a detailed guide on developing projects in Go.

For other purposes, you can use git clone or download the repository directly to your computer.

Example Walkthrough

Once you have cloned the project repo and started Docker and Minikube, run the following in a terminal:

  1. $ eval $(minikube docker-env)
  2. $ cd trafficserver-ingress-controller
  3. $ git submodule update --init
  4. $ docker build -t ats_alpine .
  5. $ docker build -t tsexporter k8s/backend/trafficserver_exporter/
  6. $ docker build -t node-app-1 k8s/backend/node-app-1/
  7. $ docker build -t node-app-2 k8s/backend/node-app-2/
  8. $ docker pull fluent/fluentd:v1.6-debian-1
  • At this point, we have created the necessary images for our example. Let's talk about what each step does:
    • Step 4 builds an image used to create the Docker container that will run Apache Traffic Server (ATS) itself and the Kubernetes ingress controller, along with the other software the controller needs to do its job.
    • Step 5 builds an image for the trafficserver exporter, which exports the ATS statistics over HTTP for Prometheus to read.
    • Steps 6 and 7 build 2 images that will serve as backends for the Kubernetes services we will create shortly.
  9. $ kubectl create namespace trafficserver-test
    • Creates a namespace for the ATS pod
  10. $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=atssvc/O=atssvc"
    • Creates a self-signed certificate
  11. $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n trafficserver-test --dry-run=client -o yaml | kubectl apply -f -
    • Creates a secret in the namespace just created
  12. $ kubectl apply -f k8s/configmaps/fluentd-configmap.yaml
    • Creates the config map for fluentd
  13. $ kubectl apply -f k8s/traffic-server/
    • Defines the trafficserver-test namespace and deploys a single ATS pod to said namespace. The ATS pod is also where the ingress controller lives. A sketch of the kind of service these manifests expose follows this list.
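
For reference, the manifests under k8s/traffic-server/ expose the ATS pod through a NodePort service. Below is a minimal sketch of such a service: the resource name, labels, and target ports are assumptions, while the node ports 30000 (HTTP) and 30043 (HTTPS) match the curl examples later in this walkthrough.

  # Sketch of a NodePort service exposing the ATS pod; the name, labels,
  # and target ports are assumptions. See k8s/traffic-server/ for the
  # actual manifests.
  apiVersion: v1
  kind: Service
  metadata:
    name: trafficserver          # hypothetical name
    namespace: trafficserver-test
  spec:
    type: NodePort
    selector:
      app: trafficserver         # hypothetical label
    ports:
      - name: http
        port: 80
        targetPort: 8080         # assumed ATS HTTP port
        nodePort: 30000
      - name: https
        port: 443
        targetPort: 8443         # assumed ATS TLS port
        nodePort: 30043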

Proxy

The following steps can be executed in any order, so list numbers are not used.

  • $ kubectl apply -f k8s/apps/

    • creates namespaces trafficserver-test-2 and trafficserver-test-3 if they do not already exist
    • creates kubernetes services and deployments for appsvc1 and appsvc2
    • deploys 2 pods each for appsvc1 and appsvc2 in trafficserver-test-2, for a total of 4 pods in that namespace
    • similarly, deploys 2 pods each for appsvc1 and appsvc2 in trafficserver-test-3, for a total of 4 pods in this namespace. We now have 8 pods in total for the 2 services created and deployed across the 2 namespaces.
  • $ kubectl apply -f k8s/ingresses/

    • creates namespaces trafficserver-test-2 and trafficserver-test-3 if they do not already exist
    • defines an ingress resource in both trafficserver-test-2 and trafficserver-test-3
    • the ingress resource in trafficserver-test-2 defines the domain name test.media.com with /app1 and /app2 as its paths
    • both ingress resources define the domain name test.edge.com; however, test.edge.com/app1 is only defined in trafficserver-test-2 and test.edge.com/app2 is only defined in trafficserver-test-3
    • additionally, an ingress resource defines HTTPS access for test.edge.com/app2 in namespace trafficserver-test-3 (a sketch of one such ingress resource follows this list)
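
To make these routing rules concrete, below is a sketch of what the test.media.com ingress resource in trafficserver-test-2 could look like. The resource name and service port are assumptions, and the apiVersion reflects the Kubernetes 1.18 era used here; the authoritative resources live in k8s/ingresses/.

  # Sketch of the test.media.com ingress described above; the name and
  # servicePort are assumptions. See k8s/ingresses/ for the real files.
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: ats-ingress            # hypothetical name
    namespace: trafficserver-test-2
  spec:
    rules:
      - host: test.media.com
        http:
          paths:
            - path: /app1
              backend:
                serviceName: appsvc1
                servicePort: 8080   # assumed service port
            - path: /app2
              backend:
                serviceName: appsvc2
                servicePort: 8080   # assumed service port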

When both steps above have been executed at least once, ATS proxying will be working. To see the proxy in action, we can use curl:

  1. $ curl -vH "HOST:test.media.com" "$(minikube ip):30000/app1"
  2. $ curl -vH "HOST:test.media.com" "$(minikube ip):30000/app2"
  3. $ curl -vH "HOST:test.edge.com" "$(minikube ip):30000/app1"
  4. $ curl -vH "HOST:test.edge.com" "$(minikube ip):30000/app2"
  5. $ curl -vH "HOST:test.edge.com" -k "https://$(minikube ip):30043/app2"

ConfigMap

Below is an example of configuring reloadable Apache Traffic Server settings using a Kubernetes ConfigMap resource:

  • $ kubectl apply -f k8s/configmaps/ats-configmap.yaml
    • creates a ConfigMap resource in trafficserver-test with the annotation "ats-configmap":"true" if it does not already exist
    • configures 3 reloadable ATS settings (a sketch of such a ConfigMap follows this list):
      1. proxy.config.output.logfile.rolling_enabled: "1"
      2. proxy.config.output.logfile.rolling_interval_sec: "3000"
      3. proxy.config.restart.active_client_threshold: "0"
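
A sketch of such a ConfigMap is shown below. The resource name is an assumption; the annotation and the three settings mirror the description above, and k8s/configmaps/ats-configmap.yaml is the authoritative example.

  # Sketch of a ConfigMap carrying reloadable ATS settings; the name is
  # hypothetical. See k8s/configmaps/ats-configmap.yaml for the real file.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ats-configmap          # hypothetical name
    namespace: trafficserver-test
    annotations:
      ats-configmap: "true"
  data:
    proxy.config.output.logfile.rolling_enabled: "1"
    proxy.config.output.logfile.rolling_interval_sec: "3000"
    proxy.config.restart.active_client_threshold: "0"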

Snippet

You can attach an ATS Lua script to an ingress object, and ATS will execute it for requests matching the routing rules defined in that ingress object. See an example in the annotation section of the yaml file here.
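
As an illustration, attaching a snippet could look roughly like the sketch below. Both the annotation key and the Lua body here are assumptions made for the example; the linked yaml file is the authoritative reference.

  # Sketch of an ingress carrying an ATS Lua snippet in an annotation.
  # The annotation key and the Lua body are illustrative assumptions.
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: ats-ingress-snippet    # hypothetical name
    namespace: trafficserver-test-2
    annotations:
      ats.ingress.kubernetes.io/server-snippet: |    # assumed key
        function do_global_send_response()
          ts.client_response.header['X-Ats-Snippet'] = 'demo'
        end
  spec:
    rules:
      - host: test.media.com
        http:
          paths:
            - path: /app1
              backend:
                serviceName: appsvc1
                servicePort: 8080   # assumed service port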

Ingress Class

You can provide an environment variable called INGRESS_CLASS in the deployment to specify the ingress class. Only ingress objects whose kubernetes.io/ingress.class annotation value equals the value of this environment variable will be used by ATS for routing.
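
For example, a deployment could set the variable and a matching ingress could carry the annotation as in the sketch below; all names, labels, and the class value "ats" are assumptions.

  # Sketch: the controller deployment sets INGRESS_CLASS, and only
  # ingresses annotated with the matching value are routed by ATS.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: trafficserver          # hypothetical name
    namespace: trafficserver-test
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: trafficserver
    template:
      metadata:
        labels:
          app: trafficserver
      spec:
        containers:
          - name: trafficserver
            image: ats_alpine    # image built earlier in the walkthrough
            env:
              - name: INGRESS_CLASS
                value: ats       # hypothetical class name
  ---
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: ats-ingress            # hypothetical name
    namespace: trafficserver-test-2
    annotations:
      kubernetes.io/ingress.class: ats   # must match INGRESS_CLASS
  spec:
    rules:
      - host: test.media.com
        http:
          paths:
            - path: /app1
              backend:
                serviceName: appsvc1
                servicePort: 8080   # assumed service port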

Logging and Monitoring

Fluentd

This project ships with Fluentd already integrated with Apache Traffic Server. The configuration file used for this can be found here.

As can be seen from the default configuration file, Fluentd reads the Apache Traffic Server access logs located at /usr/local/var/log/trafficserver/squid.log and outputs them to stdout. The output plugin for Fluentd can be changed to send the logs to any destination supported by Fluentd, including Elasticsearch, Kafka, and MongoDB. You can read more about output plugins here.
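
For illustration, a minimal configuration along these lines, wrapped in a ConfigMap the way k8s/configmaps/fluentd-configmap.yaml wraps it, might look like the sketch below; the resource name, tag, and parser settings are assumptions.

  # Sketch of a Fluentd config that tails the ATS access log and writes
  # events to stdout; metadata and parse settings are assumptions.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config         # hypothetical name
    namespace: trafficserver-test
  data:
    fluent.conf: |
      <source>
        @type tail
        path /usr/local/var/log/trafficserver/squid.log
        pos_file /tmp/squid.log.pos
        tag ats.access
        <parse>
          @type none
        </parse>
      </source>
      <match ats.**>
        @type stdout
      </match>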

Prometheus and Grafana

Use the following steps to install Prometheus and Grafana and monitor the Apache Traffic Server statistics.

  1. $ kubectl apply -f k8s/prometheus/ats-stats.yaml
  • Creates a new service that connects to the ATS pod on port 9122. This service will be used by Prometheus to read the Apache Traffic Server stats.
  2. $ kubectl apply -f k8s/configmaps/prometheus-configmap.yaml
  • Creates a new configmap that holds the configuration file for Prometheus. You can modify this configuration file to suit your needs; more about that can be read here. A sketch of such a configuration is shown after this list.
  3. $ kubectl apply -f k8s/prometheus/prometheus-deployment.yaml
  • Creates a new deployment consisting of Prometheus and Grafana, along with two new services to access Prometheus and Grafana.
  4. Open x.x.x.x:30090 in your web browser to access Prometheus, where x.x.x.x is the IP returned by the command $ minikube ip
  5. Open x.x.x.x:30030 in your web browser to access the Grafana dashboard, where x.x.x.x is the IP returned by the command $ minikube ip
  6. The default credentials for logging into Grafana are admin:admin
  7. Click on 'Add your first data source' and select Prometheus under the 'Time series databases' category
  8. Set an appropriate name for the data source and enter localhost:9090 as the URL
  9. Click on 'Save & Test'. If everything has been installed correctly, you should get a notification saying 'Data source is working'
  10. Click on the '+' icon in the left-hand column and select 'Dashboard'
  11. Click on '+ Add new panel'
  12. Enter a PromQL query. For example, if you want to add a graph showing the total number of responses over time, enter trafficserver_responses_total and press Shift + Enter.
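
As referenced in step 2, below is a sketch of the kind of scrape configuration the Prometheus configmap could hold. The resource name, service DNS name, and scrape interval are assumptions; only the port 9122 comes from the steps above.

  # Sketch of a Prometheus scrape config for the ATS stats service on
  # port 9122; the target's service name and interval are assumptions.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: prometheus-config      # hypothetical name
  data:
    prometheus.yml: |
      global:
        scrape_interval: 15s
      scrape_configs:
        - job_name: ats
          static_configs:
            - targets:
                - ats-stats.trafficserver-test.svc.cluster.local:9122   # assumed service name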