Tutorial

Requirements

  • Docker
  • Kubernetes 1.18.10 (through Minikube 1.14.2)

To install Docker, visit its official page and install the correct version for your system.

The walkthrough uses Minikube to guide you through the setup process. Visit the official Minikube page to install Minikube.
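Once Minikube is installed, you can start a local cluster pinned to the Kubernetes version listed above. The version flag below is a suggestion based on the requirements, not mandated by the project:

  • $ minikube start --kubernetes-version=v1.18.10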

Download project

You can use git clone to download the repository to your computer.
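For example, assuming the repository lives at the canonical Apache GitHub location:

  • $ git clone https://github.com/apache/trafficserver-ingress-controller.git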

Example Walkthrough

Setting Up Proxy

Once you have cloned the project repo and started Docker and Minikube, run the following in the terminal:

  1. $ eval $(minikube docker-env)
  2. $ cd trafficserver-ingress-controller
  3. $ git submodule update --init
  4. $ docker build -t ats-ingress .
  5. $ docker build -t ats-ingress-exporter k8s/images/trafficserver_exporter/
  6. $ docker build -t node-app-1 k8s/images/node-app-1/
  7. $ docker build -t node-app-2 k8s/images/node-app-2/
  8. $ docker pull fluent/fluentd:v1.6-debian-1
  • At this point, we have created the necessary images for our example:
    • Step 4 builds an image to create a Docker container that will contain the Apache Traffic Server (ATS) itself, the Kubernetes ingress controller, along with other software required for the controller to do its job.
    • Step 5 builds an image for the trafficserver exporter, which exports the ATS statistics for Prometheus to read. It uses the Stats Over HTTP plugin.
    • Steps 6 and 7 build 2 images that will serve as backends to the Kubernetes services we will shortly create.
    • Step 8 pulls the fluentd image, which is used for log collection.
  9. $ kubectl create namespace trafficserver-test
  10. $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=atssvc/O=atssvc"
  11. $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n trafficserver-test --dry-run=client -o yaml | kubectl apply -f -
  12. $ kubectl apply -f k8s/configmaps/fluentd-configmap.yaml
  13. $ kubectl apply -f k8s/traffic-server/
  • Now we have an ATS running inside the cluster.
    • Step 9 creates a namespace for the ATS pod.
    • Steps 10 and 11 create a self-signed SSL certificate and store it in a Secret inside the namespace above.
    • Step 12 provides the ConfigMap holding the configuration options for fluentd.
    • Step 13 deploys a single ATS pod to said namespace. The ATS pod is also where the ingress controller lives.

Setting Up Backend Applications

The following steps can be executed in any order:

  • $ kubectl apply -f k8s/apps/

    • creates namespaces trafficserver-test-2 and trafficserver-test-3 if they do not already exist
    • creates Kubernetes services and deployments for appsvc1 and appsvc2
    • deploys 2 pods each for appsvc1 and appsvc2 in trafficserver-test-2, for a total of 4 pods in that namespace
    • similarly, deploys 2 pods each for appsvc1 and appsvc2 in trafficserver-test-3, for a total of 4 pods in this namespace. We now have 8 pods in total for the 2 services we have created and deployed in the 2 namespaces.
  • $ kubectl apply -f k8s/ingresses/

    • creates namespaces trafficserver-test-2 and trafficserver-test-3 if they do not already exist
    • defines an ingress resource in both trafficserver-test-2 and trafficserver-test-3
    • the ingress resource in trafficserver-test-2 defines the domain name test.media.com with /app1 and /app2 as its paths
    • both ingress resources define the domain name test.edge.com; however, test.edge.com/app1 is only defined in trafficserver-test-2 and test.edge.com/app2 is only defined in trafficserver-test-3
    • additionally, an ingress resource defines HTTPS access for test.edge.com/app2 in namespace trafficserver-test-3 (a sketch of such a resource follows this list)
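As a rough sketch of the shape of these resources on Kubernetes 1.18 (networking.k8s.io/v1beta1), the trafficserver-test-2 ingress might look like the following; the resource name and backend port are assumptions for illustration, and the authoritative definitions are in k8s/ingresses/:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: ats-ingress              # hypothetical name
    namespace: trafficserver-test-2
  spec:
    rules:
    - host: test.media.com
      http:
        paths:
        - path: /app1
          backend:
            serviceName: appsvc1   # service created by k8s/apps/
            servicePort: 8080      # assumed port
        - path: /app2
          backend:
            serviceName: appsvc2
            servicePort: 8080      # assumed port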

Checking Results

ATS proxying should now be working. To see the proxy in action, we can use curl:

  1. $ curl -vH "HOST:test.media.com" "$(minikube ip):30080/app1"
  2. $ curl -vH "HOST:test.media.com" "$(minikube ip):30080/app2"
  3. $ curl -vH "HOST:test.edge.com" "$(minikube ip):30080/app1"
  4. $ curl -vH "HOST:test.edge.com" "$(minikube ip):30080/app2"
  5. $ curl -vH "HOST:test.edge.com" -k "https://$(minikube ip):30443/app2"

If Minikube is using the Docker driver, you may run into problems because localhost (i.e. 127.0.0.1) is used as the cluster IP. In that case you will need to forward the traffic designated for each port to the corresponding port of the ATS pod inside the cluster before the above curl commands will work. Each command below needs to be run in a separate terminal.

  • $ kubectl port-forward <pod name> 30443:8443 -n trafficserver-test
  • $ kubectl port-forward <pod name> 30080:8080 -n trafficserver-test
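The <pod name> placeholder above refers to the ATS pod; its name can be looked up with:

  • $ kubectl get pods -n trafficserver-test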

ConfigMap

Below is an example of configuring Apache Traffic Server reloadable configurations using a Kubernetes ConfigMap resource:

  • $ kubectl apply -f k8s/configmaps/ats-configmap.yaml
    • creates a ConfigMap resource in trafficserver-test with the annotation "ats-configmap":"true" if it does not already exist
    • configures 3 reloadable ATS configurations (sketched below):
      1. proxy.config.output.logfile.rolling_enabled: "1"
      2. proxy.config.output.logfile.rolling_interval_sec: "3000"
      3. proxy.config.restart.active_client_threshold: "0"
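A hedged sketch of what such a ConfigMap might look like; the resource name and the one-key-per-record data layout are assumptions, and the authoritative version is k8s/configmaps/ats-configmap.yaml:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ats-configmap            # hypothetical name
    namespace: trafficserver-test
    annotations:
      ats-configmap: "true"
  data:
    proxy.config.output.logfile.rolling_enabled: "1"
    proxy.config.output.logfile.rolling_interval_sec: "3000"
    proxy.config.restart.active_client_threshold: "0"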

Namespaces for Ingresses

You can specify the list of namespaces in which to look for ingress objects by providing INGRESS_NS. The default is all, which tells the controller to look for ingress objects in all namespaces. Alternatively, you can provide a comma-separated list of namespaces for the controller to watch for ingresses. Similarly, you can specify a comma-separated list of namespaces to ignore while the controller is looking for ingresses by providing INGRESS_IGNORE_NS.
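As a sketch, these variables might be set on the controller container in the deployment like so; the namespace values are examples only:

  env:
    - name: INGRESS_NS
      value: "trafficserver-test-2,trafficserver-test-3"
    - name: INGRESS_IGNORE_NS
      value: "kube-system"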

Snippet

You can attach an ATS Lua script to an ingress object, and ATS will execute it for requests matching the routing rules defined in that ingress object. See the example in the annotation section of the YAML file here; a rough sketch follows.
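As a hedged illustration only (the annotation key and script content below are assumptions; consult the linked YAML for the real ones), the script is attached via an annotation on the ingress object:

  metadata:
    annotations:
      # annotation key is an assumption; see the linked example for the exact name
      ats.ingress.kubernetes.io/server-snippet: |
        ts.debug('executed for requests matching this ingress')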

Ingress Class

You can provide an environment variable called INGRESS_CLASS in the deployment to specify the ingress class. See an example commented out here. Only ingress objects whose kubernetes.io/ingress.class annotation equals the environment variable's value will be used by ATS for routing.
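A sketch of the pairing; the class name ats is an example value, not a project default:

  # in the controller deployment
  env:
    - name: INGRESS_CLASS
      value: ats

  # on each ingress object that ATS should route for
  metadata:
    annotations:
      kubernetes.io/ingress.class: ats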

Customizing Logging and TLS

You can specify a different logging.yaml and ssl_server_name.yaml by providing the environment variables LOG_CONFIG_FNAME and SSL_SERVER_FNAME respectively. See an example commented out here. Their new contents can be provided through a ConfigMap and loaded into a volume mounted for the ATS container (example here). Similarly, the certificates needed for the connection between ATS and the origin can be provided through a Secret that is loaded into a volume mounted for the ATS container as well (example here). To refresh these certificates, we may need to override the entrypoint with our own command and add an extra script that watches for changes in those Secrets in order to reload ATS (example here). A sketch of the ConfigMap-volume pattern follows.
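The general ConfigMap-volume pattern might look roughly like this; every name and path below is an assumption for illustration, not the project's actual layout:

  containers:
    - name: ats
      env:
        - name: LOG_CONFIG_FNAME
          value: logging.yaml                                  # assumed value
      volumeMounts:
        - name: log-config                                     # hypothetical volume name
          mountPath: /opt/ats/etc/trafficserver/logging.yaml   # assumed path
          subPath: logging.yaml
  volumes:
    - name: log-config
      configMap:
        name: ats-log-config                                   # hypothetical ConfigMap name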

Customizing Plugins

You can specify extra plugins for plugin.config by providing the environment variable EXTRA_PLUGIN_FNAME. Its contents can be provided through a ConfigMap and loaded into a volume mounted for the ATS container (example here), following the same pattern sketched in the previous section.
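Such a ConfigMap might look like the following sketch; the names are assumptions, and xdebug.so is just one example of a stock ATS plugin:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ats-extra-plugins      # hypothetical name
    namespace: trafficserver-test
  data:
    extra_plugin.config: |
      # one plugin per line, as in plugin.config
      xdebug.so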

Logging and Monitoring

Fluentd

The above tutorial is already integrated with Fluentd. The configuration file used for it can be found here.

As can be seen from the default configuration file, Fluentd reads the Apache Traffic Server access logs located at /opt/ats/var/log/trafficserver/squid.log and outputs them to stdout. The output plugin for Fluentd can be changed to send the logs to any desired location supported by Fluentd, including Elasticsearch, Kafka, MongoDB, etc. You can read more about output plugins here.

Prometheus and Grafana

Use the following steps to install Prometheus and Grafana and use them to monitor the Apache Traffic Server statistics.

  1. $ kubectl apply -f k8s/prometheus/ats-stats.yaml
  • Creates a new service which connects to the ATS pod on port 9122. This service will be used by Prometheus to read the Apache Traffic Server stats.
  2. $ kubectl apply -f k8s/configmaps/prometheus-configmap.yaml
  • Creates a new ConfigMap which holds the configuration file for Prometheus. You can modify this configuration file to suit your needs (a sketch of it follows this list). More about that can be read here.
  3. $ kubectl apply -f k8s/prometheus/prometheus-deployment.yaml
  • Creates a new deployment consisting of Prometheus and Grafana. Also creates two new services to access Prometheus and Grafana.
  4. Open x.x.x.x:30090 in your web browser to access Prometheus, where x.x.x.x is the IP returned by the command: $ minikube ip
  5. Open x.x.x.x:30030 in your web browser to access the Grafana dashboard, where x.x.x.x is the IP returned by the command: $ minikube ip
  6. The default credentials for logging into Grafana are admin:admin
  7. Click on ‘Add your first data source’ and select Prometheus under the ‘Time series databases’ category
  8. Set an appropriate name for the data source and enter localhost:9090 as the URL
  9. Click on ‘Save & Test’. If everything has been installed correctly you should get a notification saying ‘Data source is working’
  10. Click on the ‘+’ icon in the left-hand column and select ‘Dashboard’
  11. Click on ‘+ Add new panel’
  12. Enter a PromQL query. For example, if you want to add a graph showing the total number of responses over time, enter trafficserver_responses_total and press Shift + Enter.
  13. Click on Apply to add the graph to your dashboard. You can similarly add more graphs to your dashboard to suit your needs. To learn more about Grafana, click here.
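For reference, the scrape configuration inside the ConfigMap from step 2 might look roughly like the following sketch; the job name and service name are assumptions, with only port 9122 taken from step 1:

  scrape_configs:
    - job_name: ats                       # hypothetical job name
      static_configs:
        - targets: ['ats-stats:9122']     # assumed service name; port from ats-stats.yaml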

Helm Chart

A Helm chart is provided. You can delete the trafficserver-test and monitoring-layer namespaces created above and continue the tutorial by following the instructions here. The curl commands here will continue to work.