CDN In a Box (containerized)

This is intended to simplify the process of creating a “CDN in a box”, lowering the barrier to entry for newcomers as well as providing a way to spin up a minimal CDN for full system testing.

Note: For a more in-depth discussion of the CDN in a Box system, please see the official documentation.

Setup

The containers run on Docker, and require Docker (tested with v17.05.0-ce) and Docker Compose (tested with v1.9.0) to build and run. On most *nix systems these can be installed via the distribution’s package manager under the names docker-ce and docker-compose, respectively (e.g. sudo yum install docker-ce).
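
To confirm that the installed versions meet the tested minimums, you can check them directly:

docker --version
docker-compose --version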

Each container (except the origin) requires an .rpm file to install the Traffic Control component for which it is responsible. You can download these *.rpm files from an archive (e.g. under “Releases”), use the provided Makefile to generate them (simply run make while in the cdn-in-a-box directory), or build them yourself using the pkg script at the root of the repository. If you choose the latter, copy the resulting *.rpm files into their respective component directories, renamed to strip any version/architecture information so that their filenames are exactly as follows (see the copy example after this list):

  • edge/traffic_ops_ort.rpm
  • mid/traffic_ops_ort.rpm
  • traffic_monitor/traffic_monitor.rpm
  • traffic_ops/traffic_ops.rpm
  • traffic_portal/traffic_portal.rpm

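If you built the packages yourself with the pkg script, the finished RPMs typically end up in the dist/ directory at the repository root; copying and renaming them might look roughly like the following (file names and the dist/ location are illustrative; adjust them to match the RPMs you actually built):

cp ../../dist/traffic_ops-*.rpm traffic_ops/traffic_ops.rpm
cp ../../dist/traffic_portal-*.rpm traffic_portal/traffic_portal.rpm
cp ../../dist/traffic_monitor-*.rpm traffic_monitor/traffic_monitor.rpm
cp ../../dist/traffic_ops_ort-*.rpm edge/traffic_ops_ort.rpm
cp ../../dist/traffic_ops_ort-*.rpm mid/traffic_ops_ort.rpm
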
Finally, run the test CDN using the command:

docker-compose up --build

Components

The following assumes that the default configuration provided in variables.env is used.

Once your CDN is running, you should see a cascade of output on your terminal. This is typically the output of the build, then the setup, and finally the logging infrastructure (assuming nothing goes wrong). You can now access the various components of the CDN on your local machine. For example, opening https://localhost should show you Traffic Portal, the default UI for interacting with the CDN.

Note: You will likely see a warning about an untrusted or invalid certificate for components that serve over HTTPS (Traffic Ops & Traffic Portal). If you are sure that you are looking at https://localhost:N for some integer N, these warnings may be safely ignored via e.g. the “Add Exception” button (possibly hidden behind “Advanced Options”).
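
As a quick sanity check from the command line, you can fetch the Traffic Portal landing page while skipping certificate validation (the -k flag); this assumes the default mapping of Traffic Portal to port 443 on localhost:

curl -skI https://localhost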

Host Ports

By default, docker-compose.yml does not expose ports to the host. This leaves the host free to run other services on those ports, and allows multiple CDN-in-a-Box instances to run on the same host without port conflicts.

To expose the ports of each service on the host, also pass the docker-compose.expose-ports.yml file. For example:

docker-compose -f docker-compose.yml -f docker-compose.expose-ports.yml up

Common Pitfalls

Everything's “waiting for Traffic Ops” forever and nothing seems to be working

If you scroll back through the output (or use docker-compose logs trafficops-perl | grep "User defined signal 2") and see a line that says something like /run.sh: line 79: 118 User defined signal 2 $TO_DIR/local/bin/hypnotoad script/cdn, then you’ve hit a mysterious known error. We don’t know what this is or why it happens, but your best bet is to send up a quick prayer and restart the stack.
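
One way to restart the stack is to bring everything down and back up; adding -v also discards the named volumes, forcing a clean re-initialization:

docker-compose down -v
docker-compose up --build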

Traffic Monitor is stuck waiting for a valid Snapshot

Oftentimes you must take a CDN Snapshot in order for a valid Snapshot to be generated. This can be done through Traffic Portal’s “CDNs” view: click on the “CDN-in-a-Box” CDN, press the camera button, and finally click the “Perform Snapshot” button.

I'm seeing a failure to open a socket and/or set a socket option

Try disabling SELinux or setting it to ‘permissive’. SELinux hates letting containers bind to certain ports. You can also try re-labeling the docker executable if you feel comfortable.
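
One way to switch SELinux into permissive mode until the next reboot (assuming an SELinux-enabled host) is:

sudo setenforce 0
getenforce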

Traffic Vault container exits with cp /usr/local/share/ca-certificates cp: missing destination

Bring all components down, remove the traffic_ops/ca directory, and delete the volumes with docker volume prune. This will force the regeneration of the certificates.
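
Run from the cdn-in-a-box directory, that recovery might look like the following (the rm path assumes the default layout of this directory):

docker-compose down -v
rm -rf traffic_ops/ca
docker volume prune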