docker/README.md

TVM Docker

This directory contains TVM's docker infrastructure. We use docker to provide build environments for CI and images for demos. Docker and nvidia-docker are required for GPU images.

Start Docker Bash Session

You can use the following helper script to start an interactive bash session with a given image_name.

/path/to/tvm/docker/bash.sh image_name

The script does the following:

  • Mounts the current directory to the same location in the docker container and sets it as the home directory
  • Switches to the same user that invoked bash.sh
  • Uses the host-side network

This helper script is useful for building demo sessions.
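The three behaviors listed above can be sketched as a docker run invocation. The function below is a simplified illustration, not the real bash.sh (which handles more cases, such as GPU runtimes and terminal detection); it echoes the assembled command rather than executing it, so it can be inspected without Docker installed.

```shell
# Sketch only: assembles the docker run flags that mirror what bash.sh does.
make_bash_cmd() {
    image="$1"
    workspace="$(pwd)"
    cmd="docker run --rm -it"
    cmd="${cmd} -v ${workspace}:${workspace}"   # mount cwd at the same path
    cmd="${cmd} -w ${workspace}"                # start in that directory
    cmd="${cmd} -e HOME=${workspace}"           # treat it as home
    cmd="${cmd} -u $(id -u):$(id -g)"           # same user as the caller
    cmd="${cmd} --net=host"                     # host-side networking
    cmd="${cmd} ${image} bash"
    echo "${cmd}"
}

make_bash_cmd tvm.ci_cpu
```

Removing the echo indirection (running `${cmd}` directly) would launch the session; the real script should be preferred in practice.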

Prebuilt Docker Images

You can use third-party prebuilt images for quick exploration with TVM pre-installed. For example, you can run the following command to launch the tvmai/demo-cpu image.

/path/to/tvm/docker/bash.sh tvmai/demo-cpu

Then, inside the docker container, you can type the following command to start a Jupyter notebook:

jupyter notebook

You can find some unofficial prebuilt images at https://hub.docker.com/r/tlcpack/. Note that these are convenience images and are not part of the ASF release.

Use Local Build Script

We also provide scripts to build docker images locally. We use [build.sh](./build.sh) to build and (optionally) run commands in the container. To build and run docker images, run the following command at the root of the project.

./docker/build.sh image_name [command(optional)]

Here image_name corresponds to the image defined in Dockerfile.image_name.
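Given that convention, the valid image_name values can be derived by stripping the Dockerfile. prefix from the files in the docker/ directory. The helper below is a small sketch (not part of the repository), parameterized on the directory so it works from anywhere:

```shell
# Sketch: list the image_name values build.sh accepts, by stripping the
# "Dockerfile." prefix from each Dockerfile in the given directory.
list_image_names() {
    dir="$1"
    for f in "${dir}"/Dockerfile.*; do
        [ -e "${f}" ] || continue    # no matches: skip the literal pattern
        echo "${f##*Dockerfile.}"
    done
}

# Example: from the TVM root, list the images defined under docker/.
list_image_names docker
```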

You can also start an interactive session by typing

./docker/build.sh image_name -it bash

The built docker images are prefixed with tvm.. For example, the command

./docker/build.sh ci_cpu

produces the image tvm.ci_cpu, which is displayed in the list of images shown by the command docker images. To run an interactive terminal, execute:

./docker/bash.sh tvm.ci_cpu

or

./docker/bash.sh tvm.ci_cpu echo hello tvm world

The same applies to the other images (`./docker/Dockerfile.*`).

The command ./docker/build.sh image_name COMMANDS is almost equivalent to ./docker/bash.sh image_name COMMANDS, except that bash.sh does not attempt to build the image first.

The build command maps the TVM root to the corresponding location inside the container and runs as the same user that invoked the docker command. Here are some common examples of CI tasks.

  • lint the Python code

    ./docker/build.sh ci_lint make pylint
    
  • build with CUDA support

    ./docker/build.sh ci_gpu make -j$(nproc)
    
  • run the Python unit tests

    ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh
    
  • build the documentation; the results will be available at docs/_build/html

    ./docker/build.sh ci_gpu make -C docs html
    
  • build the Golang test suite

    ./docker/build.sh ci_cpu tests/scripts/task_golang.sh
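These tasks can also be chained into a single local pre-submit run. The run_ci wrapper below is a hypothetical sketch, not part of the repository; it takes the container-entry command as a parameter and stops at the first failing step:

```shell
# Hypothetical sketch: chain the CI tasks above, stopping on first failure.
# "runner" is the command used to enter the container, e.g. ./docker/build.sh.
run_ci() {
    runner="$1"
    "${runner}" ci_lint make pylint &&
    "${runner}" ci_gpu make -j"$(nproc)" &&
    "${runner}" ci_gpu tests/scripts/task_python_unittest.sh
}

# Usage (from the TVM root): run_ci ./docker/build.sh
```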