TVM Docker

This directory contains TVM's docker infrastructure. We use docker to provide build environments for CI and prebuilt images for demos. Docker and nvidia-docker are required for GPU images.

Start Docker Bash Session

You can use the following helper script to start an interactive bash session with a given image_name.

/path/to/tvm/docker/bash.sh image_name

The script does the following:

  • Mounts the current directory to /workspace and sets it as the home directory
  • Switches to the same user that invoked bash.sh
  • Uses the host-side network

The helper script is useful for building demo sessions.
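The steps above correspond roughly to a plain docker run invocation. The sketch below is illustrative only and is not taken from bash.sh itself (the real script handles more cases, such as GPU runtimes and extra environment setup); tvmai/demo-cpu is used as an example image name.

```shell
#!/usr/bin/env bash
# Illustrative sketch of what bash.sh roughly does; the real script
# handles more cases (GPU runtime, extra env vars, non-interactive use).
IMAGE_NAME="tvmai/demo-cpu"            # example image

DOCKER_CMD=(docker run --rm -it
    --net=host                         # use the host-side network
    -v "$(pwd):/workspace"             # mount the current dir at /workspace
    -w /workspace
    -e "HOME=/workspace"               # set /workspace as home
    -u "$(id -u):$(id -g)"             # run as the invoking user
    "${IMAGE_NAME}" bash)

# Print the command instead of executing it (dry run).
echo "${DOCKER_CMD[@]}"
```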

Prebuilt Docker Images

You can use third-party prebuilt images for quick exploration with TVM preinstalled. For example, run the following command to launch the tvmai/demo-cpu image.

/path/to/tvm/docker/bash.sh tvmai/demo-cpu

Then, inside the docker container, type the following command to start a Jupyter notebook:

jupyter notebook

You can find some unofficial prebuilt images at https://hub.docker.com/r/tlcpack/. Note that these are convenience images and are not part of the ASF release.

Use Local Build Script

We also provide a script to build docker images locally. We use [build.sh](./build.sh) to build and (optionally) run commands in the container. To build and run docker images, run the following command at the root of the project.

./docker/build.sh image_name [command(optional)]

Here image_name corresponds to the image defined in Dockerfile.image_name.
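The naming convention can be sketched as follows. This is a simplified illustration inferred from the description here, not code taken from build.sh itself:

```shell
#!/usr/bin/env bash
# Sketch of the naming convention described above (simplified;
# the actual build.sh does more, e.g. argument parsing and caching).
image_name="ci_cpu"                            # argument passed to build.sh
dockerfile="docker/Dockerfile.${image_name}"   # Dockerfile it selects
tag="tvm.${image_name}"                        # resulting image tag

echo "docker build -t ${tag} -f ${dockerfile} docker/"
```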

You can also start an interactive session by typing

./docker/build.sh image_name -it bash

The built docker images are prefixed with tvm.; for example, the command

./docker/build.sh ci_cpu

produces the image tvm.ci_cpu, which appears in the list shown by the command docker images. To run an interactive terminal, execute:

./docker/bash.sh tvm.ci_cpu

or

./docker/bash.sh tvm.ci_cpu echo hello tvm world

The same applies to the other images (`./docker/Dockerfile.*`).

The command ./docker/build.sh image_name COMMANDS is almost equivalent to ./docker/bash.sh image_name COMMANDS, except that bash.sh does not attempt to build the image first.

The build command will map the tvm root to /workspace/ inside the container and run as the same user that invoked the docker command. Here are some common usage examples for CI tasks.

  • Lint the Python code

    ./docker/build.sh ci_lint make pylint
    
  • Build with CUDA support

    ./docker/build.sh ci_gpu make -j$(nproc)
    
  • Run the Python unit tests

    ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh
    
  • Build the documentation. The results will be available at docs/_build/html

    ./docker/build.sh ci_gpu make -C docs html
    
  • Build and run the Golang test suite

    ./docker/build.sh ci_cpu tests/scripts/task_golang.sh
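The CI tasks above can be chained in a small wrapper script. The sketch below uses a hypothetical helper named run_ci and only prints each command for safety; drop the echo to actually execute them:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper that chains the CI tasks listed above.
# For safety, it only prints each command; drop the echo to execute.
set -euo pipefail

run_ci() {
    local image="$1"; shift
    echo "./docker/build.sh ${image} $*"
}

run_ci ci_lint make pylint
run_ci ci_gpu tests/scripts/task_python_unittest.sh
run_ci ci_cpu tests/scripts/task_golang.sh
```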