tag | fd81c70ca45088d305cae41b1bab08656ef30538
---|---
tagger | spectrometerHBH <spectrometerh@gmail.com> Sun Jul 02 10:41:37 2023 -0700
object | 683dfb0c04d9f2296940e89c60c2277aca095ccd

tag v0.14.dev0
commit | 683dfb0c04d9f2296940e89c60c2277aca095ccd
---|---
author | Qiang Zhang <johnson9009@163.com> Sun Jul 02 12:06:38 2023 +0800
committer | GitHub <noreply@github.com> Sat Jul 01 21:06:38 2023 -0700
tree | 2655b28fea66b1213857f6589e536017d11142e8
parent | 977b4b2e05527aa8f1d48131374966e780ea8017
[RPC] Report RPC Session Timeout to Client Instead of "kShutdown" (#15187)

When using the RPC server on an NPU board, a compiled model will sometimes hang the NPU because of buggy operator libraries in the NPU toolchain, so we must use `session_timeout` to ensure that board resources held by hung jobs can be released.

Currently the RPC server handles a session timeout poorly: it simply kills the server-loop subprocess, and then the destructor of class `RPCEndpoint` sends the code `kShutdown` to the RPC client. The client, however, expects to receive `kReturn` or `kException`, so users see a confusing error message like the one reported in https://github.com/apache/tvm/issues/15151 and cannot tell what actually happened.

When tuning to search for a good schedule for operators, we only want to ignore the RPC session timeout error, which indicates that the generated schedule is an illegal one; other errors reported by the RPC server may help us find potential bugs in a toolchain built on top of TVM. The RPC session timeout error should therefore be split out into a standalone TVM error class.

This PR implements these requirements by sending the RPC session timeout error message to the RPC client as an RPC server exception before killing the server-loop subprocess.
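The pattern the PR describes, converting a server-loop timeout into a distinct, catchable error that is delivered before the subprocess is killed, can be sketched in plain Python. This is an illustrative sketch, not TVM's actual implementation: the class name `RPCSessionTimeoutError` and the helper `run_with_session_timeout` are hypothetical stand-ins.

```python
import subprocess
import sys


class RPCSessionTimeoutError(RuntimeError):
    """Hypothetical standalone error class for session timeouts."""


def run_with_session_timeout(code, timeout):
    """Run `code` in a subprocess, mimicking the RPC server loop.

    If the subprocess does not finish within `timeout` seconds,
    kill it and raise a dedicated timeout error, rather than
    letting a generic shutdown code leak to the caller.
    """
    proc = subprocess.Popen([sys.executable, "-c", code],
                            stdout=subprocess.PIPE)
    try:
        out, _ = proc.communicate(timeout=timeout)
        return out.decode()
    except subprocess.TimeoutExpired:
        # Report the timeout as a distinct exception *before*
        # killing the hung server-loop subprocess.
        proc.kill()
        proc.wait()
        raise RPCSessionTimeoutError(
            f"RPC session timed out after {timeout} seconds")
```

With a dedicated error class, a tuning loop can ignore only session timeouts (an illegal schedule) with `except RPCSessionTimeoutError:` while letting every other server error propagate.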
Documentation | Contributors | Community | Release Notes
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
TVM is licensed under the Apache-2.0 license.
Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.
We learned a lot from the following projects when building TVM.