commit | f8f0b8a54e2b320dc91a244a3ce0290d64fd501a | |
---|---|---|
author | Naveen Swamy <mn.naveen@gmail.com> | Thu Jan 25 23:51:59 2018 -0800 |
committer | Yizhi Liu <liuyizhi@apache.org> | Thu Jan 25 23:51:59 2018 -0800 |
tree | 906542d67850386278fb45fc1c752efced6a7ffb | |
parent | 77c50791d4ee87544b04d8517941b437d8231f2f | |
Remove finalizers from Scala API (#8887) (#9568)

* [scala] Remove finalizers for leakable resources

  Finalizers always run in a separate thread that is not controlled by the user. Since MXNet cannot be safely accessed from multiple threads, this causes memory corruption; it is not safe to use finalizers in this way.

  This change also adds leaked-object tracing, so that leaks can be tracked down. It is easy to leak objects, and despite warnings in the documentation leaks are common (even within the Scala API itself). The leak tracing is activated by a system property, as its runtime cost may be significant. With the system property unset, the *first* leak of a type is reported (without a trace) as a prompt to the developer to investigate.

  Co-authored-by: Andre Tamm <tammtamm@amazon.com>

* [scala] Fix various resource leaks

  These leaks were diagnosed with the leak detection added in the previous commit. This is not an exhaustive clean-up, but it allows predicting with a model from Scala at scale (hundreds of millions of comparisons) without a reported leak, and removes the most common errors when training with the Module code.

  Co-authored-by: Andre Tamm <tammtamm@amazon.com>
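To make the leak-tracing scheme concrete, here is a minimal sketch of the idea the commit message describes. This is not the actual MXNet implementation: the class names, the system property name, and the reporting format are all invented for illustration. Note the finalizer here only reports a leak and never frees native memory, so it cannot corrupt MXNet state from the finalizer thread:

```scala
import java.util.concurrent.ConcurrentHashMap

object LeakTracing {
  // Hypothetical property name; the real property used by MXNet may differ.
  val traceLeaks: Boolean = java.lang.Boolean.getBoolean("mxnet.traceLeakedObjects")

  private val reportedTypes = ConcurrentHashMap.newKeySet[Class[_]]()

  def reportLeak(cls: Class[_], trace: Option[Array[StackTraceElement]]): Unit =
    trace match {
      case Some(t) =>
        System.err.println(s"Leaked ${cls.getName}, created at:\n  ${t.mkString("\n  ")}")
      case None if reportedTypes.add(cls) =>
        // With tracing off, report only the first leak of each type.
        System.err.println(s"Leaked ${cls.getName}; set -Dmxnet.traceLeakedObjects=true for traces")
      case None => // this type was already reported once; stay quiet
    }
}

// Hypothetical base class for objects backed by native MXNet memory.
abstract class NativeResource {
  // Capture the allocation site only when tracing is on: stack traces are costly.
  private val creationTrace: Option[Array[StackTraceElement]] =
    if (LeakTracing.traceLeaks) Some(Thread.currentThread().getStackTrace) else None

  @volatile private var disposed = false

  /** Frees the native handle; must be called explicitly by the user. */
  def dispose(): Unit = disposed = true

  // Detection only: reports the leak but never touches native memory.
  override protected def finalize(): Unit =
    if (!disposed) LeakTracing.reportLeak(getClass, creationTrace)
}
```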
Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
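As a small illustration of the imperative side in the Scala API the commit above touches, here is a hedged sketch. It assumes the factory methods, arithmetic operators, and explicit `dispose()` of the 1.x Scala `NDArray` API; the object name is invented:

```scala
import org.apache.mxnet.{NDArray, Shape}

object ImperativeExample extends App {
  // Imperative style: each operation executes eagerly from the caller's
  // perspective, while the dependency engine may schedule it asynchronously.
  val a = NDArray.ones(Shape(2, 3))
  val b = a + 1f // allocates a new native NDArray

  println(b.toArray.mkString(", "))

  // Results are native resources: with finalizers removed, they must be
  // disposed explicitly once they are no longer needed.
  b.dispose()
  a.dispose()
}
```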
MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.
© Contributors, 2015-2017. Licensed under the Apache-2.0 license.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.
MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. MXNet combines aspects of each to achieve flexibility, speed, and memory efficiency.