[BUGFIX] Fix MKLDNN BatchNorm with even number of channels (#19150) (#19299)

* Fix MKLDNN BatchNorm with even number of channels (#19150)

An even number of channels triggers data reordering before the batch
norm operation. Therefore, if the BatchNorm data array is a view of
another array and the data is stored in MKLDNN format, the data
needs to be converted to the default format before the operation.
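
For illustration, a minimal Python sketch of the affected scenario: running BatchNorm on a view of an array with an even channel count. The shapes and values are illustrative assumptions, not the exact regression case; whether the intermediate is actually held in MKLDNN format depends on the build and the preceding operators.

```python
import mxnet as mx

# An even channel count (here 8) is the case that triggers MKLDNN reordering.
data = mx.nd.random.uniform(shape=(4, 8, 7, 7))
view = data[1:]  # first-axis slice: a view that shares memory with `data`

gamma = mx.nd.ones((8,))
beta = mx.nd.zeros((8,))
moving_mean = mx.nd.zeros((8,))
moving_var = mx.nd.ones((8,))

# Before the fix, BatchNorm reading an MKLDNN-formatted view could use the
# wrong layout; the fix reorders such inputs to the default format first.
out = mx.nd.BatchNorm(view, gamma, beta, moving_mean, moving_var,
                      fix_gamma=False)
print(out.shape)  # (3, 8, 7, 7)
```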

* Add/update tests to verify BatchNorm with odd & even numbers of channels (see the test sketch after this list)

* Fix BatchNorm context handling for odd & even channel counts
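
A hedged sketch of what such a test might look like: comparing BatchNorm output for odd and even channel counts against a NumPy reference on the view code path. The helper name, tolerances, and shapes are assumptions for illustration, not the exact test added in the PR.

```python
import mxnet as mx
import numpy as np

def ref_batchnorm(x, gamma, beta, mean, var, eps=1e-5):
    # NumPy reference: normalize over the channel axis (axis 1).
    c = (1, -1, 1, 1)
    x_hat = (x - mean.reshape(c)) / np.sqrt(var.reshape(c) + eps)
    return gamma.reshape(c) * x_hat + beta.reshape(c)

for channels in (3, 8):  # odd and even channel counts
    x = mx.nd.random.uniform(shape=(2, channels, 5, 5))
    view = x[1:]  # exercise the view code path
    gamma = mx.nd.ones((channels,))
    beta = mx.nd.zeros((channels,))
    mean = mx.nd.zeros((channels,))
    var = mx.nd.ones((channels,))
    out = mx.nd.BatchNorm(view, gamma, beta, mean, var, eps=1e-5,
                          fix_gamma=False, use_global_stats=True)
    expected = ref_batchnorm(view.asnumpy(), gamma.asnumpy(), beta.asnumpy(),
                             mean.asnumpy(), var.asnumpy())
    np.testing.assert_allclose(out.asnumpy(), expected, rtol=1e-4, atol=1e-4)
```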
README.md

Apache MXNet (incubating) for Deep Learning


Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
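
For example, the Gluon API lets the same model run imperatively for easy debugging and then be compiled to a symbolic graph with a single call. A minimal sketch (layer sizes are arbitrary):

```python
from mxnet import nd
from mxnet.gluon import nn

# Define a network imperatively with Gluon.
net = nn.HybridSequential()
net.add(nn.Dense(128, activation='relu'),
        nn.Dense(10))
net.initialize()

x = nd.random.uniform(shape=(4, 784))
print(net(x).shape)  # imperative execution: easy to debug step by step

net.hybridize()      # compile the same network into a symbolic graph
print(net(x).shape)  # same call, now runs through the graph optimizer
```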

MXNet is more than a deep learning project. It is a collection of blueprints and guidelines for building deep learning systems, and interesting insights into DL systems for hackers.

Ask Questions

How to Contribute

What's New

Contents

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graphs
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory-efficient, and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, Scala, C++, Java, Clojure, R, Go, JavaScript, Perl, MATLAB, and Julia
  • Cloud-friendly and directly compatible with AWS S3, AWS Deep Learning AMI, AWS SageMaker, HDFS, and Azure

License

Licensed under an Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.

History

MXNet emerged from a collaboration by the authors of cxxnet, minerva, and purine2, and reflects what we learned from those earlier projects. MXNet combines aspects of each of them to achieve flexibility, speed, and memory efficiency.