commit    bc1b6e12376d3b9d484cf7d49f5918207599fedb
author    Domino Valdano <dvaldano@vmware.com>  Wed Sep 30 19:47:51 2020 -0700
committer Domino Valdano <dvaldano@vmware.com>  Tue Jan 19 16:18:22 2021 -0800
tree      6ebe4f498481b31d4de0bba9cbb97fb6b9251163
parent    da9fce524be12d8a4a68e7314bb7471449ff957c
DL: Major Refactor of Model Hopper

JIRA: MADLIB-1428

- Use only 2 temporary tables (model_input_tbl & model_output_tbl) for moving the model weights around during hopping and training, instead of 3 (mst_weights_tbl, weights_to_update_tbl, and model_output_table). This eliminates the UPDATE step, leaving only HOP and UDF steps.
- Add a dist_key column to the model_output table and DISTRIBUTE BY this instead of mst_key. This removes the Redistribute Motion from the UDF query plan, so that weights only ever move during the hop query, not during the training query.
- Simplified schedule rotation: the schedule table is created only once, then rotated in place on the segments, instead of being re-created many times by transferring data from master to segments and back on each hop. We no longer need separate "current_schedule" and "grand_schedule" data structures.
- Skip the first hop of each iteration (just rename model_output to model_input instead).
- Split get_model_arch_and_weights() into query_weights() and get_model_arch(), so we don't have to transfer weights from segment to master in places where we only need the model_arch JSON.
- Much faster initialization code: previously, we read the weights in from the original model output table (during warm start) and the model arch table (for transfer learning) one mst row at a time from segment to master, then wrote each back out one row at a time from master to segments with a large number of SELECT and INSERT queries. Now a single query copies the weights directly from the original model output table into the new model output table on the segments, without ever sending them to master, and a similar single query copies the transfer-learning weights directly from model_arch to model_output for training. Both of these happen in parallel on the segments, instead of in sequence on master. During testing on a 20-segment cluster with 20 models, this resulted in a 10x reduction in initialization time (26s instead of 5 mins).
- Add some debugging that can be enabled to help profile the performance of fit multiple, and track which segment each mst_key is located on during each hop. This also serves as an example for the utils/debug PR this is rebased on top of.
- Add "unit" tests for the fit multiple model hopping code (implemented as dev-check tests so they can access the db).
- Send Traceback of the stack from segment back to coordinator.
- Cache plans for the HOP & UDF queries.
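The simplified schedule rotation described above can be illustrated with a small sketch. This is a hypothetical model, not MADlib's actual implementation: names and the specific rotation rule are illustrative. The idea is that each segment holds one mst_key per hop, rotating the schedule in place moves every model to the next segment, and the first hop of an iteration needs no data movement (the output table is simply renamed to the input table).

```python
# Hypothetical sketch of the rotate-in-place schedule; function names
# and the rotation rule are illustrative assumptions, not MADlib code.

def rotate_schedule(schedule):
    """Rotate the segment -> mst_key schedule by one hop."""
    # Move the last entry to the front: one simple rotation rule.
    return [schedule[-1]] + schedule[:-1]

def run_iteration(schedule):
    """Run one training iteration; return the final schedule and a hop log."""
    hops = []
    for hop in range(len(schedule)):
        if hop == 0:
            # First hop of each iteration: just rename
            # model_output -> model_input, no rotation needed.
            hops.append(("rename", list(schedule)))
        else:
            schedule = rotate_schedule(schedule)
            hops.append(("hop", list(schedule)))
    return schedule, hops

schedule = [1, 2, 3, 4]  # mst_keys, one per segment
final, hops = run_iteration(schedule)
# Over the 4 hops, every mst_key visits every segment position once,
# so each model is trained on each segment's data exactly once.
```

After n_segments hops the rotation has cycled every model through every segment, which is why the schedule table never needs to round-trip through the master.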
MADlib® is an open-source library for scalable in-database analytics. It provides data-parallel implementations of mathematical, statistical and machine learning methods for structured and unstructured data.
See the project website MADlib Home for links to the latest binary and source packages.
We appreciate all forms of project contributions to MADlib, including bug reports, help for new users, documentation, and code patches. Please refer to the Contribution Guidelines for instructions.
For more installation and contribution guides, please refer to the MADlib Wiki.
Details on compiling from source on Linux are also on the wiki.
We provide a Docker image with necessary dependencies required to compile and test MADlib on PostgreSQL 10.5. You can view the dependency Docker file at ./tool/docker/base/Dockerfile_ubuntu16_postgres10. The image is hosted on Docker Hub at madlib/postgres_10:latest. Later we will provide a similar Docker image for Greenplum Database.
We provide a script to quickly run this Docker image at ./tool/docker_start.sh, which will mount your local madlib directory, build MADlib, and run install check in this Docker image. At the end, it will docker exec into the container as the postgres user. Note that you have to run this script from inside your madlib directory, and you can specify your Docker CONTAINER_NAME (default is madlib) and IMAGE_TAG (default is latest). Here is an example:
CONTAINER_NAME=my_madlib IMAGE_TAG=LaTex ./tool/docker_start.sh
Notice that this script only needs to be run once. After that, you will have a local Docker container named CONTAINER_NAME running. To get access to the container and keep working in it, run:
docker exec -it CONTAINER_NAME bash
To kill this docker container, run:
docker kill CONTAINER_NAME
docker rm CONTAINER_NAME
You can also manually run those commands to do the same thing:
## 1) Pull down the `madlib/postgres_10:latest` image from docker hub:
docker pull madlib/postgres_10:latest

## 2) Launch a container corresponding to the MADlib image, name it
##    madlib, mounting the source code folder to the container:
docker run -d -it --name madlib \
  -v (path to madlib directory):/madlib/ madlib/postgres_10
# where madlib is the directory where the MADlib source code resides.

################################# * WARNING * #################################
# Please be aware that when mounting a volume as shown above, any changes you
# make in the "madlib" folder inside the Docker container will be
# reflected on your local disk (and vice versa). This means that deleting data
# in the mounted volume from a Docker container will delete the data from your
# local disk also.
###############################################################################

## 3) When the container is up, connect to it and build MADlib:
docker exec -it madlib bash
mkdir /madlib/build_docker
cd /madlib/build_docker
cmake ..
make
make doc
make install

## 4) Install MADlib:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install

## 5) Several other commands can now be run, such as:
# Run install check, on all modules:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install-check
# Run install check, on a specific module, say svm:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres install-check -t svm
# Reinstall MADlib:
src/bin/madpack -p postgres -c postgres/postgres@localhost:5432/postgres reinstall

## 6) Kill and remove containers (after exiting the container):
docker kill madlib
docker rm madlib
Instructions for building the design PDF on Docker:

For users who want to build the design PDF, make sure you use the IMAGE_TAG=LaTex parameter when running the script. After launching your Docker container, run the following to get design.pdf:

cd /madlib/build_docker
make design_pdf
cd doc/design
Detailed build instructions are available in ReadMe_Build.txt
The latest documentation of MADlib modules can be found at MADlib Docs.
The following block-diagram gives a high-level overview of MADlib's architecture.
MADlib incorporates software from the following third-party components.

Bundled with source code:
- libstemmer (“small string processing language”)
- m_widen_init (“allows compilation with recent versions of gcc with runtime dependencies from earlier versions of libstdc++”)
- argparse 1.2.1 (“provides an easy, declarative interface for creating command line tools”)
- PyYAML 3.10 (“YAML parser and emitter for Python”)
- UseLATEX.cmake (“CMAKE commands to use the LaTeX compiler”)

Downloaded at build time (or supplied as build dependencies):
- Boost 1.61.0 (or newer) (“provides peer-reviewed portable C++ source libraries”)
- PyXB 1.2.6 (“Python library for XML Schema Bindings”)
- Eigen 3.2.2 (“C++ template library for linear algebra”)

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this project to You under the Apache License, Version 2.0 (the “License”); you may not use this project except in compliance with the License. You may obtain a copy of the License at LICENSE.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
As specified in LICENSE, additional license information regarding included third-party libraries can be found inside the licenses directory.
Changes between MADlib versions are described in the ReleaseNotes.txt file.
MAD Skills : New Analysis Practices for Big Data (VLDB 2009)
Hybrid In-Database Inference for Declarative Information Extraction (SIGMOD 2011)
Towards a Unified Architecture for In-Database Analytics (SIGMOD 2012)
The MADlib Analytics Library or MAD Skills, the SQL (VLDB 2012)