commit af334de3f20df2fca8ebe93ac585a5f87332187b
author: Won Wook SONG <wsong0512@gmail.com>, Thu Sep 01 12:42:46 2022 +0900
committer: Won Wook SONG <wsong0512@gmail.com>, Thu Sep 01 12:42:46 2022 +0900
message: javadoc
A Data Processing System for Flexible Employment With Different Deployment Characteristics.
Details about Nemo and its development can be found in:
Please refer to the Contribution guideline to contribute to our project.
Run `$ ./bin/install_nemo.sh` in the Nemo home directory. This script includes the actions described below.
```bash
export HADOOP_HOME=/path/to/hadoop-2.7.2
export YARN_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/bin
```
On Ubuntu 14.04 LTS and its point releases:

```bash
$ sudo apt-get install protobuf-compiler
```

On Ubuntu 16.04 LTS and its point releases:

```bash
$ sudo add-apt-repository ppa:snuspl/protobuf-250
$ sudo apt update
$ sudo apt install protobuf-compiler=2.5.0-9xenial1
```

On macOS:

```bash
$ wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.bz2
$ tar xvf protobuf-2.5.0.tar.bz2
$ pushd protobuf-2.5.0
$ ./configure CC=clang CXX=clang++ CXXFLAGS='-std=c++11 -stdlib=libc++ -O3 -g' LDFLAGS='-stdlib=libc++' LIBS="-lc++ -lc++abi"
$ make -j 4
$ sudo make install
$ popd
```
Or build from source:

```bash
$ ./configure
$ make
$ make check
$ sudo make install
```

To verify that version 2.5.0 was installed successfully, run `protoc --version`.
```bash
$ mvn clean install -T 2C
```

or, to skip the integration tests:

```bash
$ mvn clean install -DskipITs -T 2C
```
Apache Nemo is an official runner of Apache Beam; it can be executed from Beam using NemoRunner, as well as directly from the Nemo project. Details on using NemoRunner from Beam are described on the NemoRunner page of the Apache Beam website. The following describes how Beam applications can be run directly on Nemo.
- `-job_id`: ID of the Beam job
- `-user_main`: Canonical name of the Beam application
- `-user_args`: Arguments that the Beam application accepts
- `-optimization_policy`: Canonical name of the optimization policy to apply to a job DAG in Nemo Compiler
- `-deploy_mode`: `yarn` is supported (the default value is `local`)

## WordCount example from the Beam website (Count words from a document)

```bash
$ ./bin/run_beam.sh \
    -job_id beam_wordcount \
    -optimization_policy org.apache.nemo.compiler.optimizer.policy.DefaultPolicy \
    -user_main org.apache.nemo.examples.beam.BeamWordCount \
    -user_args "--runner=NemoRunner --inputFile=`pwd`/examples/resources/inputs/test_input_wordcount --output=`pwd`/outputs/wordcount"
$ less `pwd`/outputs/wordcount*
```

## MapReduce WordCount example (Count words from the Wikipedia dataset)

```bash
$ ./bin/run_beam.sh \
    -job_id mr_default \
    -executor_json `pwd`/examples/resources/executors/beam_test_executor_resources.json \
    -optimization_policy org.apache.nemo.compiler.optimizer.policy.DefaultPolicy \
    -user_main org.apache.nemo.examples.beam.WordCount \
    -user_args "`pwd`/examples/resources/inputs/test_input_wordcount `pwd`/outputs/wordcount"
$ less `pwd`/outputs/wordcount*
```

## YARN cluster example

```bash
$ ./bin/run_beam.sh \
    -deploy_mode yarn \
    -job_id mr_transient \
    -executor_json `pwd`/examples/resources/executors/beam_test_executor_resources.json \
    -user_main org.apache.nemo.examples.beam.WordCount \
    -optimization_policy org.apache.nemo.compiler.optimizer.policy.TransientResourcePolicy \
    -user_args "hdfs://v-m:9000/test_input_wordcount hdfs://v-m:9000/test_output_wordcount"
```

## NEXMark streaming Q0 (query0) example

```bash
$ ./bin/run_nexmark.sh \
    -job_id nexmark-Q0 \
    -executor_json `pwd`/examples/resources/executors/beam_test_executor_resources.json \
    -user_main org.apache.beam.sdk.nexmark.Main \
    -optimization_policy org.apache.nemo.compiler.optimizer.policy.StreamingPolicy \
    -scheduler_impl_class_name org.apache.nemo.runtime.master.scheduler.StreamingScheduler \
    -user_args "--runner=NemoRunner --streaming=true --query=0 --numEventGenerators=1 --manageResources=false --monitorJobs=false"
```
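For scripted launches, a flag list like the ones above can be assembled programmatically before being handed to a shell. A minimal Python sketch (the `build_run_beam_command` helper is ours, not part of Nemo; only the flag names come from the documentation above):

```python
# Sketch: assemble a ./bin/run_beam.sh invocation from a dict of options.
# The helper name and the option values below are illustrative only.

def build_run_beam_command(options):
    """Turn {"job_id": "x", ...} into ["./bin/run_beam.sh", "-job_id", "x", ...]."""
    cmd = ["./bin/run_beam.sh"]
    for flag, value in options.items():
        cmd.append("-" + flag)   # e.g. "job_id" -> "-job_id"
        cmd.append(str(value))
    return cmd

cmd = build_run_beam_command({
    "job_id": "beam_wordcount",
    "optimization_policy": "org.apache.nemo.compiler.optimizer.policy.DefaultPolicy",
    "user_main": "org.apache.nemo.examples.beam.BeamWordCount",
})
print(" ".join(cmd))
```

Such a list can then be passed to `subprocess.run(cmd)` without worrying about manual backslash continuations.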
The `-executor_json` command line option can be used to provide a path to a JSON file that describes the resource configuration for executors. Its default value is `config/default.json`, which initializes one each of the `Transient`, `Reserved`, and `Compute` executors, each of which has one core and 1024MB of memory.
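Based on that description, an equivalent of `config/default.json` might look like the following (a sketch inferred from the text above, not the actual shipped file):

```json
[
  { "type": "Transient", "memory_mb": 1024, "capacity": 1 },
  { "type": "Reserved",  "memory_mb": 1024, "capacity": 1 },
  { "type": "Compute",   "memory_mb": 1024, "capacity": 1 }
]
```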
- `num` (optional): Number of containers. The default value is 1.
- `type`: Three container types are supported:
  - `Transient`: Containers that store eviction-prone resources. When batch jobs use idle resources in `Transient` containers, they can be arbitrarily evicted when latency-critical jobs attempt to use the resources.
  - `Reserved`: Containers that store eviction-free resources. `Reserved` containers are used to reliably store intermediate data which have a high eviction cost.
  - `Compute`: Containers that are mainly used for computation.
- `memory_mb`: Memory size in MB.
- `capacity`: Number of `Task`s that can be run in an executor. Set this value to be the same as the number of CPU cores of the container.

```json
[
  {
    "num": 12,
    "type": "Transient",
    "memory_mb": 1024,
    "capacity": 4
  },
  {
    "type": "Reserved",
    "memory_mb": 1024,
    "capacity": 2
  }
]
```

This example configuration specifies 12 `Transient` containers (4 cores and 1024MB of memory each) and one `Reserved` container (2 cores and 1024MB of memory).
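As a sanity check, an executor configuration like this can be parsed to tally the total requested resources. A small Python sketch (the `tally_executors` function is ours; it only relies on the `num` default of 1 and the fields described above):

```python
import json

def tally_executors(config_json):
    """Sum containers, cores (capacity), and memory over an executor config."""
    totals = {"containers": 0, "cores": 0, "memory_mb": 0}
    for entry in json.loads(config_json):
        num = entry.get("num", 1)  # "num" is optional; default is 1
        totals["containers"] += num
        totals["cores"] += num * entry["capacity"]
        totals["memory_mb"] += num * entry["memory_mb"]
    return totals

config = '''[
  { "num": 12, "type": "Transient", "memory_mb": 1024, "capacity": 4 },
  { "type": "Reserved", "memory_mb": 1024, "capacity": 2 }
]'''
print(tally_executors(config))  # 13 containers, 50 cores, 13312 MB in total
```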
Please refer to the instructions in `web-ui/README.md` to run the frontend.
While the Nemo driver is alive, it can post runtime metrics through a websocket. At your frontend, add the websocket endpoint `ws://<DRIVER>:10101/api/websocket`, where `<DRIVER>` is the hostname of the machine the Nemo driver runs on.

Alternatively, you can run the Web UI directly on the driver using `bin/run_webserver.sh`, which looks for the websocket on its local machine and, by default, serves the UI at `http://<DRIVER>:3333`.
On job completion, the Nemo driver creates `metric.json` in the directory specified by the `-dag_dir` option. At your frontend, load the JSON file to do post-job analysis.
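For ad-hoc inspection outside the frontend, the dump can also be loaded directly. A hedged Python sketch (the `load_metrics` helper is ours, and the exact schema of `metric.json` is not specified here, so the code only parses the file and leaves interpretation to the caller):

```python
import json

def load_metrics(path):
    """Load a Nemo metric.json dump and return the parsed JSON object."""
    with open(path) as f:
        return json.load(f)

# Usage (assuming the job was launched with -dag_dir "./dag/als"):
# metrics = load_metrics("./dag/als/metric.json")
# print(sorted(metrics))  # inspect the top-level metric keys
```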
Other JSON files are for the legacy Web UI, hosted here. It uses Graphviz to visualize IR DAGs and execution plans.
```bash
$ ./bin/run_beam.sh \
    -job_id als \
    -executor_json `pwd`/examples/resources/executors/beam_test_executor_resources.json \
    -user_main org.apache.nemo.examples.beam.AlternatingLeastSquare \
    -optimization_policy org.apache.nemo.compiler.optimizer.policy.TransientResourcePolicy \
    -dag_dir "./dag/als" \
    -user_args "`pwd`/examples/resources/inputs/test_input_als 10 3"
```
- `-db_enabled`: Whether or not to turn on the DB (`true` or `false`).
- `-db_address`: Address of the DB (e.g., a PostgreSQL DB address starts with `jdbc:postgresql://...`).
- `-db_id`: ID of the DB at the given address.
- `-db_password`: Credentials for the DB at the given address.
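To illustrate the `-db_address` format, a small Python sketch (the `looks_like_jdbc_postgres` helper and the host/port/database names are ours, used only as examples; the only fact taken from the text above is the `jdbc:postgresql://` prefix):

```python
def looks_like_jdbc_postgres(address):
    """Check that a DB address uses the JDBC PostgreSQL scheme,
    e.g. jdbc:postgresql://localhost:5432/nemo (host and db are illustrative)."""
    return address.startswith("jdbc:postgresql://")

print(looks_like_jdbc_postgres("jdbc:postgresql://localhost:5432/nemo_metrics"))  # True
print(looks_like_jdbc_postgres("postgresql://localhost:5432/nemo_metrics"))       # False
```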