Spark fair scheduling, asynchronous jobs in the API (jobs/), multiple API endpoints (...Spark for legacy, or algorithm/...)

Create a handler manager to support multiple endpoints for the same algorithm
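A minimal sketch of what such a handler manager might look like: one registry that maps several endpoint paths (a legacy Spark path and a new algorithm/ path) to a single algorithm handler. All class, method, and path names below are illustrative assumptions, not the project's actual API.

```python
# Hypothetical handler manager: several endpoint paths resolve to
# the same algorithm handler class. Names are illustrative only.

class HandlerManager:
    def __init__(self):
        self._handlers = {}

    def register(self, handler_cls, *paths):
        # One handler class may be reachable under multiple paths,
        # e.g. a legacy Spark endpoint and a newer algorithm/ endpoint.
        for path in paths:
            self._handlers[path] = handler_cls

    def resolve(self, path):
        return self._handlers[path]


class TimeSeriesHandler:
    def handle(self, params):
        return {"algorithm": "timeSeries", "params": params}


manager = HandlerManager()
manager.register(TimeSeriesHandler, "/timeSeriesSpark", "/algorithm/timeSeries")
```

With this shape, adding an endpoint alias is a one-line `register` call rather than a new handler class per path.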

Implement a demo asynchronous mode in the REST API
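The asynchronous mode can be sketched as a submit-then-poll pattern: a submit call returns a job id immediately, the work runs in the background, and the client polls a jobs/ lookup for status. The in-memory job store and function names below are assumptions for illustration, not the service's actual implementation.

```python
# Hypothetical submit-then-poll job pattern. A real service would use
# durable storage and an HTTP layer; this sketch uses a dict and a thread.
import threading
import time
import uuid

jobs = {}  # job_id -> {"status": ..., "result": ...}


def submit(algorithm, params):
    """Start `algorithm(params)` in the background; return a job id at once."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "running", "result": None}

    def run():
        result = algorithm(params)  # long-running work happens here
        jobs[job_id] = {"status": "done", "result": result}

    threading.Thread(target=run).start()
    return job_id


def status(job_id):
    return jobs[job_id]["status"]


job = submit(lambda p: sum(p), [1, 2, 3])
while status(job) != "done":  # client-side polling loop
    time.sleep(0.01)
```

The key property is that `submit` returns before the work finishes, so the API can hand back a job id for the `jobs/` endpoint instead of blocking the request.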

Remove the pydataclasses dependency
17 files changed
tree: 5cf274fca85d55ae5c87ac7241fd10d09d578e8b
  .gitignore
  analysis/
  climatology/
  data-access/
  docker/
  docs/
  helm/
  integrations/
  nexus-ingest/
  tools/


The next-generation cloud-based science data service platform. More information can be found in the project documentation.

Building the Docs

Ensure sphinx, sphinx-autobuild, and recommonmark are installed. We use recommonmark to parse Markdown files.

pip install sphinx sphinx-autobuild recommonmark

Run sphinx-autobuild to view the docs locally.

cd docs
sphinx-autobuild . _build/html