Merge branch 'CLIMATE-825'
diff --git a/.travis.yml b/.travis.yml
index dca9987..d8acac9 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -24,6 +24,7 @@
- source easy-ocw/install-ubuntu.sh
- pip install python-coveralls
script:
- - nosetests --with-coverage --cover-package=ocw --nocapture -v
+ - chmod a+x test.sh
+ - ./test.sh
after_script:
- coveralls
diff --git a/CHANGES.txt b/CHANGES.txt
index a6d86e1..0f4aab2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,173 @@
Apache Open Climate Workbench Change Log
============================================
+Release Notes - Apache Open Climate Workbench - Version 1.1.0
+Release Report - https://s.apache.org/110report
+
+Sub-task
+
+ [CLIMATE-709] - Drop Freetype from conda environment file
+ [CLIMATE-712] - Update VM init script
+ [CLIMATE-713] - Update VM wiki page with new instructions
+ [CLIMATE-757] - Fix ERROR: test suite for <class 'ocw.tests.test_dap.TestDap'>
+ [CLIMATE-758] - Fix failing tests in test_local.py
+ [CLIMATE-759] - Fix failing tests in test_dataset_processor.py
+ [CLIMATE-795] - Add tests for dataset_processor module
+ [CLIMATE-801] - Add tests for dataset module
+ [CLIMATE-802] - Add tests for evaluation module
+ [CLIMATE-805] - Add tests for utils module
+ [CLIMATE-806] - Add tests for local module
+ [CLIMATE-808] - Add tests for plotter module
+ [CLIMATE-812] - Fix PEP8 Violations in dataset processor
+ [CLIMATE-813] - Fix PEP8 Violations in utils
+ [CLIMATE-814] - Fix PEP8 Violations in dataset
+
+Bug
+
+ [CLIMATE-407] - Dataset select can queue empty RCMED datasets
+ [CLIMATE-517] - Start and End day values overlap in dataset display
+ [CLIMATE-556] - easy_install scripts should create a new virtualenv e.g. -e flag, by default
+ [CLIMATE-557] - Investigate dubious GEOS installation output
+ [CLIMATE-560] - Incorrect assumption for installation directory within easy-ocw/install-osx.sh
+  [CLIMATE-561] - easy-ocw/install-osx.sh script should not assume relative locations for dependency utility files
+ [CLIMATE-562] - Make nightly documentation SNAPSHOT's available for public consumption
+ [CLIMATE-616] - missing dependencies
+ [CLIMATE-634] - fix calc_climatology_monthly
+ [CLIMATE-642] - Improper unit set in water_flux_unit_conversion
+ [CLIMATE-668] - ocw.dataset_spatial_resolution needs to be fixed
+ [CLIMATE-669] - OCW spatial_boundary bug
+ [CLIMATE-671] - Inappropriate spatial subset for datasets on curvilinear grids
+ [CLIMATE-698] - Handling missing values in ocw.dataset_processor.temporal_rebin_with_time_index
+ [CLIMATE-707] - Rework Easy OCW scripts to only use conda
+ [CLIMATE-710] - OCW Package Tests are Broken
+ [CLIMATE-711] - conda-install won't source .bashrc properly
+ [CLIMATE-716] - Dataset object manipulation when plotting a Taylor diagram using a configuration file
+ [CLIMATE-718] - Temporal slice of two-dimensional model output
+  [CLIMATE-719] - Subsetting model data on curvilinear grids in the configuration file runner
+ [CLIMATE-722] - Extrapolation issues in spatial_regrid
+ [CLIMATE-724] - Debugging load_dataset_from_multiple_netcdf_files
+ [CLIMATE-725] - Ensure that OCW 1.1 Test PyPi Works as Expected
+ [CLIMATE-737] - Debugging dataset_processor.temporal_rebin
+ [CLIMATE-738] - Google jsapi Uses HTTP Rather Than HTTPS
+ [CLIMATE-739] - main.html refers to missing img/globe.png image
+ [CLIMATE-740] - Add img Path Handler To run_webservices.py
+ [CLIMATE-742] - ocw.data_source.local.py cannot properly detect the altitude dimension
+ [CLIMATE-743] - Update utils.normalize_lat_lon_values
+ [CLIMATE-745] - Report an Issue Link and @copyright year are incorrect in ocw-ui
+ [CLIMATE-748] - Debugging ocw.dataset_processor
+ [CLIMATE-749] - Changing temporal_resolution key in CLI
+ [CLIMATE-750] - $scope.fileLoadFailed Compares On Boolean Rather Than String
+ [CLIMATE-752] - Converting the unit of CRU cloud fraction by adding an option to configuration files
+ [CLIMATE-761] - Error in easy-ocw install-ubuntu Basemap stage
+ [CLIMATE-764] - Screen size of CLI
+ [CLIMATE-771] - Critical bugs in LAT_NAMES and LON_NAMES in local.py
+ [CLIMATE-781] - Fix the ESGF example in run_RCMES.py
+ [CLIMATE-786] - Update rcmed.py and test_rcmed.py
+ [CLIMATE-799] - TypeError in subset function
+ [CLIMATE-800] - TypeError in _rcmes_calc_average_on_new_time_unit
+ [CLIMATE-804] - normalize_lat_lon_values does not work
+ [CLIMATE-807] - Add test coverage badge to README
+ [CLIMATE-809] - Fix coveragerc file
+ [CLIMATE-818] - local.load_dataset_from_multiple_netcdf_files() does not accept user entered lon_name and lat_name fields.
+ [CLIMATE-822] - ValueError in RCMES test
+
+Improvement
+
+ [CLIMATE-379] - Allow dataset name customization in UI
+ [CLIMATE-409] - Implement a language sensitive map (I18n) for WebApp
+ [CLIMATE-421] - Add a download page for OCW
+ [CLIMATE-539] - Get OCW on to PyPI
+ [CLIMATE-569] - Updating rcmes.py using the latest OCW library
+ [CLIMATE-572] - Address deprecation and WARN's in ocw-ui/frontend npm install
+ [CLIMATE-573] - Remove sudo requirement to install virtualenv within install-ubuntu.sh
+ [CLIMATE-617] - Documentation Audit
+ [CLIMATE-632] - Adding a loader to handle multiple MERRA reanalysis HDF files stored on a local disk
+ [CLIMATE-635] - Add documentation to dev guide regarding test running
+ [CLIMATE-652] - Calculation of area weighted spatial average
+ [CLIMATE-653] - Netcdf file generator with subregion information
+ [CLIMATE-657] - Adding functions to calculate metrics
+ [CLIMATE-658] - Restructure evaluation results
+ [CLIMATE-666] - Replace examples with the RCMES script and yaml files
+ [CLIMATE-672] - Update the input validation function for OCW datasets
+ [CLIMATE-673] - Update the module to load multiple netcdf files
+ [CLIMATE-674] - Update the spatial_regrid module to handle data on curvilinear grids or irregularly spaced grids
+ [CLIMATE-676] - Cleaning up the examples
+ [CLIMATE-678] - Provide link to Python API documentation
+ [CLIMATE-680] - A new loader to read WRF precipitation data with a file list
+ [CLIMATE-681] - Update the loader to read WRF data (other than precipitation)
+ [CLIMATE-684] - Update README with Python API docs
+ [CLIMATE-700] - Complete examples to reproduce a RCMES-based paper
+ [CLIMATE-701] - Examples for evaluation of CORDEX-Arctic RCMs
+ [CLIMATE-702] - Print Jenkins test result to Github issue
+ [CLIMATE-703] - Remove pop-up windows from metrics_and_plots.py
+ [CLIMATE-704] - Sensitivity of spatial boundary check in dataset_processor
+ [CLIMATE-708] - Switch VM build over to conda environment approach
+ [CLIMATE-714] - Updating the regridding routine
+ [CLIMATE-720] - Revise file structure
+ [CLIMATE-723] - Update subset module for regional climate model output
+ [CLIMATE-726] - Update configuration files
+ [CLIMATE-727] - Ensure that ocwui package.json version is updated in line with releases
+ [CLIMATE-728] - Address WARN's when building ocwui
+ [CLIMATE-729] - Remove config file from NARCCAP examples
+ [CLIMATE-730] - Add OCW logo to ocw-ui header navigation panel
+ [CLIMATE-731] - Update ocw.dataset_processor.temperature_unit_conversion
+ [CLIMATE-734] - Adjust size of the color bars in the map plot of biases
+ [CLIMATE-735] - Update utils.decode_time_values
+ [CLIMATE-736] - Update dataset_processor.write_netcdf_multiple_datasets_with_subregions
+ [CLIMATE-741] - Adding configuration files to evaluate CORDEX-Africa regional climate models
+ [CLIMATE-754] - RCMED dataset parameters need to be more verbose
+ [CLIMATE-760] - Address documentation warnings
+ [CLIMATE-766] - Easy-ocw/install-ubuntu.sh script is broken
+ [CLIMATE-770] - Make boundary checking optional in spatial_regrid
+  [CLIMATE-777] - cli_app shows a list of models
+ [CLIMATE-778] - Cosmetic updates for the cli_app
+ [CLIMATE-779] - Add ESGF Integration into run_RCMES.py
+ [CLIMATE-780] - Add Travis-CI build status to README.md
+ [CLIMATE-783] - Update ESGF examples
+ [CLIMATE-811] - Add landscape.io integration
+ [CLIMATE-815] - Fix all PEP8 Violations in ocw module
+ [CLIMATE-816] - Add requires.io badge to README.md
+ [CLIMATE-817] - More informative error messages for data_source.load_file()
+ [CLIMATE-820] - Update pip requirements
+ [CLIMATE-821] - write_netcdf() assumes lat and lon are 1D arrays
+
+New Feature
+
+ [CLIMATE-246] - Develop PoweredBy Logo for OCW
+ [CLIMATE-367] - Add more 'new contributor' information
+ [CLIMATE-677] - Homebrew Formula for OCW
+ [CLIMATE-683] - A new loader to read multiple netCDF files with a file list and spatial mask
+ [CLIMATE-687] - A new loader to read GPM precipitation data with a file list
+ [CLIMATE-692] - A new loader to read NLDAS data with a file list
+ [CLIMATE-694] - A new module to rebin a dataset using time index
+ [CLIMATE-696] - Examples to evaluate CORDEX-ARCTIC RCMs
+  [CLIMATE-715] - Add a demo tab alongside the Evaluate and Results tabs so users can see an OCW demo
+ [CLIMATE-732] - Update dataset_processor.temporal_rebin
+ [CLIMATE-733] - Update run_RCMES.py
+ [CLIMATE-747] - Adding configuration files as an example of NASA's downscaling project
+ [CLIMATE-829] - Add conda package recipes
+
+Story
+
+ [CLIMATE-418] - Remove hard links to mailing lists
+
+Task
+
+ [CLIMATE-611] - SSL certificate verify error
+ [CLIMATE-659] - Remove SpatialMeanOfTemporalMeanBias
+ [CLIMATE-695] - Adding h5py library
+ [CLIMATE-782] - Resolve BeautifulSoup warnings in esgf data_source and add myproxyclient to easy-ocw install
+ [CLIMATE-831] - Add License Headers to conda recipes
+
+Test
+
+ [CLIMATE-679] - Statistical downscaling examples
+
+Wish
+
+ [CLIMATE-690] - Data Sources Class for NSIDC's Arctic Data Explorer Platform
+ [CLIMATE-691] - Provide link to RCMED Query Service Documentation from within RCMED data source Python docs
+
Release Notes - Apache Open Climate Workbench - Version 1.0.0
** Sub-task
diff --git a/RCMES/cli_app.py b/RCMES/cli_app.py
index be46de8..9894a35 100644
--- a/RCMES/cli_app.py
+++ b/RCMES/cli_app.py
@@ -457,8 +457,8 @@
'database':"{0}".format(netCDF_path),
'dataset_id':"esgf".format(esgf_variable),
'parameter_id':"{0}".format(esgf_variable),
- 'start_date': obs_dataset.time_range()[0].strftime("%Y-%m-%d"),
- 'end_date':obs_dataset.time_range()[1].strftime("%Y-%m-%d"),
+ 'start_date': obs_dataset.temporal_boundaries()[0].strftime("%Y-%m-%d"),
+ 'end_date':obs_dataset.temporal_boundaries()[1].strftime("%Y-%m-%d"),
#'bounding_box':obs['bounding_box'],
'timestep':"monthly",
'min_lat':obs_dataset.spatial_boundaries()[0],
@@ -646,7 +646,8 @@
if each_target_dataset.lats.ndim !=2 and each_target_dataset.lons.ndim !=2:
new_model_datasets[member] = dsp.subset(EVAL_BOUNDS, new_model_datasets[member])
else:
- new_model_datasets[member] = dsp.temporal_slice(EVAL_BOUNDS.start, EVAL_BOUNDS.end, each_target_dataset)
+ new_model_datasets[member] = dsp.temporal_slice(
+ each_target_dataset, EVAL_BOUNDS.start, EVAL_BOUNDS.end)
screen.addstr(5, 4, "--> Temporally regridded.")
screen.refresh()
@@ -798,8 +799,8 @@
models_start_time = []
models_end_time = []
for model in model_datasets:
- models_start_time.append(model.time_range()[0])
- models_end_time.append(model.time_range()[1])
+ models_start_time.append(model.temporal_boundaries()[0])
+ models_end_time.append(model.temporal_boundaries()[1])
return models_start_time, models_end_time
diff --git a/RCMES/run_RCMES.py b/RCMES/run_RCMES.py
index ed48458..cd69bc4 100644
--- a/RCMES/run_RCMES.py
+++ b/RCMES/run_RCMES.py
@@ -146,11 +146,11 @@
max_lon = np.min([max_lon, ref_dataset.lons.max()])
bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
-ref_dataset = dsp.subset(bounds,ref_dataset)
+ref_dataset = dsp.subset(ref_dataset, bounds)
if ref_dataset.temporal_resolution() != temporal_resolution:
ref_dataset = dsp.temporal_rebin(ref_dataset, temporal_resolution)
for idata,dataset in enumerate(model_datasets):
- model_datasets[idata] = dsp.subset(bounds,dataset)
+ model_datasets[idata] = dsp.subset(dataset, bounds)
if dataset.temporal_resolution() != temporal_resolution:
model_datasets[idata] = dsp.temporal_rebin(dataset, temporal_resolution)
@@ -159,9 +159,9 @@
month_end = time_info['month_end']
average_each_year = time_info['average_each_year']
-ref_dataset = dsp.temporal_subset(month_start, month_end,ref_dataset,average_each_year)
+ref_dataset = dsp.temporal_subset(ref_dataset,month_start, month_end,average_each_year)
for idata,dataset in enumerate(model_datasets):
- model_datasets[idata] = dsp.temporal_subset(month_start, month_end,dataset,average_each_year)
+ model_datasets[idata] = dsp.temporal_subset(dataset,month_start, month_end,average_each_year)
# generate grid points for regridding
if config['regrid']['regrid_on_reference']:
diff --git a/RCMES/statistical_downscaling/run_statistical_downscaling.py b/RCMES/statistical_downscaling/run_statistical_downscaling.py
index 60c6ac2..9aae618 100644
--- a/RCMES/statistical_downscaling/run_statistical_downscaling.py
+++ b/RCMES/statistical_downscaling/run_statistical_downscaling.py
@@ -132,9 +132,9 @@
""" Step 2: Temporal subsetting """
print("Temporal subsetting for the selected month(s)")
-ref_temporal_subset = dsp.temporal_subset(month_start, month_end, ref_dataset)
-model_temporal_subset_present = dsp.temporal_subset(month_start, month_end, model_dataset_present)
-model_temporal_subset_future = dsp.temporal_subset(month_start, month_end, model_dataset_future)
+ref_temporal_subset = dsp.temporal_subset(ref_dataset, month_start, month_end)
+model_temporal_subset_present = dsp.temporal_subset(model_dataset_present, month_start, month_end)
+model_temporal_subset_future = dsp.temporal_subset(model_dataset_future, month_start, month_end)
""" Step 3: Spatial aggregation of observational data into the model grid """
print("Spatial aggregation of observational data near latitude %0.2f and longitude %0.2f " % (grid_lat, grid_lon))
diff --git a/RCMES/test/test.py b/RCMES/test/test.py
index bbb8095..677a13f 100644
--- a/RCMES/test/test.py
+++ b/RCMES/test/test.py
@@ -83,7 +83,7 @@
cru_start = datetime.datetime.strptime(cru_31['start_date'], "%Y-%m-%d")
cru_end = datetime.datetime.strptime(cru_31['end_date'], "%Y-%m-%d")
-knmi_start, knmi_end = knmi_dataset.time_range()
+knmi_start, knmi_end = knmi_dataset.temporal_boundaries()
# Grab the Max Start Time
start_time = max([cru_start, knmi_start])
# Grab the Min End Time
@@ -112,7 +112,7 @@
# Create a Bounds object to use for subsetting
new_bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
-knmi_dataset = dsp.subset(new_bounds, knmi_dataset)
+knmi_dataset = dsp.subset(knmi_dataset, new_bounds)
print("CRU31_Dataset.values shape: (times, lats, lons) - %s" % (cru31_dataset.values.shape,))
print("KNMI_Dataset.values shape: (times, lats, lons) - %s \n" % (knmi_dataset.values.shape,))
diff --git a/conda_recipes/bottle/bld.bat b/conda_recipes/bottle/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/bottle/bld.bat
@@ -0,0 +1,25 @@
+:: Licensed to the Apache Software Foundation (ASF) under one
+:: or more contributor license agreements. See the NOTICE file
+:: distributed with this work for additional information
+:: regarding copyright ownership. The ASF licenses this file
+:: to you under the Apache License, Version 2.0 (the
+:: "License"); you may not use this file except in compliance
+:: with the License. You may obtain a copy of the License at
+::
+::   http://www.apache.org/licenses/LICENSE-2.0
+::
+:: Unless required by applicable law or agreed to in writing,
+:: software distributed under the License is distributed on an
+:: "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+:: KIND, either express or implied. See the License for the
+:: specific language governing permissions and limitations
+:: under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/bottle/build.sh b/conda_recipes/bottle/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/bottle/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/bottle/meta.yaml b/conda_recipes/bottle/meta.yaml
new file mode 100644
index 0000000..57c296a
--- /dev/null
+++ b/conda_recipes/bottle/meta.yaml
@@ -0,0 +1,76 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: bottle
+ version: "0.12.9"
+
+source:
+ fn: bottle-0.12.9.tar.gz
+ url: https://files.pythonhosted.org/packages/d2/59/e61e3dc47ed47d34f9813be6d65462acaaba9c6c50ec863db74101fa8757/bottle-0.12.9.tar.gz
+ md5: f5850258a86224a791171e8ecbb66d99
+# patches:
+ # List any patch files here
+ # - fix.patch
+
+# build:
+ # noarch_python: True
+ # preserve_egg_dir: True
+ # entry_points:
+ # Put any entry points (scripts to be generated automatically) here. The
+ # syntax is module:function. For example
+ #
+ # - bottle = bottle:main
+ #
+ # Would create an entry point called bottle that calls bottle.main()
+
+
+ # If this is a new build for the same version, increment the build
+ # number. If you do not include this key, it defaults to 0.
+ # number: 1
+
+requirements:
+ build:
+ - python
+
+ run:
+ - python
+
+# test:
+ # Python imports
+ # imports:
+
+ # commands:
+ # You can put test commands to be run here. Use this to test that the
+ # entry points work.
+
+
+ # You can also put a file called run_test.py in the recipe that will be run
+ # at test time.
+
+ # requires:
+ # Put any additional test requirements here. For example
+ # - nose
+
+about:
+ home: http://bottlepy.org/
+ license: MIT License
+ summary: 'Fast and simple WSGI-framework for small web-applications.'
+
+# See
+# http://docs.continuum.io/conda/build.html for
+# more information about meta.yaml
diff --git a/conda_recipes/esgf-pyclient/bld.bat b/conda_recipes/esgf-pyclient/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/esgf-pyclient/bld.bat
@@ -0,0 +1,25 @@
+:: Licensed to the Apache Software Foundation (ASF) under one
+:: or more contributor license agreements. See the NOTICE file
+:: distributed with this work for additional information
+:: regarding copyright ownership. The ASF licenses this file
+:: to you under the Apache License, Version 2.0 (the
+:: "License"); you may not use this file except in compliance
+:: with the License. You may obtain a copy of the License at
+::
+::   http://www.apache.org/licenses/LICENSE-2.0
+::
+:: Unless required by applicable law or agreed to in writing,
+:: software distributed under the License is distributed on an
+:: "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+:: KIND, either express or implied. See the License for the
+:: specific language governing permissions and limitations
+:: under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/esgf-pyclient/build.sh b/conda_recipes/esgf-pyclient/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/esgf-pyclient/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/esgf-pyclient/meta.yaml b/conda_recipes/esgf-pyclient/meta.yaml
new file mode 100644
index 0000000..f54acf6
--- /dev/null
+++ b/conda_recipes/esgf-pyclient/meta.yaml
@@ -0,0 +1,82 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: esgf-pyclient
+ version: "0.1.6"
+
+source:
+ fn: esgf-pyclient-0.1.6.tar.gz
+ url: https://files.pythonhosted.org/packages/2b/9d/16ffd2c1b2d30ee350e55a23168395659366903da05c413db24ee1b374e1/esgf-pyclient-0.1.6.tar.gz
+ md5: 228ac9d7e3a600f587c7cb46c7389cdc
+# patches:
+ # List any patch files here
+ # - fix.patch
+
+# build:
+ # noarch_python: True
+ # preserve_egg_dir: True
+ # entry_points:
+ # Put any entry points (scripts to be generated automatically) here. The
+ # syntax is module:function. For example
+ #
+ # - esgf-pyclient = esgf-pyclient:main
+ #
+ # Would create an entry point called esgf-pyclient that calls esgf-pyclient.main()
+
+
+ # If this is a new build for the same version, increment the build
+ # number. If you do not include this key, it defaults to 0.
+ # number: 1
+
+requirements:
+ build:
+ - python
+ - setuptools
+ - jinja2
+
+ run:
+ - python
+ - jinja2
+
+test:
+ # Python imports
+ imports:
+ - pyesgf
+ - pyesgf.search
+ - pyesgf.security
+
+ # commands:
+ # You can put test commands to be run here. Use this to test that the
+ # entry points work.
+
+
+ # You can also put a file called run_test.py in the recipe that will be run
+ # at test time.
+
+ # requires:
+ # Put any additional test requirements here. For example
+ # - nose
+
+about:
+ home: http://esgf-pyclient.readthedocs.org
+ license: BSD License
+ summary: 'A library interacting with ESGF services within Python'
+
+# See
+# http://docs.continuum.io/conda/build.html for
+# more information about meta.yaml
diff --git a/conda_recipes/myproxyclient/bld.bat b/conda_recipes/myproxyclient/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/myproxyclient/bld.bat
@@ -0,0 +1,25 @@
+:: Licensed to the Apache Software Foundation (ASF) under one
+:: or more contributor license agreements. See the NOTICE file
+:: distributed with this work for additional information
+:: regarding copyright ownership. The ASF licenses this file
+:: to you under the Apache License, Version 2.0 (the
+:: "License"); you may not use this file except in compliance
+:: with the License. You may obtain a copy of the License at
+::
+::   http://www.apache.org/licenses/LICENSE-2.0
+::
+:: Unless required by applicable law or agreed to in writing,
+:: software distributed under the License is distributed on an
+:: "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+:: KIND, either express or implied. See the License for the
+:: specific language governing permissions and limitations
+:: under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/myproxyclient/build.sh b/conda_recipes/myproxyclient/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/myproxyclient/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/myproxyclient/meta.yaml b/conda_recipes/myproxyclient/meta.yaml
new file mode 100644
index 0000000..96ade88
--- /dev/null
+++ b/conda_recipes/myproxyclient/meta.yaml
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: myproxyclient
+ version: "1.4.3"
+
+source:
+ fn: MyProxyClient-1.4.3.tar.gz
+ url: https://files.pythonhosted.org/packages/87/40/c461e690422a1c6994630d5f73d74295037793eb5e41f346a7b55b6d5baa/MyProxyClient-1.4.3.tar.gz
+ md5: 15d1dccae2cde5d24cd8fb082972debc
+# patches:
+ # List any patch files here
+ # - fix.patch
+
+build:
+ # noarch_python: True
+ # preserve_egg_dir: True
+ entry_points:
+ # Put any entry points (scripts to be generated automatically) here. The
+ # syntax is module:function. For example
+ #
+ # - myproxyclient = myproxyclient:main
+ #
+ # Would create an entry point called myproxyclient that calls myproxyclient.main()
+
+ - myproxyclient = myproxy.script:main
+
+ # If this is a new build for the same version, increment the build
+ # number. If you do not include this key, it defaults to 0.
+ # number: 1
+
+requirements:
+ build:
+ - python
+ - setuptools
+ - pyopenssl
+
+ run:
+ - python
+ - pyopenssl
+
+test:
+ # Python imports
+ imports:
+ - myproxy
+ - myproxy.test
+ - myproxy.utils
+
+ commands:
+ # You can put test commands to be run here. Use this to test that the
+ # entry points work.
+
+ - myproxyclient --help
+
+ # You can also put a file called run_test.py in the recipe that will be run
+ # at test time.
+
+ # requires:
+ # Put any additional test requirements here. For example
+ # - nose
+
+about:
+ home: https://github.com/cedadev/MyProxyClient
+  license: BSD License
+ summary: 'MyProxy Client'
+
+# See
+# http://docs.continuum.io/conda/build.html for
+# more information about meta.yaml
diff --git a/conda_recipes/ocw/bld.bat b/conda_recipes/ocw/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/ocw/bld.bat
@@ -0,0 +1,25 @@
+:: Licensed to the Apache Software Foundation (ASF) under one
+:: or more contributor license agreements. See the NOTICE file
+:: distributed with this work for additional information
+:: regarding copyright ownership. The ASF licenses this file
+:: to you under the Apache License, Version 2.0 (the
+:: "License"); you may not use this file except in compliance
+:: with the License. You may obtain a copy of the License at
+::
+::   http://www.apache.org/licenses/LICENSE-2.0
+::
+:: Unless required by applicable law or agreed to in writing,
+:: software distributed under the License is distributed on an
+:: "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+:: KIND, either express or implied. See the License for the
+:: specific language governing permissions and limitations
+:: under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/ocw/build.sh b/conda_recipes/ocw/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/ocw/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/ocw/meta.yaml b/conda_recipes/ocw/meta.yaml
new file mode 100644
index 0000000..e70659d
--- /dev/null
+++ b/conda_recipes/ocw/meta.yaml
@@ -0,0 +1,60 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: ocw
+ version: 1.1.0
+
+source:
+ git_url: https://github.com/apache/climate.git
+ # git_rev: 1.1.0
+
+build:
+ number: 0
+
+requirements:
+ build:
+ - python
+ - setuptools
+
+ run:
+ - python
+ - numpy
+ - scipy
+ - matplotlib
+ - basemap
+ - netcdf4
+ - h5py
+ - bottle
+ - pydap
+ - python-dateutil
+ - mock
+ - myproxyclient
+ - webtest
+ - esgf-pyclient
+
+test:
+ # Python imports
+ imports:
+ - ocw
+ - ocw.data_source
+ - ocw.esgf
+
+about:
+ home: http://climate.apache.org/
+ license: Apache License, Version 2.0
+ summary: 'A library for simplifying the process of climate model evaluation.'
diff --git a/conda_recipes/pydap/bld.bat b/conda_recipes/pydap/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/pydap/bld.bat
@@ -0,0 +1,25 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/pydap/build.sh b/conda_recipes/pydap/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/pydap/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/pydap/meta.yaml b/conda_recipes/pydap/meta.yaml
new file mode 100644
index 0000000..8def1d3
--- /dev/null
+++ b/conda_recipes/pydap/meta.yaml
@@ -0,0 +1,81 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: pydap
+ version: "3.1.1"
+
+source:
+ fn: Pydap-3.1.1.tar.gz
+ url: https://files.pythonhosted.org/packages/cd/da/577a2b6721e9b103f1671bd020f5553e582f2c509fe5cade636822b35351/Pydap-3.1.1.tar.gz
+ md5: d13630328c121eeeb0e0f015eb9e7124
+# patches:
+ # List any patch files here
+ # - fix.patch
+
+build:
+ # noarch_python: True
+ preserve_egg_dir: True
+ # entry_points:
+ # Put any entry points (scripts to be generated automatically) here. The
+ # syntax is module:function. For example
+ #
+ # - pydap = pydap:main
+ #
+ # Would create an entry point called pydap that calls pydap.main()
+
+
+ # If this is a new build for the same version, increment the build
+ # number. If you do not include this key, it defaults to 0.
+ # number: 1
+
+requirements:
+ build:
+ - python
+ - setuptools
+
+ run:
+ - python
+ - setuptools
+
+test:
+ # Python imports
+ imports:
+ - pydap
+ - pydap.client
+
+ # commands:
+ # You can put test commands to be run here. Use this to test that the
+ # entry points work.
+
+
+ # You can also put a file called run_test.py in the recipe that will be run
+ # at test time.
+
+ requires:
+ - nose >=0.11
+ # Put any additional test requirements here. For example
+ # - nose
+
+about:
+ home: http://www.pydap.org/
+ license: MIT License
+ summary: 'A pure Python library implementing the Data Access Protocol (DAP)'
+
+# See
+# http://docs.continuum.io/conda/build.html for
+# more information about meta.yaml
diff --git a/conda_recipes/sphinxcontrib-httpdomain/bld.bat b/conda_recipes/sphinxcontrib-httpdomain/bld.bat
new file mode 100644
index 0000000..4f75e6a
--- /dev/null
+++ b/conda_recipes/sphinxcontrib-httpdomain/bld.bat
@@ -0,0 +1,25 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"%PYTHON%" setup.py install
+if errorlevel 1 exit 1
+
+:: Add more build steps here, if they are necessary.
+
+:: See
+:: http://docs.continuum.io/conda/build.html
+:: for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/sphinxcontrib-httpdomain/build.sh b/conda_recipes/sphinxcontrib-httpdomain/build.sh
new file mode 100644
index 0000000..c0530d1
--- /dev/null
+++ b/conda_recipes/sphinxcontrib-httpdomain/build.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+$PYTHON setup.py install
+
+# Add more build steps here, if they are necessary.
+
+# See
+# http://docs.continuum.io/conda/build.html
+# for a list of environment variables that are set during the build process.
diff --git a/conda_recipes/sphinxcontrib-httpdomain/meta.yaml b/conda_recipes/sphinxcontrib-httpdomain/meta.yaml
new file mode 100644
index 0000000..10c51f9
--- /dev/null
+++ b/conda_recipes/sphinxcontrib-httpdomain/meta.yaml
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+package:
+ name: sphinxcontrib-httpdomain
+ version: "1.5.0"
+
+source:
+ fn: sphinxcontrib-httpdomain-1.5.0.tar.gz
+ url: https://files.pythonhosted.org/packages/a5/52/0ded71896b9d25621b44d681cdd352c37a9ed81219a6b62014bd15dd2b9e/sphinxcontrib-httpdomain-1.5.0.tar.gz
+ md5: db069391a08c0f0bd6aa6819f5018337
+# patches:
+ # List any patch files here
+ # - fix.patch
+
+# build:
+ # noarch_python: True
+ # preserve_egg_dir: True
+ # entry_points:
+ # Put any entry points (scripts to be generated automatically) here. The
+ # syntax is module:function. For example
+ #
+ # - sphinxcontrib-httpdomain = sphinxcontrib-httpdomain:main
+ #
+ # Would create an entry point called sphinxcontrib-httpdomain that calls sphinxcontrib-httpdomain.main()
+
+
+ # If this is a new build for the same version, increment the build
+ # number. If you do not include this key, it defaults to 0.
+ # number: 1
+
+requirements:
+ build:
+ - python
+ - setuptools
+ - sphinx >=1.0
+ - six
+
+ run:
+ - python
+ - sphinx >=1.0
+ - six
+
+test:
+ # Python imports
+ imports:
+ - sphinxcontrib
+ - sphinxcontrib.autohttp
+
+ # commands:
+ # You can put test commands to be run here. Use this to test that the
+ # entry points work.
+
+
+ # You can also put a file called run_test.py in the recipe that will be run
+ # at test time.
+
+ # requires:
+ # Put any additional test requirements here. For example
+ # - nose
+
+about:
+ home: https://bitbucket.org/birkenfeld/sphinx-contrib/src/default/httpdomain/
+ license: BSD License
+ summary: 'Sphinx domain for HTTP APIs'
+
+# See
+# http://docs.continuum.io/conda/build.html for
+# more information about meta.yaml
diff --git a/doap_CLIMATE.rdf b/doap_CLIMATE.rdf
index 537f5b1..5b5ae95 100644
--- a/doap_CLIMATE.rdf
+++ b/doap_CLIMATE.rdf
@@ -36,6 +36,13 @@
<category rdf:resource="http://projects.apache.org/category/content" />
<release>
<Version>
+ <name>Apache Open Climate Workbench 1.1.0</name>
+ <created>2016-07-23</created>
+ <revision>1.1.0</revision>
+ </Version>
+ </release>
+ <release>
+ <Version>
<name>Apache Open Climate Workbench 1.0</name>
<created>2015-09-03</created>
<revision>1.0.0</revision>
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 4fec278..76f21fe 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -53,16 +53,16 @@
# General information about the project.
project = u'Apache Open Climate Workbench'
-copyright = u'2013, Apache Software Foundation'
+copyright = u'2016, Apache Software Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
-version = '1.0.0'
+version = '1.1.0'
# The full version, including alpha/beta/rc tags.
-release = '1.0.0'
+release = '1.1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/docs/source/ocw/overview.rst b/docs/source/ocw/overview.rst
index 9ef94da..a77f954 100644
--- a/docs/source/ocw/overview.rst
+++ b/docs/source/ocw/overview.rst
@@ -12,7 +12,7 @@
Common Data Abstraction
-----------------------
-The OCW :class:`dataset.Dataset` class is the primary data abstraction used throughout OCW. It facilitates the uniform handling of data throughout the toolkit and provides a few useful helper functions such as :func:`dataset.Dataset.spatial_boundaries` and :func:`dataset.Dataset.time_range`. Creating a new dataset object is straightforward but generally you will want to use an OCW data source to load the data for you.
+The OCW :class:`dataset.Dataset` class is the primary data abstraction used throughout OCW. It facilitates the uniform handling of data throughout the toolkit and provides a few useful helper functions such as :func:`dataset.Dataset.spatial_boundaries` and :func:`dataset.Dataset.temporal_boundaries`. Creating a new dataset object is straightforward but generally you will want to use an OCW data source to load the data for you.
Data Sources
------------
@@ -35,7 +35,7 @@
>>> import ocw.dataset_processor as dsp
>>> new_bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
->>> knmi_dataset = dsp.subset(new_bounds, knmi_dataset)
+>>> knmi_dataset = dsp.subset(knmi_dataset, new_bounds)
Temporally re-binning a dataset is great when the time step of the data is too fine grain for the desired use. For instance, perhaps we want to see a yearly trend but we have daily data. We would need to make the following call to adjust our dataset::
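The overview change above reflects the 1.1.0 argument-order convention: subsetting functions now take the dataset first and the Bounds second. A minimal sketch of the new call shape, using a stand-in function (not the real `ocw.dataset_processor.subset`, which actually slices the dataset):

```python
# Sketch only: a stub mirroring the new ocw 1.1.0 argument order,
# where the target dataset precedes the subregion Bounds.
def subset(target_dataset, subregion, subregion_name=None):
    # The real implementation returns a Dataset restricted to the
    # given Bounds; this stub just records the call order.
    return (target_dataset, subregion, subregion_name)

# Old (pre-1.1.0) code called subset(new_bounds, knmi_dataset);
# 1.1.0 code puts the dataset first:
result = subset("knmi_dataset", "new_bounds")
```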
diff --git a/easy-ocw/ocw-conda-dependencies.txt b/easy-ocw/ocw-conda-dependencies.txt
index 4f6c937..665be4d 100644
--- a/easy-ocw/ocw-conda-dependencies.txt
+++ b/easy-ocw/ocw-conda-dependencies.txt
@@ -15,9 +15,9 @@
# specific language governing permissions and limitations
# under the License.
-numpy=1.10.4
-scipy=0.17.0
-matplotlib=1.5.1
-basemap=1.0.7
-netcdf4=1.2.2
-h5py=2.6.0
+numpy>=1.10.4
+scipy>=0.17.0
+matplotlib>=1.5.1
+basemap>=1.0.7
+netcdf4>=1.2.2
+h5py>=2.6.0
diff --git a/easy-ocw/ocw-pip-dependencies.txt b/easy-ocw/ocw-pip-dependencies.txt
index f901534..bd609dd 100644
--- a/easy-ocw/ocw-pip-dependencies.txt
+++ b/easy-ocw/ocw-pip-dependencies.txt
@@ -1,11 +1,11 @@
-requests==2.10.0
-bottle==0.12.9
-pydap==3.1.1
-webtest==2.0.21
-nose==1.3.7
-sphinx==1.4.4
-sphinxcontrib-httpdomain==1.5.0
-esgf-pyclient==0.1.6
-python-dateutil==2.5.3
-mock==2.0.0
-myproxyclient==1.4.3
+requests>=2.10.0
+bottle>=0.12.9
+pydap>=3.1.1
+webtest>=2.0.21
+nose>=1.3.7
+sphinx>=1.4.4
+sphinxcontrib-httpdomain>=1.5.0
+esgf-pyclient>=0.1.6
+python-dateutil>=2.5.3
+mock>=2.0.0
+myproxyclient>=1.4.3
diff --git a/examples/knmi_to_cru31_full_bias.py b/examples/knmi_to_cru31_full_bias.py
index e37e887..4c0abd9 100644
--- a/examples/knmi_to_cru31_full_bias.py
+++ b/examples/knmi_to_cru31_full_bias.py
@@ -83,7 +83,7 @@
cru_start = datetime.datetime.strptime(cru_31['start_date'], "%Y-%m-%d")
cru_end = datetime.datetime.strptime(cru_31['end_date'], "%Y-%m-%d")
-knmi_start, knmi_end = knmi_dataset.time_range()
+knmi_start, knmi_end = knmi_dataset.temporal_boundaries()
# Grab the Max Start Time
start_time = max([cru_start, knmi_start])
# Grab the Min End Time
@@ -112,7 +112,7 @@
# Create a Bounds object to use for subsetting
new_bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
-knmi_dataset = dsp.subset(new_bounds, knmi_dataset)
+knmi_dataset = dsp.subset(knmi_dataset, new_bounds)
print("CRU31_Dataset.values shape: (times, lats, lons) - %s" % (cru31_dataset.values.shape,))
print("KNMI_Dataset.values shape: (times, lats, lons) - %s \n" % (knmi_dataset.values.shape,))
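The rename applied in the example above, `Dataset.time_range()` becoming `Dataset.temporal_boundaries()`, keeps the same return shape: a `(start, end)` pair of datetimes. A minimal stand-in class (not the real ocw `Dataset`) illustrating the renamed method:

```python
import datetime

class Dataset:
    """Toy stand-in for ocw.dataset.Dataset, showing only the rename."""

    def __init__(self, times):
        self.times = sorted(times)

    def temporal_boundaries(self):
        # Formerly Dataset.time_range(); returns the first and last
        # datetimes on the dataset's time axis.
        return (self.times[0], self.times[-1])

ds = Dataset([datetime.datetime(1989, 12, 1), datetime.datetime(1989, 1, 1)])
start, end = ds.temporal_boundaries()
```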
diff --git a/examples/model_ensemble_to_rcmed.py b/examples/model_ensemble_to_rcmed.py
index a9303dd..fef1f9d 100644
--- a/examples/model_ensemble_to_rcmed.py
+++ b/examples/model_ensemble_to_rcmed.py
@@ -99,7 +99,7 @@
cru_start = datetime.datetime.strptime(cru_31['start_date'], "%Y-%m-%d")
cru_end = datetime.datetime.strptime(cru_31['end_date'], "%Y-%m-%d")
-knmi_start, knmi_end = knmi_dataset.time_range()
+knmi_start, knmi_end = knmi_dataset.temporal_boundaries()
# Set the Time Range to be the year 1989
start_time = datetime.datetime(1989,1,1)
end_time = datetime.datetime(1989,12,1)
@@ -131,8 +131,8 @@
new_bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
# Subset our model datasets so they are the same size
-knmi_dataset = dsp.subset(new_bounds, knmi_dataset)
-wrf311_dataset = dsp.subset(new_bounds, wrf311_dataset)
+knmi_dataset = dsp.subset(knmi_dataset, new_bounds)
+wrf311_dataset = dsp.subset(wrf311_dataset, new_bounds)
""" Spatially Regrid the Dataset Objects to a 1/2 degree grid """
# Using the bounds we will create a new set of lats and lons on 1/2 degree step
diff --git a/examples/multi_model_evaluation.py b/examples/multi_model_evaluation.py
index a09c526..0755279 100644
--- a/examples/multi_model_evaluation.py
+++ b/examples/multi_model_evaluation.py
@@ -91,7 +91,7 @@
CRU31 = dsp.temporal_rebin(CRU31, datetime.timedelta(days=30))
for member, each_target_dataset in enumerate(target_datasets):
- target_datasets[member] = dsp.subset(EVAL_BOUNDS, target_datasets[member])
+ target_datasets[member] = dsp.subset(target_datasets[member], EVAL_BOUNDS)
target_datasets[member] = dsp.water_flux_unit_conversion(target_datasets[member])
target_datasets[member] = dsp.temporal_rebin(target_datasets[member], datetime.timedelta(days=30))
diff --git a/examples/multi_model_taylor_diagram.py b/examples/multi_model_taylor_diagram.py
index 57dabdd..9ba8746 100644
--- a/examples/multi_model_taylor_diagram.py
+++ b/examples/multi_model_taylor_diagram.py
@@ -86,7 +86,7 @@
for member, each_target_dataset in enumerate(target_datasets):
target_datasets[member] = dsp.water_flux_unit_conversion(target_datasets[member])
target_datasets[member] = dsp.temporal_rebin(target_datasets[member], temporal_resolution = 'monthly')
- target_datasets[member] = dsp.subset(EVAL_BOUNDS, target_datasets[member])
+ target_datasets[member] = dsp.subset(target_datasets[member], EVAL_BOUNDS)
#Regrid
print("... regrid")
diff --git a/examples/subregions_portrait_diagram.py b/examples/subregions_portrait_diagram.py
index d8d982f..525cb26 100644
--- a/examples/subregions_portrait_diagram.py
+++ b/examples/subregions_portrait_diagram.py
@@ -76,7 +76,7 @@
CRU31 = dsp.water_flux_unit_conversion(CRU31)
for member, each_target_dataset in enumerate(target_datasets):
- target_datasets[member] = dsp.subset(EVAL_BOUNDS, target_datasets[member])
+ target_datasets[member] = dsp.subset(target_datasets[member], EVAL_BOUNDS)
target_datasets[member] = dsp.water_flux_unit_conversion(target_datasets[member])
target_datasets[member] = dsp.normalize_dataset_datetimes(target_datasets[member], 'monthly')
diff --git a/examples/taylor_diagram_example.py b/examples/taylor_diagram_example.py
index 90c6708..66ca175 100644
--- a/examples/taylor_diagram_example.py
+++ b/examples/taylor_diagram_example.py
@@ -62,8 +62,8 @@
# make a Bounds object and use it to subset our datasets.
################################################################################
subset = Bounds(-45, 42, -24, 60, datetime.datetime(1989, 1, 1), datetime.datetime(1989, 12, 1))
-knmi_dataset = dsp.subset(subset, knmi_dataset)
-wrf_dataset = dsp.subset(subset, wrf_dataset)
+knmi_dataset = dsp.subset(knmi_dataset, subset)
+wrf_dataset = dsp.subset(wrf_dataset, subset)
# Temporally re-bin the data into a monthly timestep.
################################################################################
diff --git a/examples/time_series_with_regions.py b/examples/time_series_with_regions.py
index ec9516d..8d9e5c0 100644
--- a/examples/time_series_with_regions.py
+++ b/examples/time_series_with_regions.py
@@ -74,7 +74,7 @@
CRU31 = dsp.normalize_dataset_datetimes(CRU31, 'monthly')
for member, each_target_dataset in enumerate(target_datasets):
- target_datasets[member] = dsp.subset(EVAL_BOUNDS, target_datasets[member])
+ target_datasets[member] = dsp.subset(target_datasets[member], EVAL_BOUNDS)
target_datasets[member] = dsp.water_flux_unit_conversion(target_datasets[member])
target_datasets[member] = dsp.normalize_dataset_datetimes(target_datasets[member], 'monthly')
@@ -122,7 +122,7 @@
firstTime = True
subset_name = regions[0]+"_CRU31"
#labels.append(subset_name) #for legend, uncomment this line
- subset = dsp.subset(list_of_regions[region_counter], CRU31, subset_name)
+ subset = dsp.subset(CRU31, list_of_regions[region_counter], subset_name)
tSeries = utils.calc_time_series(subset)
results.append(tSeries)
tSeries=[]
@@ -130,7 +130,9 @@
for member, each_target_dataset in enumerate(target_datasets):
subset_name = regions[0]+"_"+target_datasets[member].name
#labels.append(subset_name) #for legend, uncomment this line
- subset = dsp.subset(list_of_regions[region_counter],target_datasets[member],subset_name)
+ subset = dsp.subset(target_datasets[member],
+ list_of_regions[region_counter],
+ subset_name)
tSeries = utils.calc_time_series(subset)
results.append(tSeries)
tSeries=[]
diff --git a/ocw-ui/backend/processing.py b/ocw-ui/backend/processing.py
index e45b9c0..f925536 100644
--- a/ocw-ui/backend/processing.py
+++ b/ocw-ui/backend/processing.py
@@ -210,8 +210,8 @@
start,
end)
- ref_dataset = dsp.safe_subset(subset, ref_dataset)
- target_datasets = [dsp.safe_subset(subset, ds)
+ ref_dataset = dsp.safe_subset(ref_dataset, subset)
+ target_datasets = [dsp.safe_subset(ds, subset)
for ds
in target_datasets]
diff --git a/ocw-ui/backend/tests/test_processing.py b/ocw-ui/backend/tests/test_processing.py
index cc26b26..a1234de 100644
--- a/ocw-ui/backend/tests/test_processing.py
+++ b/ocw-ui/backend/tests/test_processing.py
@@ -88,7 +88,7 @@
def test_valid_load(self):
dataset = bp._load_rcmed_dataset_object(self.dataset_info, self.eval_bounds)
lat_min, lat_max, lon_min, lon_max = dataset.spatial_boundaries()
- start_time, end_time = dataset.time_range()
+ start_time, end_time = dataset.temporal_boundaries()
self.assertTrue(self.eval_bounds['lat_min'] <= lat_min)
self.assertTrue(self.eval_bounds['lat_max'] >= lat_max)
diff --git a/ocw/dataset.py b/ocw/dataset.py
index 78e6c14..f9c344e 100644
--- a/ocw/dataset.py
+++ b/ocw/dataset.py
@@ -91,7 +91,7 @@
return (float(numpy.min(self.lats)), float(numpy.max(self.lats)),
float(numpy.min(self.lons)), float(numpy.max(self.lons)))
- def time_range(self):
+ def temporal_boundaries(self):
'''Calculate the temporal range
:returns: The start and end date of the Dataset's temporal range as
@@ -200,16 +200,16 @@
def __str__(self):
lat_min, lat_max, lon_min, lon_max = self.spatial_boundaries()
- start, end = self.time_range()
+ start, end = self.temporal_boundaries()
lat_range = "({}, {})".format(lat_min, lon_min)
lon_range = "({}, {})".format(lon_min, lon_min)
- time_range = "({}, {})".format(start, end)
+ temporal_boundaries = "({}, {})".format(start, end)
formatted_repr = (
"<Dataset - name: {}, "
"lat-range: {}, "
"lon-range: {}, "
- "time_range: {}, "
+ "temporal_boundaries: {}, "
"var: {}, "
"units: {}>"
)
@@ -218,7 +218,7 @@
self.name if self.name != "" else None,
lat_range,
lon_range,
- time_range,
+ temporal_boundaries,
self.variable,
self.units
)
@@ -363,17 +363,17 @@
def __str__(self):
lat_range = "({}, {})".format(self._lat_min, self._lat_max)
lon_range = "({}, {})".format(self._lon_min, self._lon_max)
- time_range = "({}, {})".format(self._start, self._end)
+ temporal_boundaries = "({}, {})".format(self._start, self._end)
formatted_repr = (
"<Bounds - "
"lat-range: {}, "
"lon-range: {}, "
- "time_range: {}> "
+ "temporal_boundaries: {}> "
)
return formatted_repr.format(
lat_range,
lon_range,
- time_range,
+ temporal_boundaries,
)
diff --git a/ocw/dataset_processor.py b/ocw/dataset_processor.py
index 70323f3..2b5dc9b 100755
--- a/ocw/dataset_processor.py
+++ b/ocw/dataset_processor.py
@@ -32,7 +32,7 @@
logger = logging.getLogger(__name__)
-def temporal_subset(month_start, month_end, target_dataset,
+def temporal_subset(target_dataset, month_start, month_end,
average_each_year=False):
""" Temporally subset data given month_index.
@@ -362,7 +362,7 @@
return ensemble_dataset
-def subset(subregion, target_dataset, subregion_name=None):
+def subset(target_dataset, subregion, subregion_name=None):
'''Subset given dataset(s) with subregion information
:param subregion: The Bounds with which to subset the target Dataset.
@@ -385,7 +385,7 @@
subregion.end = target_dataset.times[-1]
# Ensure that the subregion information is well formed
- _are_bounds_contained_by_dataset(subregion, target_dataset)
+ _are_bounds_contained_by_dataset(target_dataset, subregion)
if not subregion_name:
subregion_name = target_dataset.name
@@ -395,7 +395,7 @@
target_dataset.times == subregion.start)[0][0]
end_time_index = np.where(target_dataset.times == subregion.end)[0][0]
target_dataset = temporal_slice(
- start_time_index, end_time_index, target_dataset)
+ target_dataset, start_time_index, end_time_index)
nt, ny, nx = target_dataset.values.shape
y_index, x_index = np.where(
(target_dataset.lats >= subregion.lat_max) | (
@@ -409,8 +409,8 @@
elif target_dataset.lats.ndim == 1 and target_dataset.lons.ndim == 1:
# Get subregion indices into subregion data
- dataset_slices = _get_subregion_slice_indices(subregion,
- target_dataset)
+ dataset_slices = _get_subregion_slice_indices(target_dataset,
+ subregion)
# Slice the values array with our calculated slice indices
if target_dataset.values.ndim == 2:
subset_values = ma.zeros([len(target_dataset.values[
@@ -455,7 +455,7 @@
)
-def temporal_slice(start_time_index, end_time_index, target_dataset):
+def temporal_slice(target_dataset, start_time_index, end_time_index):
'''Temporally slice given dataset(s) with subregion information. This does not
spatially subset the target_Dataset
@@ -483,7 +483,7 @@
return target_dataset
-def safe_subset(subregion, target_dataset, subregion_name=None):
+def safe_subset(target_dataset, subregion, subregion_name=None):
'''Safely subset given dataset with subregion information
A standard subset requires that the provided subregion be entirely
@@ -504,7 +504,7 @@
'''
lat_min, lat_max, lon_min, lon_max = target_dataset.spatial_boundaries()
- start, end = target_dataset.time_range()
+ start, end = target_dataset.temporal_boundaries()
if subregion.lat_min < lat_min:
subregion.lat_min = lat_min
@@ -526,7 +526,7 @@
if subregion.end > end:
subregion.end = end
- return subset(subregion, target_dataset, subregion_name)
+ return subset(target_dataset, subregion, subregion_name)
def normalize_dataset_datetimes(dataset, timestep):
@@ -1359,7 +1359,7 @@
return new_values
-def _are_bounds_contained_by_dataset(bounds, dataset):
+def _are_bounds_contained_by_dataset(dataset, bounds):
'''Check if a Dataset fully contains a bounds.
:param bounds: The Bounds object to check.
@@ -1372,7 +1372,7 @@
a ValueError otherwise
'''
lat_min, lat_max, lon_min, lon_max = dataset.spatial_boundaries()
- start, end = dataset.time_range()
+ start, end = dataset.temporal_boundaries()
errors = []
# TODO: THIS IS TERRIBLY inefficent and we need to use a geometry
@@ -1418,7 +1418,7 @@
raise ValueError(error_message)
-def _get_subregion_slice_indices(subregion, target_dataset):
+def _get_subregion_slice_indices(target_dataset, subregion):
'''Get the indices for slicing Dataset values to generate the subregion.
:param subregion: The Bounds that specify the subset of the Dataset
diff --git a/ocw/evaluation.py b/ocw/evaluation.py
index 8f01a68..cd06450 100644
--- a/ocw/evaluation.py
+++ b/ocw/evaluation.py
@@ -272,11 +272,11 @@
def _run_subregion_evaluation(self):
results = []
- new_refs = [DSP.subset(s, self.ref_dataset) for s in self.subregions]
+ new_refs = [DSP.subset(self.ref_dataset, s) for s in self.subregions]
for target in self.target_datasets:
results.append([])
- new_targets = [DSP.subset(s, target) for s in self.subregions]
+ new_targets = [DSP.subset(target, s) for s in self.subregions]
for metric in self.metrics:
results[-1].append([])
@@ -313,10 +313,11 @@
def _run_subregion_unary_evaluation(self):
unary_results = []
if self.ref_dataset:
- new_refs = [DSP.subset(s, self.ref_dataset) for s in self.subregions]
+ new_refs = [DSP.subset(self.ref_dataset, s)
+ for s in self.subregions]
new_targets = [
- [DSP.subset(s, t) for s in self.subregions]
+ [DSP.subset(t, s) for s in self.subregions]
for t in self.target_datasets
]
diff --git a/ocw/tests/test_dataset.py b/ocw/tests/test_dataset.py
index dcf6490..8b666c1 100644
--- a/ocw/tests/test_dataset.py
+++ b/ocw/tests/test_dataset.py
@@ -137,9 +137,9 @@
self.test_dataset.spatial_boundaries(),
(min(self.lat), max(self.lat), min(self.lon), max(self.lon)))
- def test_time_range(self):
+ def test_temporal_boundaries(self):
self.assertEqual(
- self.test_dataset.time_range(),
+ self.test_dataset.temporal_boundaries(),
(dt.datetime(2000, 1, 1), dt.datetime(2000, 12, 1)))
def test_spatial_resolution(self):
@@ -187,16 +187,16 @@
def test_str_(self):
dataset = self.test_dataset
lat_min, lat_max, lon_min, lon_max = dataset.spatial_boundaries()
- start, end = dataset.time_range()
+ start, end = dataset.temporal_boundaries()
lat_range = "({}, {})".format(lat_min, lon_min)
lon_range = "({}, {})".format(lon_min, lon_min)
- time_range = "({}, {})".format(start, end)
+ temporal_boundaries = "({}, {})".format(start, end)
formatted_repr = (
"<Dataset - name: {}, "
"lat-range: {}, "
"lon-range: {}, "
- "time_range: {}, "
+ "temporal_boundaries: {}, "
"var: {}, "
"units: {}>"
)
@@ -205,7 +205,7 @@
dataset.name if dataset.name != "" else None,
lat_range,
lon_range,
- time_range,
+ temporal_boundaries,
dataset.variable,
dataset.units
)
@@ -313,19 +313,19 @@
def test__str__(self):
lat_range = "({}, {})".format(self.bounds.lat_min, self.bounds.lat_max)
lon_range = "({}, {})".format(self.bounds.lon_min, self.bounds.lon_max)
- time_range = "({}, {})".format(self.bounds.start, self.bounds.end)
+ temporal_boundaries = "({}, {})".format(self.bounds.start, self.bounds.end)
formatted_repr = (
"<Bounds - "
"lat-range: {}, "
"lon-range: {}, "
- "time_range: {}> "
+ "temporal_boundaries: {}> "
)
output = formatted_repr.format(
lat_range,
lon_range,
- time_range,
+ temporal_boundaries,
)
self.assertEqual(str(self.bounds), output)
diff --git a/ocw/tests/test_dataset_processor.py b/ocw/tests/test_dataset_processor.py
index 9060070..627955a 100644
--- a/ocw/tests/test_dataset_processor.py
+++ b/ocw/tests/test_dataset_processor.py
@@ -37,22 +37,22 @@
self.dataset_times = np.array([datetime.datetime(year, month, 1)
for year in range(2000, 2010)
for month in range(1, 6)])
- self.tempSubset = dp.temporal_subset(1, 5, self.ten_year_dataset)
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset, 1, 5)
np.testing.assert_array_equal(
self.dataset_times, self.tempSubset.times)
def test_temporal_subset_with_average_time(self):
self.dataset_times = np.array([datetime.datetime(year, 2, 1)
for year in range(2000, 2010)])
- self.tempSubset = dp.temporal_subset(1, 3,
- self.ten_year_dataset,
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset,
+ 1, 3,
average_each_year=True)
np.testing.assert_array_equal(self.dataset_times,
self.tempSubset.times)
def test_temporal_subset_with_average_values(self):
- self.tempSubset = dp.temporal_subset(1, 3,
- self.ten_year_dataset,
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset,
+ 1, 3,
average_each_year=True)
self.dataset_values = np.ones([len(self.tempSubset.times),
len(self.ten_year_dataset.lats),
@@ -61,8 +61,8 @@
self.tempSubset.values)
def test_temporal_subset_attributes(self):
- self.tempSubset = dp.temporal_subset(1, 3,
- self.ten_year_dataset,
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset,
+ 1, 3,
average_each_year=True)
self.assertEqual(self.tempSubset.name, self.ten_year_dataset.name)
self.assertEqual(self.tempSubset.variable,
@@ -76,8 +76,8 @@
def test_temporal_subset_equal_start_end_month(self):
self.dataset_times = np.array([datetime.datetime(year, 1, 1)
for year in range(2000, 2010)])
- self.tempSubset = dp.temporal_subset(1, 1,
- self.ten_year_dataset,
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset,
+ 1, 1,
average_each_year=True)
np.testing.assert_array_equal(self.dataset_times,
self.tempSubset.times)
@@ -86,7 +86,7 @@
self.dataset_times = np.array([datetime.datetime(year, month, 1)
for year in range(2000, 2010)
for month in [1, 8, 9, 10, 11, 12]])
- self.tempSubset = dp.temporal_subset(8, 1, self.ten_year_dataset)
+ self.tempSubset = dp.temporal_subset(self.ten_year_dataset, 8, 1)
np.testing.assert_array_equal(
self.dataset_times, self.tempSubset.times)
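The hunks above swap `temporal_subset`'s argument order so the dataset comes first: `dp.temporal_subset(dataset, month_start, month_end)`. The month-selection behavior these tests exercise, including the wrap-around case `(8, 1)`, can be sketched in isolation; `months_in_subset` is a hypothetical helper for illustration, not part of the ocw API:

```python
def months_in_subset(month_start, month_end):
    # Months covered by a (month_start, month_end) pair. When the start
    # month falls later in the year than the end month, the range wraps
    # around the year boundary (e.g. 8..1 -> Aug-Dec plus Jan), matching
    # the expectation in test_temporal_subset_with_subset_of_months.
    if month_start <= month_end:
        return list(range(month_start, month_end + 1))
    return list(range(month_start, 13)) + list(range(1, month_end + 1))
```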
@@ -206,9 +206,9 @@
end_index = 4
dates = np.array([datetime.datetime(2000, month, 1)
for month in range(start_index + 1, end_index + 2)])
- new_dataset = dp.temporal_slice(start_index,
- end_index,
- self.ten_year_dataset)
+ new_dataset = dp.temporal_slice(self.ten_year_dataset,
+ start_index,
+ end_index)
np.testing.assert_array_equal(new_dataset.times, dates)
def test_returned_dataset_values(self):
@@ -217,9 +217,9 @@
start_index = 1
end_index = 4
values = self.ten_year_dataset.values[start_index:end_index + 1]
- new_dataset = dp.temporal_slice(start_index,
- end_index,
- self.ten_year_dataset)
+ new_dataset = dp.temporal_slice(self.ten_year_dataset,
+ start_index,
+ end_index)
np.testing.assert_array_equal(new_dataset.values, values)
@@ -450,7 +450,7 @@
)
def test_subset(self):
- subset = dp.subset(self.subregion, self.target_dataset)
+ subset = dp.subset(self.target_dataset, self.subregion)
self.assertEqual(subset.lats.shape[0], 82)
self.assertSequenceEqual(list(np.array(range(-81, 82, 2))),
list(subset.lats))
@@ -459,17 +459,17 @@
self.assertEqual(subset.values.shape, (37, 82, 162))
def test_subset_name(self):
- subset = dp.subset(self.subregion, self.target_dataset)
+ subset = dp.subset(self.target_dataset, self.subregion)
self.assertEqual(subset.name, self.name)
def test_subset_name_propagation(self):
subset_name = 'foo_subset_name'
- subset = dp.subset(self.subregion, self.target_dataset, subset_name)
+ subset = dp.subset(self.target_dataset, self.subregion, subset_name)
self.assertEqual(subset.name, subset_name)
def test_subset_using_non_exact_spatial_bounds(self):
index_slices = dp._get_subregion_slice_indices(
- self.non_exact_spatial_subregion, self.target_dataset)
+ self.target_dataset, self.non_exact_spatial_subregion)
control_index_slices = {"lat_start": 5,
"lat_end": 84,
"lon_start": 10,
@@ -480,7 +480,7 @@
def test_subset_using_non_exact_temporal_bounds(self):
index_slices = dp._get_subregion_slice_indices(
- self.non_exact_temporal_subregion, self.target_dataset)
+ self.target_dataset, self.non_exact_temporal_subregion)
control_index_slices = {"lat_start": 5,
"lat_end": 84,
"lon_start": 10,
@@ -494,7 +494,7 @@
-81, 81,
-161, 161,
)
- subset = dp.subset(self.subregion, self.target_dataset)
+ subset = dp.subset(self.target_dataset, self.subregion)
times = np.array([datetime.datetime(year, month, 1)
for year in range(2000, 2010)
for month in range(1, 13)])
@@ -546,7 +546,7 @@
def test_partial_spatial_overlap(self):
'''Ensure that safe_subset can handle out of bounds spatial values'''
- ds = dp.safe_subset(self.spatial_out_of_bounds, self.target_dataset)
+ ds = dp.safe_subset(self.target_dataset, self.spatial_out_of_bounds)
spatial_bounds = ds.spatial_boundaries()
self.assertEquals(spatial_bounds[0], -60)
self.assertEquals(spatial_bounds[1], 60)
@@ -555,7 +555,7 @@
def test_partial_temporal_overlap(self):
'''Ensure that safe_subset can handle out of bounds temporal values'''
- ds = dp.safe_subset(self.temporal_out_of_bounds, self.target_dataset)
+ ds = dp.safe_subset(self.target_dataset, self.temporal_out_of_bounds)
temporal_bounds = ds.time_range()
start = datetime.datetime(2000, 1, 1)
end = datetime.datetime(2009, 12, 1)
@@ -564,9 +564,9 @@
self.assertEquals(temporal_bounds[1], end)
def test_entire_bounds_overlap(self):
- ds = dp.safe_subset(self.everything_out_of_bounds, self.target_dataset)
+ ds = dp.safe_subset(self.target_dataset, self.everything_out_of_bounds)
spatial_bounds = ds.spatial_boundaries()
- temporal_bounds = ds.time_range()
+ temporal_bounds = ds.temporal_boundaries()
start = datetime.datetime(2000, 1, 1)
end = datetime.datetime(2009, 12, 1)
@@ -594,32 +594,32 @@
def test_out_of_dataset_bounds_lat_min(self):
self.subregion.lat_min = -90
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
def test_out_of_dataset_bounds_lat_max(self):
self.subregion.lat_max = 90
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
def test_out_of_dataset_bounds_lon_min(self):
self.subregion.lon_min = -180
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
def test_out_of_dataset_bounds_lon_max(self):
self.subregion.lon_max = 180
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
def test_out_of_dataset_bounds_start(self):
self.subregion.start = datetime.datetime(1999, 1, 1)
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
def test_out_of_dataset_bounds_end(self):
self.subregion.end = datetime.datetime(2011, 1, 1)
with self.assertRaises(ValueError):
- dp.subset(self.subregion, self.target_dataset)
+ dp.subset(self.target_dataset, self.subregion)
class TestNetCDFWrite(unittest.TestCase):
diff --git a/ocw/utils.py b/ocw/utils.py
index cb47efe..2fab66f 100755
--- a/ocw/utils.py
+++ b/ocw/utils.py
@@ -374,8 +374,8 @@
start_time = []
end_time = []
for dataset in dataset_array:
- start_time.append(dataset.time_range()[0])
- end_time.append(dataset.time_range()[1])
+ start_time.append(dataset.temporal_boundaries()[0])
+ end_time.append(dataset.temporal_boundaries()[1])
return np.max(start_time), np.min(end_time)
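The `ocw/utils.py` change above only renames the per-dataset call; the overlap logic is unchanged: the common range across datasets is the latest start paired with the earliest end. A minimal standalone sketch of that computation (`common_temporal_boundaries` is a hypothetical name for illustration):

```python
import datetime as dt

def common_temporal_boundaries(boundaries):
    # boundaries: iterable of (start, end) pairs, e.g. collected from
    # dataset.temporal_boundaries(). The window shared by all datasets
    # runs from the latest start to the earliest end.
    starts, ends = zip(*boundaries)
    return max(starts), min(ends)
```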
diff --git a/ocw_config_runner/configuration_parsing.py b/ocw_config_runner/configuration_parsing.py
index 5c28249..89eab62 100644
--- a/ocw_config_runner/configuration_parsing.py
+++ b/ocw_config_runner/configuration_parsing.py
@@ -239,7 +239,7 @@
])
elif plot_type == 'time_series':
required_keys = set([
- 'time_range'
+ 'temporal_boundaries'
])
elif plot_type == 'portrait':
required_keys = set([
diff --git a/ocw_config_runner/configuration_writer.py b/ocw_config_runner/configuration_writer.py
index 8fc9242..5bbbb08 100644
--- a/ocw_config_runner/configuration_writer.py
+++ b/ocw_config_runner/configuration_writer.py
@@ -199,7 +199,7 @@
dataset_info = {'optional_args': {}}
min_lat, max_lat, min_lon, max_lon = dataset.spatial_boundaries()
- start_time, end_time = dataset.time_range()
+ start_time, end_time = dataset.temporal_boundaries()
dataset_info['data_source'] = 'rcmed'
dataset_info['dataset_id'] = dataset.origin['dataset_id']
@@ -279,7 +279,7 @@
for ds in datasets:
ds_lat_min, ds_lat_max, ds_lon_min, ds_lon_max = ds.spatial_boundaries()
- ds_start, ds_end = ds.time_range()
+ ds_start, ds_end = ds.temporal_boundaries()
if ds_lat_min < lat_min:
lat_min = ds_lat_min
diff --git a/ocw_config_runner/evaluation_creation.py b/ocw_config_runner/evaluation_creation.py
index 88394de..5236957 100644
--- a/ocw_config_runner/evaluation_creation.py
+++ b/ocw_config_runner/evaluation_creation.py
@@ -129,10 +129,10 @@
bounds = Bounds(subset[0], subset[1], subset[2], subset[3], start, end)
if reference:
- reference = dsp.safe_subset(bounds, reference)
+ reference = dsp.safe_subset(reference, bounds)
if targets:
- targets = [dsp.safe_subset(bounds, t) for t in targets]
+ targets = [dsp.safe_subset(t, bounds) for t in targets]
if temporal_time_delta:
resolution = timedelta(temporal_time_delta)
diff --git a/ocw_config_runner/example/time_series_plot_example.yaml b/ocw_config_runner/example/time_series_plot_example.yaml
index b5599cc..5e45229 100644
--- a/ocw_config_runner/example/time_series_plot_example.yaml
+++ b/ocw_config_runner/example/time_series_plot_example.yaml
@@ -24,7 +24,7 @@
plots:
- type: time_series
- time_range: monthly
+ temporal_boundaries: monthly
subregions:
- [-10.0, 0.0, -19.0, 19.0]
diff --git a/ocw_config_runner/plot_generation.py b/ocw_config_runner/plot_generation.py
index 392331d..3fc3adb 100644
--- a/ocw_config_runner/plot_generation.py
+++ b/ocw_config_runner/plot_generation.py
@@ -141,11 +141,11 @@
def _draw_time_series_plot(evaluation, plot_config):
""""""
- time_range_info = plot_config['time_range']
+ temporal_boundaries_info = plot_config['temporal_boundaries']
ref_ds = evaluation.ref_dataset
target_ds = evaluation.target_datasets
- if time_range_info == 'monthly':
+ if temporal_boundaries_info == 'monthly':
ref_ds.values, ref_ds.times = utils.calc_climatology_monthly(ref_ds)
for t in target_ds:
@@ -163,8 +163,8 @@
labels = []
subset = dsp.subset(
- bound,
ref_ds,
+ bound,
subregion_name="R{}_{}".format(bound_count, ref_ds.name)
)
@@ -173,8 +173,8 @@
for t in target_ds:
subset = dsp.subset(
- bound,
t,
+ bound,
subregion_name="R{}_{}".format(bound_count, t.name)
)
results.append(utils.calc_time_series(subset))
diff --git a/ocw_config_runner/tests/test_config_writer.py b/ocw_config_runner/tests/test_config_writer.py
index c961447..ed22417 100644
--- a/ocw_config_runner/tests/test_config_writer.py
+++ b/ocw_config_runner/tests/test_config_writer.py
@@ -535,7 +535,7 @@
subset = out['subset']
ds_lat_min, ds_lat_max, ds_lon_min, ds_lon_max = self.dataset.spatial_boundaries()
- start, end = self.dataset.time_range()
+ start, end = self.dataset.temporal_boundaries()
self.assertEqual(ds_lat_min, subset[0])
self.assertEqual(ds_lat_max, subset[1])
@@ -557,7 +557,7 @@
subset = out['subset']
ds_lat_min, ds_lat_max, ds_lon_min, ds_lon_max = self.dataset.spatial_boundaries()
- start, end = self.dataset.time_range()
+ start, end = self.dataset.temporal_boundaries()
self.assertEqual(ds_lat_min, subset[0])
# Check that we actually used the different max lat value that we
diff --git a/test.sh b/test.sh
new file mode 100755
index 0000000..24ed0fe
--- /dev/null
+++ b/test.sh
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+set -e # causes the shell to exit if any subcommand or pipeline returns a non-zero status.
+echo ""
+echo "---------------- Running Smoke Tests ---------------"
+python test_smoke.py
+echo "---------------- Smoke Tests Successfully Completed---------------"
+echo ""
+echo "---------------- Running Unit Tests ---------------"
+nosetests -v --with-coverage --cover-package=ocw --nocapture
+echo "---------------- All Tests successfully completed ---------------"
+
diff --git a/test_smoke.py b/test_smoke.py
new file mode 100644
index 0000000..f07eaa3
--- /dev/null
+++ b/test_smoke.py
@@ -0,0 +1,139 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from pkg_resources import VersionConflict, DistributionNotFound, \
+ require
+from ocw.tests.test_local import create_netcdf_object
+from ocw.data_source import local
+from ocw import dataset_processor as dsp
+import os
+
+PIP_DEPENDENCIES_FILE = 'easy-ocw/ocw-pip-dependencies.txt'
+CONDA_DEPENDENCIES_FILE = 'easy-ocw/ocw-conda-dependencies.txt'
+SUCCESS_MARK = '\033[92m' + u'\u2713' + '\033[0m'
+FAILURE_MARK = '\033[91m' + u'\u274C' + '\033[0m'
+
+
+def fail(prefix):
+ print prefix + " " + FAILURE_MARK
+
+
+def success(prefix):
+ print prefix + " " + SUCCESS_MARK
+
+
+def check_dependencies(file):
+ ''' Verify all necessary dependencies are installed '''
+ for dep in file:
+ dep = dep.replace('\n', '')
+ # skip all comments and blank lines in dependency file
+ if '#' in dep or not dep:
+ continue
+ try:
+ require(dep)
+ success(dep)
+ except DistributionNotFound as df:
+ fail(dep)
+ dep = str(df).split(' ')[1][1:-1]
+ print '\n' + dep + ' dependency missing.'
+ print 'Please install it using "pip/conda install ' + dep + '"'
+ fail("\nDependencies")
+ end()
+ except VersionConflict as vc:
+ fail(dep)
+ print ("\nRequired version and installed version differ for the "
+ "following package:\n"
+ "Required version: " + dep)
+ dep_name = str(vc).split(' ')[0][1:] # First element is '('
+ dep_version = str(vc).split(' ')[1]
+ print "Installed version: " + dep_name + "==" + dep_version
+ fail("\nDependencies")
+ end()
+
+
+def check_dataset_loading():
+ ''' Try loading test dataset '''
+ dataset = None
+ try:
+ file_path = create_netcdf_object()
+ dataset = local.load_file(file_path, variable_name='value')
+ except Exception as e:
+ fail("\nDataset loading")
+ print "The following error occured"
+ print e
+ end(dataset)
+ success("\nDataset loading")
+ return dataset
+
+
+def check_some_dataset_functions(dataset):
+ ''' Run a subset of dataset functions and check for any exception '''
+ try:
+ dataset.spatial_boundaries()
+ dataset.time_range()
+ dataset.spatial_resolution()
+ except Exception as e:
+ fail("\nDataset functions")
+ print "Following error occured:"
+ print str(e)
+ end(dataset)
+ success("\nDataset functions")
+
+
+def check_some_dsp_functions(dataset):
+ '''
+ Run a subset of dataset processor functions and check for
+ any kind of exception.
+ '''
+ try:
+ dsp.temporal_rebin(dataset, 'annual')
+ dsp.ensemble([dataset])
+ except Exception as e:
+ fail("\nDataset processor functions")
+ print "Following error occured:"
+ print str(e)
+ end()
+ finally:
+ os.remove(dataset.origin['path'])
+ success("\nDataset processor functions")
+
+
+def end(dataset=None):
+ ''' Exit program with status 1 '''
+ if dataset:
+ os.remove(dataset.origin['path'])
+ # End program execution with return code 1
+ print '\033[91m' + "Some checks were unsuccessful"
+ print "Please fix them and run the test again." + '\033[0m'
+ exit(1)
+
+
+def main():
+ pip_file = open(PIP_DEPENDENCIES_FILE, 'r')
+ conda_file = open(CONDA_DEPENDENCIES_FILE, 'r')
+ print "Checking installed dependencies\n"
+ check_dependencies(conda_file)
+ check_dependencies(pip_file)
+ success("\nDependencies")
+ dataset = check_dataset_loading()
+ check_some_dataset_functions(dataset)
+ check_some_dsp_functions(dataset)
+ success("\nAll checks successfully completed")
+ return 0
+
+if __name__ == '__main__':
+ main()
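The dependency check in `test_smoke.py` relies on `pkg_resources.require`, which raises `DistributionNotFound` for a missing package and `VersionConflict` for a wrong version. A condensed, Python-3-compatible sketch of that pattern (`dependency_status` is a hypothetical name; the smoke test itself prints marks and exits instead of returning a status):

```python
from pkg_resources import require, DistributionNotFound, VersionConflict

def dependency_status(requirement):
    # Classify a pip-style requirement string (e.g. 'numpy>=1.8') as
    # 'ok', 'missing', or 'conflict', rather than aborting on failure.
    try:
        require(requirement)
        return 'ok'
    except DistributionNotFound:
        return 'missing'
    except VersionConflict:
        return 'conflict'
```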