Software for in situ data analytics

Clone this repo.

In-situ data in Parquet format stored in S3

How to ingest an in-situ JSON file to Parquet

  • Assumption: K8s is successfully deployed

  • Download this repo

  • (optional) create a separate Python 3.6 environment

  • install dependencies

      python3 setup.py install
  • setup AWS tokens

      export AWS_ACCESS_KEY_ID=xxx
      export AWS_SECRET_ACCESS_KEY=xxx
      export AWS_SESSION_TOKEN=really.long.token
      export AWS_REGION=us-west-2
    • alternatively, the default profile under ~/.aws/credentials can be set up instead
  • add the current directory to PYTHONPATH (e.g. export PYTHONPATH=$PWD)

  • run the script:

      python3 -m parquet_cli.ingest_s3 --help
    • sample script:

        python3 -m parquet_cli.ingest_s3 \
          --LOG_LEVEL 30 \
          --CDMS_DOMAIN  \
          --CDMS_BEARER_TOKEN Mock-CDMS-Flask-Token  \
          --PARQUET_META_TBL_NAME cdms_parquet_meta_dev_v1  \
          --BUCKET_NAME cdms-dev-ncar-in-situ-stage  \
          --KEY_PREFIX cdms_icoads_2017-01-01.json
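For the credentials-file alternative mentioned in the AWS setup step above, a default profile in ~/.aws/credentials would look like the following (placeholder values shown; the region belongs in ~/.aws/config rather than the credentials file):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx
aws_session_token = really.long.token

# ~/.aws/config
[default]
region = us-west-2
```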


  • how to partially overwrite a Parquet file
> Finally! This is now a feature in Spark 2.3.0: SPARK-20236
> To use it, set spark.sql.sources.partitionOverwriteMode to dynamic; the dataset must be partitioned, and the write mode must be overwrite. Example:

      data.toDF().write.mode("overwrite").format("parquet").partitionBy("date", "name").save("s3://path/to/somewhere")
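The quoted answer can be sketched in PySpark as follows. This is a sketch only, assuming Spark >= 2.3.0, an active SparkSession named `spark`, and a DataFrame `df` that has `date` and `name` columns; the S3 path is illustrative:

```python
# Enable dynamic partition overwrite (SPARK-20236, Spark >= 2.3.0).
# In "dynamic" mode, mode("overwrite") replaces only the partitions
# present in `df`; other partitions under the target path are untouched.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(df.write
   .mode("overwrite")
   .partitionBy("date", "name")
   .parquet("s3://path/to/somewhere"))
```

Note that the setting can also be passed per-write via `.option("partitionOverwriteMode", "dynamic")` instead of changing the session-wide configuration.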