
Insitu Data in Parquet format stored in S3

How to ingest an insitu JSON file to Parquet

  • Assumption: K8s is successfully deployed

  • Download this repo

  • (optional) create a separate Python 3.6 environment

  • install dependencies

      python3 setup.py install
    
  • set up AWS tokens

      export AWS_ACCESS_KEY_ID=xxx
      export AWS_SECRET_ACCESS_KEY=xxx
      export AWS_SESSION_TOKEN=really.long.token
      export AWS_REGION=us-west-2
    
    • alternatively, the default profile under ~/.aws/credentials can be set up as well, for example (placeholder values mirroring the exports above):
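
        # placeholder values; real tokens come from your AWS account
        [default]
        aws_access_key_id = xxx
        aws_secret_access_key = xxx
        aws_session_token = really.long.token
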
  • add the current directory to PYTHONPATH

      export PYTHONPATH="${PYTHONPATH}:/absolute/path/to/current/dir/"
    
  • run the script:

      python3 -m parquet_cli.ingest_s3 --help
    
    • sample script:

        python3 -m parquet_cli.ingest_s3 \
          --LOG_LEVEL 30 \
          --CDMS_DOMAIN https://doms.jpl.nasa.gov/insitu  \
          --CDMS_BEARER_TOKEN Mock-CDMS-Flask-Token  \
          --PARQUET_META_TBL_NAME cdms_parquet_meta_dev_v1  \
          --BUCKET_NAME cdms-dev-ncar-in-situ-stage  \
          --KEY_PREFIX cdms_icoads_2017-01-01.json
      
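    • optional sanity check: assuming --BUCKET_NAME and --KEY_PREFIX point at the input JSON object (as in the sample above), a short boto3 sketch can confirm the object is visible with the exported AWS credentials before ingesting:

        import boto3

        # Pre-flight check: list the input JSON object from the sample
        # invocation using the credentials exported earlier.
        s3 = boto3.client('s3')
        resp = s3.list_objects_v2(
            Bucket='cdms-dev-ncar-in-situ-stage',
            Prefix='cdms_icoads_2017-01-01.json',
        )
        for obj in resp.get('Contents', []):
            print(obj['Key'], obj['Size'])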

Ref:

  • how to partially overwrite a partitioned Parquet dataset:
    https://stackoverflow.com/questions/38487667/overwrite-specific-partitions-in-spark-dataframe-write-method?noredirect=1&lq=1

    > Finally! This is now a feature in Spark 2.3.0: SPARK-20236
    > To use it, you need to set the spark.sql.sources.partitionOverwriteMode setting to dynamic, the dataset needs to be partitioned, and the write mode overwrite. Example:

      spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
      data.toDF().write.mode("overwrite").format("parquet").partitionBy("date", "name").save("s3://path/to/somewhere")

    see also: https://stackoverflow.com/questions/50006526/overwrite-only-some-partitions-in-a-partitioned-spark-dataset
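
  • a PySpark equivalent of the Scala snippet above, as a minimal sketch (the DataFrame rows, partition columns, and the s3a path are placeholder assumptions, not values from this repo):

      from pyspark.sql import SparkSession

      # Dynamic mode rewrites only the partitions present in the incoming
      # DataFrame instead of truncating the whole dataset first.
      spark = SparkSession.builder.appName('partial-parquet-overwrite').getOrCreate()
      spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')

      # Placeholder rows; a real job would carry the insitu records.
      df = spark.createDataFrame(
          [('2017-01-01', 'icoads', 1.0)],
          ['date', 'name', 'value'],
      )

      # Only the ('2017-01-01', 'icoads') partition is replaced at the target
      # path; s3a:// is assumed for Spark's Hadoop S3 connector.
      (df.write
          .mode('overwrite')
          .format('parquet')
          .partitionBy('date', 'name')
          .save('s3a://path/to/somewhere'))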