Backup of Gobblin GitHub Wiki
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index f8858a2..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,51 +0,0 @@
-.classpath*
-.project*
-.settings
-.idea
-*.iml
-*.iws
-*.ipr
-*.swp
-*.swo
-*.log
-**/.classpath
-**/.project
-**/.settings
-**/.idea
-**/*.iml
-**/*.iws
-**/*.ipr
-**/*.swp
-**/*.swo
-**/*.log
-*/bin
-build
-**/build
-.gradle
-**/.gradle
-test-output
-**/test-output
-dist
-target
-tmp
-out
-**/out
-output
-gobblin-test/basicTest
-gobblin-test/jobOutput
-gobblin-test/state-store
-gobblin-test/metrics
-gobblin-test/byteman
-gobblin-test/locks
-gobblin-test/mr-jobs
-/eclipse_build
-.project
-.classpath
-/build
-*/build
-out/
-*/bin/
-**/mainGeneratedDataTemplate
-**/mainGeneratedRest
-gobblin-dist*
-gobblin.tar.gz
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 46d33fb..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,35 +0,0 @@
-language: java
-
-addons:
-  apt:
-    packages:
-      - libaio-dev
-      - libdbus-glib-1-dev
-      - xsltproc
-
-before_cache:
-  - rm -f $HOME/.gradle/caches/modules-2/modules-2.lock
-
-cache:
-  directories:
-    - $HOME/.gradle/caches/
-    - $HOME/.gradle/wrapper/
-
-before_install:
-  - "export DISPLAY=:99.0"
-  - "sh -e /etc/init.d/xvfb start"
-  - sleep 3 # give xvfb some time to start
-
-install: ./travis/build.sh
-
-script: ./travis/test.sh
-
-after_failure: ./travis/junit-errors-to-stdout.sh
-
-env:
-  - USEHADOOP2=false
-  - USEHADOOP2=true
-
-jdk:
-  - oraclejdk8
-  - oraclejdk7
\ No newline at end of file
diff --git a/CHANGELOG b/CHANGELOG
deleted file mode 100644
index 852bb0d..0000000
--- a/CHANGELOG
+++ /dev/null
@@ -1,134 +0,0 @@
-GOBBLIN 0.6.2
-=============
-
-## NEW FEATURES
-* [Admin Dashboard] Added a web based GUI for exploring running and finished jobs in a running Gobblin daemon (thanks Eric Ogren).
-* [Admin Dashboard] Added a CLI for finding jobs in the job history store and seeing their run details (thanks Eric Ogren).
-* [Configuration Management] WIP: Configuration management library. Will enable Gobblin to be dataset aware, i.e., to dynamically load and apply different configurations to each dataset in a single Gobblin job.
-** APIs: APIs for configuration stores and configuration client.
-** Configuration Library: loads low level configurations from a configuration store, resolves configuration dependencies / imports, and performs value interpolation.
-* [Distcp] Allow using *.ready files as markers for files that should be copied, and deletion of *.ready files once the file has been copied.
-* [Distcp] Added file filters to recursive copyable dataset for distcp. Allows copying only files satisfying a filter under a base directory.
-* [Distcp] Copied files that fail to be published are persisted for future runs. Future runs can recover the already copied file instead of re-doing the byte transfer.
-* [JDBC] Can use password encryption for JDBC sources.
-* [YARN] Added email notifications on YARN application shutdown.
-* [YARN] Added event notifications on YARN container status changes.
-* [Metrics] Added metric filters based on name and type of the metrics.
-* [Dataset Management] POC embedded sql for config-driven retention management.
-* [Exactly Once] POC for Gobblin managed exactly once semantics on publisher.
-
-## BUG FIXES
-* **Core** File based source includes previously failed WorkUnits even if there are no new files in the source (thanks Joel Baranick).
-* **Core** Ensure that output file list does not contain duplicates due to task retries (thanks Joel Baranick).
-* **Core** Fix NPE in CliOptions.
-* **Core/YARN** Limit Props -> Typesafe Config conversion to a few keys to prevent overwriting of certain properties.
-* **Utility** Fixed writer mkdirs for S3.
-* **Metrics** Made Scheduled Reporter threads into daemon threads to prevent hanging application.
-* **Metrics** Fixed enqueuing of events on event reporters that was causing job failure if event frequency was too high.
-* **Build** Fix POM dependencies on gobblin-rest-api.
-* **Build** Added conjars and cloudera repository to all projects (fixes builds for certain users).
-* **Build** Fix the distribution tarball creation (thanks Joel Baranick).
-* **Build** Added option to exclude Hadoop and Hive jars from distribution tarball.
-* **Build** Removed log4j.properties from runtime resources.
-* **Compaction** Fixed main class in compaction manifest file (thanks Lorand Bendig).
-* **JDBC** Correctly close JDBC connections.
-
-## IMPROVEMENTS
-* [Build] Add support for publishing libraries to maven local (thanks Joel Baranick).
-* [Build] In preparation for the Gradle 2 migration, added ext. prefix to custom gradle properties.
-* [Build] Can generate project dependencies graph in dot format.
-* [Metrics] Migrated Kafka reporter and Output stream reporter to Root Metrics Reporter managed reporting.
-* [Metrics] The last metric emission in the application has a "final" tag for easier Hive identification.
-* [Metrics] Metrics for Gobblin on YARN include cluster tags.
-* [Hive] Upgraded Hive to version 1.0.1.
-* [Distcp] Add file size to distcp success notifications.
-* [Distcp] Each work unit in distcp contains exactly one Copyable File.
-* [Distcp] Copy source can set upstream timestamps for SLA events emitted on publish time.
-* [Scheduling] Added Gobblin Oozie config files.
-* [Documentation] Improved javadocs.
-
-
-GOBBLIN 0.6.1
--------------
-
-## BUG FIXES
-
-- **Build/release** Adding build instrumentation for generation of rest-api-* artifacts
-- **Build/release** Various fixes to decrease reliance of unit tests on timing.
-
-## OTHER IMPROVEMENTS
-
-- **Core** Add stability annotations for APIs. We plan on starting to annotate interfaces/classes to specify how likely the API is to change.
-- **Runtime** Made it an option for the job scheduler to wait for running jobs to complete
-- **Runtime** Fixing dangling MetricContext creation in ForkOperator
-
-## EXTERNAL CONTRIBUTIONS
-
-- kadaan, joel.baranick:
-  + Added a fix for a hadoop issue (https://issues.apache.org/jira/browse/HADOOP-12169) which affects the s3a filesystem and results in duplicate files appearing in the results of ListStatus. In the process, extracted a base class for all FsHelper classes based on the hadoop filesystem.
-
-
-GOBBLIN 0.6.0
---------------
-
-NEW FEATURES
-
-* [Compaction] Added M/R compaction/de-duping for hourly data
-* [Compaction] Added late data handling for hourly and daily M/R compaction: https://github.com/linkedin/gobblin/wiki/Compaction#handling-late-records; added support for triggering M/R compaction if late data exceeds a threshold
-* [I/O] Added support for using Hive SerDe's through HiveWritableHdfsDataWriter
-* [I/O] Added the concept of data partitioning to writers: https://github.com/linkedin/gobblin/wiki/Partitioned-Writers
-* [Runtime] Added CliLocalJobLauncher for launching single jobs from the command line.
-* [Converters] Added AvroSchemaFieldRemover that can remove specific fields from a (possibly recursive) Avro schema.
-* [DQ] Added new row-level policies RecordTimestampLowerBoundPolicy and AvroRecordTimestampLowerBoundPolicy for checking if a record timestamp is too far in the past.
-* [Kafka] Added schema registry API to KafkaAvroExtractor which enables supports for various Kafka schema registry implementations (e.g. Confluent's schema registry). 
-* [Build/Release] Added build instrumentation to publish artifacts to Maven Central
-
-BUG FIXES
-
-* [Retention management] Trash handles deletes of files already existing in trash correctly.
-* [Kafka] Fixed an issue that may cause Kafka adapter to miss data if the fork fails.
-
-OTHER IMPROVEMENTS
-
-* [Runtime] Added metrics for job executions
-* [Metrics] Added a root metric context to keep track of GC of metrics and metric contexts and make sure those are properly reported
-* [Compaction] Improve topic isolation in MRCompactor
-* [Build/release] Java version compatibility raised to Java 7.
-* [Runtime] Deprecated COMMIT_ON_PARTIAL_SUCCESS and added a new policy for successful extracts
-* [Retention management] Async trash implementation for parallel deletions.
-* [Metrics] Added tracking events emission when data gets published
-* [Retention management] Added support for parallel execution to the dataset cleaner
-* [Runtime] Update job execution info in the execution history store upon every task completion
-
-INCUBATION
-
-Note: these are new features which are under active development and may be subject to significant changes.
-
-* [gobblin-ce] Adding support for Gobblin Continuous Execution on Yarn
-* [distcp-ng] Started work on bulk transfer (file copies) using Gobblin
-* [distcp-ng] Added a light-weight Hadoop FileSystem implementation for file transfer from SFTP
-* [gobblin-config] Added API for dataset driven
-
-EXTERNAL CONTRIBUTIONS
-
-We would like to thank all our external contributors for helping improve Gobblin.
-
-* kadaan, joel.baranick: 
-    - Separate publisher filesystem from writer filesystem
-    - Support for generating Idea projects with the correct language level (Java 7)
-    - Fixed yarn conf path in gobblin-yarn.sh
-* mwol(Maurice Wolter) 
-    - Implemented new class AvroCombineFileSplit which stores the avro schema for each split, determined by the corresponding input file.
-* cheleb(NOUGUIER Olivier)
-    - Add support for maven install
-* dvenkateshappa 
-    - bugfix to RestApiExtractor.java
-    - Added an excluding column list, which can be used for Salesforce configurations with a huge list of columns.
-* klyr (Julien Barbot) 
-    - bugfix to gobblin-mapreduce.sh
-* gheo21 
-    - Bumped kafka dependency to 2.11
-* ahollenbach (Andrew Hollenbach)
-   -  configuration improvements for standalone mode
-* lbendig (Lorand Bendig)
-   - fixed a bug in DatasetState creation
diff --git "a/Camus-\342\206\222-Gobblin-Migration.md" "b/Camus-\342\206\222-Gobblin-Migration.md"
new file mode 100644
index 0000000..1cafbf2
--- /dev/null
+++ "b/Camus-\342\206\222-Gobblin-Migration.md"
@@ -0,0 +1,100 @@
+This page is a guide for [Camus](https://github.com/linkedin/camus) → Gobblin migration, intended for users and organizations currently using Camus. Camus is LinkedIn's previous-generation Kafka-HDFS pipeline.
+
+It is recommended that one read [Kafka-HDFS Ingestion](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion) before reading this page. This page focuses on the Kafka-related configuration properties in Gobblin vs Camus.
+
+## Advantages of Migrating to Gobblin
+
+**Operability**: Gobblin is a generic data ingestion pipeline that supports not only Kafka but several other data sources, and new data sources can be easily added. If you have multiple data sources, using a single tool to ingest data from these sources is a lot more pleasant operationally than deploying a separate tool for each source.
+
+**Performance**: The performance of Gobblin in MapReduce mode is comparable to Camus', and faster in some cases (e.g., when the time to pull a Kafka topic is not proportional to its average record size) due to a better mapper load-balancing algorithm. In the new continuous ingestion mode (currently under development), the performance of Gobblin will further improve.
+
+**Metrics and Monitoring**: Gobblin has a powerful end-to-end metrics collection and reporting module for monitoring purposes, making it much easier to spot problems in time and find the root causes. See the "Gobblin Metrics" section in the wiki and [this post](https://github.com/linkedin/gobblin/wiki/Gobblin-Metrics:-next-generation-instrumentation-for-applications) for more details.
+
+**Features**: In addition to the above, there are several other useful features for Kafka-HDFS ingestion in Gobblin that are not available in Camus, e.g., [handling late events in data compaction](https://github.com/linkedin/gobblin/wiki/Compaction#handling-late-records); dataset retention management; converter and quality checker; all-or-nothing job commit policy, etc. Also, Gobblin is under active development and new features are added frequently.
+
+## Kafka Ingestion Related Job Config Properties
+
+This list contains Kafka-specific properties. For general configuration properties please refer to [Configuration Properties Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary).
+
+### Config properties for pulling Kafka topics
+
+| Gobblin Property   |  Corresponding Camus Property | Default value |
+|----------|-------------|:------:|
+| topic.whitelist |  kafka.whitelist.topics | .*|
+| topic.blacklist |  kafka.blacklist.topics  | a^ |
+| mr.job.max.mappers | mapred.map.tasks | 100 |
+| kafka.brokers  | kafka.host.url | (required) |
+| topics.move.to.latest.offset  | kafka.move.to.last.offset.list | empty |
+| bootstrap.with.offset  | none | latest |
+| reset.on.offset.out.of.range | none | nearest |
+
+Remarks:
+
+* topic.whitelist and topic.blacklist support regex.
+* topics.move.to.latest.offset: Topics in this list will always start from the latest offset (i.e., no records will be pulled). To move all topics to the latest offset, use "all". This property is useful in Camus for moving a new topic to the latest offset, but in Gobblin it should rarely, if ever, be used, since you can use bootstrap.with.offset to achieve the same purpose more conveniently.
+* bootstrap.with.offset: For new topics / partitions, this property controls whether they start at the earliest offset or the latest offset. Possible values: earliest, latest, skip.
+* reset.on.offset.out.of.range: This property controls what to do if a partition's previously persisted offset is out of the range of the currently available offsets. Possible values: earliest (always move to earliest available offset), latest (always move to latest available offset), nearest (move to earliest if the previously persisted offset is smaller than the earliest offset, otherwise move to latest), skip (skip this partition).
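+
+For illustration, a minimal sketch of the Kafka-related portion of a Gobblin job config combining these properties might look like the following; the broker address and topic names are hypothetical placeholders:
+
+```
+# Pull only these topics (regex supported); hypothetical topic names
+topic.whitelist=NewUserEvent,PageViewEvent
+# Kafka brokers to read from; hypothetical host
+kafka.brokers=kafkabroker.example.com:9092
+# Cap the number of mappers used by the MR job
+mr.job.max.mappers=100
+# New topics/partitions start from the earliest available offset
+bootstrap.with.offset=earliest
+# If a persisted offset is out of range, move to the nearest available offset
+reset.on.offset.out.of.range=nearest
+```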
+
+### Config properties for compaction
+
+Gobblin compaction is comparable to Camus sweeper, which can deduplicate records in an input folder. Compaction is useful for Kafka-HDFS ingestion for two reasons:
+
+1. Although Gobblin guarantees no loss of data, in rare circumstances where data is published on HDFS but checkpoints failed to be persisted into the state store, it may pull the same records twice.
+
+2. If you have a hierarchy of Kafka clusters where topics are replicated among the Kafka clusters, duplicate records may be generated during replication.
+
+Below are the configuration properties related to compaction. For more information please visit the MapReduce Compaction section in the [Compaction](https://github.com/linkedin/gobblin/wiki/Compaction) page.
+
+| Gobblin Property   |  Corresponding Camus Property | Default value |
+|----------|-------------|:------:|
+| compaction.input.dir |  camus.sweeper.source.dir | (required) |
+| compaction.dest.dir |  camus.sweeper.dest.dir | (required) |
+| compaction.input.subdir |  camus.sweeper.source.dir | hourly |
+| compaction.dest.subdir |  camus.sweeper.dest.dir | daily |
+| compaction.tmp.dest.dir | camus.sweeper.tmp.dir | /tmp/gobblin-compaction |
+| compaction.whitelist |  camus.sweeper.whitelist | .* |
+| compaction.blacklist |  camus.sweeper.blacklist | a^ |
+| compaction.high.priority.topics | none |a^|
+| compaction.normal.priority.topics | none |a^|
+| compaction.input.deduplicated | none | false |
+| compaction.output.deduplicated | none | true |
+| compaction.file.system.uri | none ||
+| compaction.timebased.max.time.ago |  none | 3d |
+| compaction.timebased.min.time.ago | none | 1d |
+| compaction.timebased.folder.pattern | none | YYYY/mm/dd |
+| compaction.thread.pool.size | num.threads | 20 |
+| compaction.max.num.reducers | max.files | 900 |
+| compaction.target.output.file.size | camus.sweeper.target.file.size | 268435456 |
+| compaction.mapred.min.split.size | mapred.min.split.size | 268435456 |
+| compaction.mapred.max.split.size | mapred.max.split.size | 268435456 |
+| compaction.mr.job.timeout.minutes | none | |
+
+Remarks:
+
+* The following properties support regex: compaction.whitelist, compaction.blacklist, compaction.high.priority.topics, compaction.normal.priority.topics
+* compaction.input.dir is the parent folder of input topics, e.g., /data/kafka_topics, which contains topic folders such as /data/kafka_topics/Topic1, /data/kafka_topics/Topic2, etc. Note that Camus uses camus.sweeper.source.dir both as the input folder of Camus sweeper (i.e., compaction), and as the output folder for ingesting Kafka topics. In Gobblin, one should use data.publisher.final.dir as the output folder for ingesting Kafka topics.
+* compaction.dest.dir is the parent folder of output topics, e.g., /data/compacted_kafka_topics.
+* compaction.input.subdir is the subdir name of input topics, if it exists. For example, if the input topics are partitioned by hour, e.g., /data/kafka_topics/Topic1/hourly/2015/10/06/20, then compaction.input.subdir should be 'hourly'.
+* compaction.dest.subdir is the subdir name of output topics, if it exists. For example, if you want to publish compacted data into day-partitioned folders, e.g., /data/compacted_kafka_topics/Topic1/daily/2015/10/06, then compaction.dest.subdir should be 'daily'.
+* There are 3 priority levels: high, normal, low. Topics not included in compaction.high.priority.topics or compaction.normal.priority.topics are considered low priority.
+* compaction.input.deduplicated and compaction.output.deduplicated control the behavior of the compaction regarding deduplication. Please see the [Compaction](https://github.com/linkedin/gobblin/wiki/Compaction) page for more details.
+* compaction.timebased.max.time.ago and compaction.timebased.min.time.ago control the earliest and latest input folders to process, when using `MRCompactorTimeBasedJobPropCreator`. The format is ?m?d?h, e.g., 3m or 2d10h (m = month, not minute). For example, suppose `compaction.timebased.max.time.ago=3d`, `compaction.timebased.min.time.ago=1d` and the current time is 10/07 9am. Folders whose timestamps are before 10/04 9am, or folders whose timestamps are after 10/06 9am, will not be processed.
+* compaction.timebased.folder.pattern: time pattern in the folder path, when using `MRCompactorTimeBasedJobPropCreator`. This should come after `compaction.input.subdir`, e.g., if the input folder to a compaction job is `/data/compacted_kafka_topics/Topic1/daily/2015/10/06`, this property should be `YYYY/mm/dd`.
+* compaction.thread.pool.size: how many compaction MR jobs to run concurrently.
+* compaction.max.num.reducers: max number of reducers for each compaction job
+* compaction.target.output.file.size: This also controls the number of reducers. The number of reducers will be the smaller of `compaction.max.num.reducers` and `<input data size> / compaction.target.output.file.size`.
+* compaction.mapred.min.split.size and compaction.mapred.max.split.size are used to control the number of mappers.
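+
+For illustration, a rough Gobblin counterpart of a typical Camus sweeper setup might look like the sketch below; the paths and the topic whitelist are hypothetical, and the time-based values simply mirror the defaults in the table above:
+
+```
+# Hypothetical paths; compaction.input.dir is the parent folder of the ingested topics
+compaction.input.dir=/data/kafka_topics
+compaction.dest.dir=/data/compacted_kafka_topics
+compaction.input.subdir=hourly
+compaction.dest.subdir=daily
+# Only compact these topics (regex supported); hypothetical topic name
+compaction.whitelist=NewUserEvent.*
+# Process folders between 3 days and 1 day old
+compaction.timebased.max.time.ago=3d
+compaction.timebased.min.time.ago=1d
+```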
+
+## Deployment and Checkpoint Management
+
+For deploying Gobblin in standalone or MapReduce mode, please see the [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) page.
+
+Gobblin and Camus checkpoint management are similar in the sense that they both create checkpoint files in each run, and the next run will load the checkpoint files created by the previous run and start from there. Their difference is that Gobblin creates a single checkpoint file per job run or per dataset per job run, and provides two job commit policies: `full` and `partial`. In `full` mode, data are only committed for the job/dataset if all workunits of the job/dataset succeeded. Otherwise, the checkpoint of all workunits/datasets will be rolled back. Camus writes one checkpoint file per mapper, and only supports the `partial` mode. For Gobblin's state management, please refer to the [Wiki page](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks) for more information.
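+
+As a small illustration (assuming the commit policy is selected via the `job.commit.policy` property listed in the Configuration Properties Glossary), a job that should publish data and checkpoints only when every workunit succeeds would set:
+
+```
+# Commit data and state only if every workunit in the job/dataset succeeded
+job.commit.policy=full
+```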
+
+## Migrating from Camus to Gobblin in Production
+
+If you are currently running in production, you can use the following steps to migrate to Gobblin:
+1. Deploy Gobblin based on the instructions in [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) and [Kafka-HDFS Ingestion](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion), and set the properties mentioned in this page as well as other relevant properties in [Configuration Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary) to the appropriate values.
+2. Whitelist the topics in Gobblin ingestion, and schedule Gobblin to run at your desired frequency.
+3. Once Gobblin starts running, blacklist these topics in Camus.
+4. If compaction is applicable to you, set up the compaction jobs based on instructions in [Kafka-HDFS Ingestion](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion) and [Compaction](https://github.com/linkedin/gobblin/wiki/Compaction). Whitelist the topics you want to migrate in Gobblin and blacklist them in Camus.
\ No newline at end of file
diff --git a/gobblin-docs/developer-guide/CodingStyle.md b/CodingStyle.md
similarity index 100%
rename from gobblin-docs/developer-guide/CodingStyle.md
rename to CodingStyle.md
diff --git a/Compaction.md b/Compaction.md
new file mode 100644
index 0000000..a93372b
--- /dev/null
+++ b/Compaction.md
@@ -0,0 +1,296 @@
+Table of Contents
+--------------------
+
+* [MapReduce Compactor](https://github.com/linkedin/gobblin/wiki/Compaction#mapreduce-compactor)
+* [Hive Compactor](https://github.com/linkedin/gobblin/wiki/Compaction#hive-compactor)
+   - [Handling Late Records](https://github.com/linkedin/gobblin/wiki/Compaction#handling-late-records) 
+
+Compaction can be used to post-process files pulled by Gobblin with certain semantics. Deduplication is one of the common reasons to do compaction, e.g., you may want to
+
+* deduplicate on all fields of the records.
+* deduplicate on key fields of the records, keeping the one with the latest timestamp for records with the same key.
+
+This is because duplicates can be generated for multiple reasons, both intended and unintended:
+
+* For ingestion from data sources with mutable records (e.g., relational databases), instead of ingesting a full snapshot of a table every time, one may wish to ingest only the records that were changed since the previous run (i.e., delta records), and merge these delta records with previously generated snapshots in a compaction. In this case, for records with the same primary key, the one with the latest timestamp should be kept.
+* The data source you ingest from may have duplicate records, e.g., if you have a hierarchy of Kafka clusters where topics are replicated among the Kafka clusters, duplicate records may be generated during the replication. In some data sources duplicate records may also be produced by the data producer.
+* In rare circumstances, Gobblin may pull the same data twice, thus creating duplicate records. This may happen if Gobblin publishes the data successfully, but for some reason fails to persist the checkpoints (watermarks) into the state store.
+
+Gobblin provides two compactors out-of-the-box, a MapReduce compactor and a Hive compactor.
+
+## MapReduce Compactor
+
+The MapReduce compactor can be used to deduplicate on all or certain fields of the records. For duplicate records, one of them will be preserved; there is no guarantee which one will be preserved.
+
+A use case of MapReduce Compactor is for Kafka records deduplication. We will use the following example use case to explain the MapReduce Compactor.
+
+### Example Use Case
+
+Suppose we ingest data from a Kafka broker, and we would like to publish the data by hour and by day, both of which are deduplicated:
+- Data in the Kafka broker is first ingested into an `hourly_staging` folder, e.g., `/data/kafka_topics/NewUserEvent/hourly_staging/2015/10/29/08...`
+- A compaction with deduplication runs hourly, consumes data in `hourly_staging` and publishes data into `hourly`, e.g., `/data/kafka_topics/NewUserEvent/hourly/2015/10/29/08...`
+- A non-deduping compaction runs daily, consumes data in `hourly` and publishes data into `daily`, e.g., `/data/kafka_topics/NewUserEvent/daily/2015/10/29...`
+
+### Basic Usage
+
+`MRCompactor.compact()` is the entry point for MapReduce-based compaction. The input data to be compacted is specified by `compaction.input.dir`. Each subdir under `compaction.input.dir` is considered a _topic_. Each topic may contain multiple _datasets_, each of which is a unit for compaction. It is up to `MRCompactorJobPropCreator` to determine what is a dataset under each topic. If a topic has multiple levels of folders, subsequent levels can be specified using `compaction.input.subdir`.
+
+In the above example use case, for hourly compaction, each dataset contains an hour's data in the `hourly_staging` folder, e.g., `/data/kafka_topics/NewUserEvent/hourly_staging/2015/10/29/08`; for daily compaction, each dataset contains the 24 hourly folders of a day, e.g., `/data/kafka_topics/NewUserEvent/hourly/2015/10/29`. In hourly compaction, you may use the following config properties:
+
+```
+compaction.input.dir=/data/kafka_topics
+compaction.dest.dir=/data/kafka_topics
+compaction.input.subdir=hourly_staging
+compaction.dest.subdir=hourly
+compaction.folder.pattern=YYYY/MM/dd
+compaction.timebased.max.time.ago=3h
+compaction.timebased.min.time.ago=1h
+compaction.jobprops.creator.class=gobblin.compaction.mapreduce.MRCompactorTimeBasedJobPropCreator
+compaction.job.runner.class=gobblin.compaction.mapreduce.avro.MRCompactorAvroKeyDedupJobRunner (if your data is Avro)
+```
+
+If your data format is not Avro, you can implement a different job runner class for deduplicating your data format. `compaction.timebased.max.time.ago` and `compaction.timebased.min.time.ago` are used to control the earliest and latest folders to be processed, e.g., if their values are 3h and 1h, respectively, and the current time is 10/07 9:20am, it will not process folders for 10/07/06 or before (since they are more than 3 hours ago) or folders for 10/07/09 (since they are less than 1 hour ago).
+
+### Non-deduping Compaction via Map-only Jobs
+
+There are two types of non-deduping compaction.
+- **Type 1**: deduplication is not needed, for example you simply want to consolidate files in 24 hourly folders into a single daily folder.
+- **Type 2**: deduplication is needed, i.e., the published data should not contain duplicates, but the input data are already deduplicated. The daily compaction in the above example use case is of this type.
+
+Property `compaction.input.deduplicated` specifies whether the input data are already deduplicated (default is false), and property `compaction.output.deduplicated` specifies whether the output data should be deduplicated (default is true). For type 1, set both to false. For type 2, set both to true.
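+
+For example, the daily compaction in the example use case (a type 2 job) might be configured roughly as in the sketch below, modeled on the hourly config above; the paths are hypothetical and the job runner class is omitted:
+
+```
+compaction.input.dir=/data/kafka_topics
+compaction.dest.dir=/data/kafka_topics
+compaction.input.subdir=hourly
+compaction.dest.subdir=daily
+compaction.timebased.max.time.ago=3d
+compaction.timebased.min.time.ago=1d
+# Input is already deduplicated by the hourly compaction and the output should stay deduplicated,
+# so this job only consolidates files (type 2)
+compaction.input.deduplicated=true
+compaction.output.deduplicated=true
+compaction.jobprops.creator.class=gobblin.compaction.mapreduce.MRCompactorTimeBasedJobPropCreator
+```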
+
+These two types of compaction need to be separated because of late data handling, which we will explain next.
+
+### Handling Late Records
+
+Late records are records that arrived at a folder after compaction on this folder has started. We explain how Gobblin handles late records using the following example.
+
+
+
+In this use case, both hourly compaction and daily compaction need a mechanism to handle late records. For hourly compaction, late records are records that arrived at an `hourly_staging` folder after the hourly compaction of that folder has started. It is similar for daily compaction.
+
+**Compaction with Deduplication**
+
+For a compaction with deduplication (i.e., hourly compaction in the above use case), there are two options to deal with late data:
+- **Option 1**: if there are late data, re-do the compaction. For example, you may run the hourly compaction multiple times per hour. The first run will do the normal compaction, and in each subsequent run, if it detects late data in a folder, it will re-do compaction for that folder.
+
+To do so, set `compaction.job.overwrite.output.dir=true` and `compaction.recompact.from.input.for.late.data=true`.
+
+Please note the following when you use this option: (1) this means that your already-published data will be re-published if late data are detected; (2) this is potentially dangerous if your input folders have short retention periods. For example, suppose `hourly_staging` folders have a 2-day retention period, i.e., folder `/data/kafka_topics/NewUserEvent/hourly_staging/2015/10/29` will be deleted on 2015/10/31. If, after 2015/10/31, new data arrived at this folder and you re-compact this folder and publish the data to `hourly`, all original data will be gone. To avoid this problem you may set `compaction.timebased.max.time.ago=2d` so that compaction will not be performed on a folder more than 2 days ago. However, this means that if a late record is late for more than 2 days, it will never be published into `hourly`.
+
+- **Option 2**: (this is the default option) if there are late data, copy the late data into a `[output_subdir]/_late` folder, e.g., for hourly compaction, late data in `hourly_staging` will be copied to `hourly_late` folders, e.g., `/data/kafka_topics/NewUserEvent/hourly_late/2015/10/29...`. 
+
+If re-compaction is not necessary, this is all you need to do. If re-compaction is needed, you may schedule or manually invoke a re-compaction job which will re-compact by consuming data in both `hourly` and `hourly_late`. For this job, you need to set `compaction.job.overwrite.output.dir=true` and `compaction.recompact.from.dest.paths=true`.
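+
+As a sketch, a separately scheduled re-compaction job for the hourly example could reuse the hourly config shown earlier and additionally set:
+
+```
+# Consume the already-published hourly output plus the hourly_late data and overwrite the output
+compaction.recompact.from.dest.paths=true
+compaction.job.overwrite.output.dir=true
+```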
+
+Note that this re-compaction is different from the re-compaction in Option 1: this re-compaction consumes data in output folders (i.e., `hourly`) whereas the re-compaction in Option 1 consumes data in input folders (i.e., `hourly_staging`).
+
+**Compaction without Deduplication**
+
+For a compaction without deduplication, if it is type 2, the same two options above apply. If it is type 1, late data will simply be copied to the output folder.
+
+**How to Determine if a Data File is Late**
+
+Every time a compaction finishes (except in the case below), Gobblin will create a file named `_COMPACTION_COMPLETE` in the compaction output folder. This file contains the timestamp of when the compaction job started. All files in the input folder with earlier modification timestamps have been compacted. The next time the compaction runs, files in the input folder with later timestamps are considered late data.
+
+The `_COMPACTION_COMPLETE` file will only be created if it is a regular compaction that consumes input data (including compaction jobs that just copy late data to the output folder or the `[output_subdir]/_late` folder without launching an MR job). It will not be created if it is a re-compaction that consumes output data. This is because whether a file in the input folder is a late file depends on whether it has been compacted or moved into the output folder, which is not affected by a re-compaction that consumes output data.
+
+One way of reducing the chance of seeing late records is to verify data completeness before running compaction, which will be explained next.
+
+### Verifying Data Completeness Before Compaction
+
+Besides aborting the compaction job for a dataset if new data in the input folder is found, another way to reduce the chance of seeing late events is to verify the completeness of input data before running compaction. To do so, set `compaction.completeness.verification.enabled=true`, extend `DataCompletenessVerifier.AbstractRunner` and put in your verification logic, and pass it via `compaction.completeness.verification.class`.
+
+When data completeness verification is enabled, `MRCompactor` will verify data completeness for the input datasets, and meanwhile speculatively start the compaction MR jobs. When the compaction MR job for a dataset finishes, if the completeness of the dataset is verified, its compacted data will be published, otherwise it is discarded, and the compaction MR job for this dataset will be launched again with a reduced priority.
+
+It is possible to control which topics should or should not be verified via `compaction.completeness.verification.whitelist` and `compaction.completeness.verification.blacklist`. It is also possible to set a timeout for data completeness verification via `compaction.completeness.verification.timeout.minutes`. A dataset whose completeness verification timed out can be configured to be either compacted anyway or not compacted.
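+
+A sketch of the relevant properties is shown below; the verifier class name and topic whitelist are hypothetical:
+
+```
+compaction.completeness.verification.enabled=true
+# Hypothetical verifier class extending DataCompletenessVerifier.AbstractRunner
+compaction.completeness.verification.class=com.mycompany.compaction.MyCompletenessVerifier
+# Only verify these topics (regex supported); hypothetical topic name
+compaction.completeness.verification.whitelist=NewUserEvent.*
+# Give up verification for a dataset after 30 minutes
+compaction.completeness.verification.timeout.minutes=30
+```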
+
+## Hive Compactor
+
+The Hive compactor can be used to merge a snapshot with one or multiple deltas. It assumes the snapshot and the deltas meet the following requirements:
+
+1. Snapshot and all deltas are in Avro format.
+2. Snapshot and all deltas have the same primary key attributes (they do not need to have the same schema).
+3. Snapshot is pulled earlier than all deltas. Therefore if a key appears in both snapshot and deltas, the one in the snapshot should be discarded.
+4. The deltas are pulled one after another, and ordered in ascending order of pull time. If a key appears in both the ith delta and the jth delta (i < j), the one in the jth delta survives.
+
+In the near future we also plan to support selecting records by timestamps (rather than by which file they appear in). This is useful if the snapshot and the deltas are pulled in parallel, where if a key has multiple occurrences we should keep the one with the latest timestamp.
+
+Note that since delta tables don't have information about deleted records, such information is only available the next time the full snapshot is pulled.
+
+### Usage
+
+After building Gobblin (i.e., `./gradlew clean build`), a zipped file `build/gobblin-compaction/distributions/gobblin-compaction.tar.gz` should be created. It contains a jar file (`gobblin-compaction.jar`), a folder of dependencies (`gobblin-compaction_lib`), and a log4j config file (`log4j.xml`).
+
+To run compaction, extract it into a folder, go to that folder and run 
+
+`java -jar gobblin-compaction.jar <global-config-file>`
+
+If for whatever reason (e.g., your Hadoop cluster is in secure mode) you need to run the jar using Hadoop or Yarn, then you first need to make sure the correct log4j config file is used, since there is another log4j config file in the Hadoop classpath. To do so, run the following two commands:
+
+```
+export HADOOP_CLASSPATH=.
+export HADOOP_USER_CLASSPATH_FIRST=true
+```
+
+The first command adds the current directory to the Hadoop classpath, and the second command tells Hadoop/Yarn to prioritize user's classpath. Then you can run the compaction jar:
+
+`hadoop jar gobblin-compaction.jar <global-config-file>`
+
+or
+
+`yarn jar gobblin-compaction.jar <global-config-file>`
+
+The merged data will be written to the HDFS directory specified in `output.datalocation`, as one or more Avro files. The schema of the output data will be the same as the schema of the last delta (which is the last pulled data and thus has the latest schema).
+
+The provided log4j config file (`log4j.xml`) prints logs from Gobblin compaction classes to the console, and writes logs from other classes (e.g., Hive classes) to logs/gobblin-compaction.log. Note that for drop table queries (`DROP TABLE IF EXISTS <tablename>`), the Hive JDBC client will throw `NoSuchObjectException` if the table doesn't exist. This is normal and such exceptions should be ignored.
+
+#### Global Config Properties (example: compaction.properties)
+
+(1) Required:
+- _**compaction.config.dir**_
+
+This is the compaction jobconfig directory. Each file in this directory should be a jobconfig file (described in the next section).
+
+(2) Optional:
+
+- _**hadoop.configfile.***_
+
+Hadoop configuration files that should be loaded
+(e.g., hadoop.configfile.coresite.xml=/export/apps/hadoop/latest/etc/hadoop/core-site.xml)
+
+- _**hdfs.uri**_
+
+If property `fs.defaultFS` (or `fs.default.name`) is specified in the hadoop config file, then this property is not needed. However, if it is specified, it will override `fs.defaultFS` (or `fs.default.name`).
+
+If `fs.defaultFS` or `fs.default.name` is not specified in the hadoop config file, and this property is also not specified, then the default value "hdfs://localhost:9000" will be used.
+
+- _**hiveserver.version**_ (default: 2)
+
+Either 1 or 2.
+
+- _**hiveserver.connection.string**_
+
+- _**hiveserver.url**_
+
+- _**hiveserver.user**_ (default: "")
+
+- _**hiveserver.password**_ (default: "")
+
+If `hiveserver.connection.string` is specified, it will be used to connect to hiveserver.
+
+If `hiveserver.connection.string` is not specified but `hiveserver.url` is specified, then it uses (`hiveserver.url`, `hiveserver.user`, `hiveserver.password`) to connect to hiveserver.
+
+If neither `hiveserver.connection.string` nor `hiveserver.url` is specified, then embedded hiveserver will be used (i.e., `jdbc:hive://` if `hiveserver.version=1`, `jdbc:hive2://` if `hiveserver.version=2`)
+
+- _**hivesite.dir**_
+
+Directory that contains hive-site.xml, if hive-site.xml should be loaded.
+
+- _**hive.***_
+
+Any hive config property. (e.g., `hive.join.cache.size`). If specified, it will override the corresponding property in hive-site.xml.
+
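+Putting these together, a minimal global config file (e.g., compaction.properties) might look like the sketch below; the paths and the HiveServer URL are hypothetical:
+
+```
+# Directory containing the per-job config files described in the next section
+compaction.config.dir=/home/gobblin/compaction/jobconf
+# Hypothetical Hadoop config file and HDFS URI
+hadoop.configfile.coresite.xml=/etc/hadoop/conf/core-site.xml
+hdfs.uri=hdfs://namenode.example.com:9000
+# Connect to a HiveServer2 instance (hypothetical URL); omit to use the embedded hiveserver
+hiveserver.version=2
+hiveserver.url=jdbc:hive2://hiveserver.example.com:10000
+hiveserver.user=gobblin
+hiveserver.password=
+```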
+
+#### Job Config Properties (example: jobconf/task1.conf)
+
+(1) Required:
+
+- _**snapshot.pkey**_
+
+comma separated primary key attributes of the snapshot table
+
+- _**snapshot.datalocation**_
+
+snapshot data directory in HDFS
+
+- _**delta.i.pkey**_ (i = 1, 2...)
+
+the primary key of ith delta table
+(the primary key of snapshot and all deltas should be the same)
+
+- _**delta.i.datalocation**_ (i = 1, 2...)
+
+ith delta table's data directory in HDFS
+
+- _**output.datalocation**_
+
+the HDFS data directory for the output
+(make sure you have write permission on this directory)
+
+(2) Optional:
+
+- _**snapshot.name**_ (default: randomly generated name)
+
+prefix name of the snapshot table. The table name will be snapshot.name + random suffix
+
+- _**snapshot.schemalocation**_
+
+snapshot table's schema location in HDFS. If not specified, schema will be extracted from the data.
+
+- _**delta.i.name**_ (default: randomly generated name)
+
+prefix name of the ith delta table. The table name will be delta.i.name + random suffix
+
+- _**delta.i.schemalocation**_
+
+ith delta table's schema location in HDFS. If not specified, schema will be extracted from the data.
+
+- _**output.name**_ (default: randomly generated name)
+
+prefix name of the output table. The table name will be output.name + random suffix
+
+- _**hive.db.name**_ (default: default)
+
+the database name to be used. This database should already exist, and you should have write permission on it.
+
+- _**hive.queue.name**_ (default: default)
+
+queue name to be used.
+
+- _**hive.use.mapjoin**_ (default: if not specified in the global config file, then false)
+
+whether map-side join should be turned on. If specified both in this property and in the global config file (hive.*), this property takes precedence.
+
+- _**hive.mapjoin.smalltable.filesize**_ (default: if not specified in the global config file, then use Hive's default value)
+
+If hive.use.mapjoin = true, mapjoin will be used if the small table size is smaller than hive.mapjoin.smalltable.filesize (in bytes).
+If specified both in this property and in the global config file (hive.*), this property takes precedence.
+
+- _**hive.tmpschema.dir**_ (default: the parent dir of the data location dir where the data is used to extract the schema)
+
+If we need to extract schema from data, this dir is for the extracted schema.
+Note that if you do not have write permission on the default dir, you must specify this property as a dir where you do have write permission.
+
+- _**snapshot.copydata**_ (default: false)
+
+Set to true if you don't want to (or are unable to) create an external table on snapshot.datalocation. A copy of the snapshot data will be created in `hive.tmpdata.dir`, and will be removed after the compaction.
+
+This property should be set to true if either of the following two situations applies:
+
+(i) You don't have write permission to `snapshot.datalocation`. If so, once you create an external table on `snapshot.datalocation`, you may not be able to drop it. This is a Hive bug and for more information, see [this page](https://issues.apache.org/jira/browse/HIVE-9020), which includes a Hive patch for the bug.
+
+(ii) You want to use a certain subset of files in `snapshot.datalocation` (e.g., `snapshot.datalocation` contains both .csv and .avro files but you only want to use .avro files)
+
+- _**delta.i.copydata**_ (i = 1, 2...) (default: false)
+
+Similar to `snapshot.copydata`.
+
+- _**hive.tmpdata.dir**_ (default: "/")
+
+If `snapshot.copydata` = true or `delta.i.copydata` = true, the data will be copied to this dir. You should have write permission to this dir.
+
+- _**snapshot.dataformat.extension.name**_ (default: "")
+
+If `snapshot.copydata` = true, then only those data files whose extension is `snapshot.dataformat.extension.name` will be moved to `hive.tmpdata.dir`.
+
+- _**delta.i.dataformat.extension.name**_ (default: "")
+
+Similar to `snapshot.dataformat.extension.name`.
+
+- _**mapreduce.job.num.reducers**_
+
+Number of reducers for the job.
+
+- _**timing.file**_ (default: time.txt)
+
+A file where the running time of each compaction job is printed.
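+
+For illustration, a minimal sketch of a job config file (e.g., jobconf/task1.conf) using mostly the required properties; the paths and the primary key name are hypothetical:
+
+```
+# Comma-separated primary key of the snapshot and all deltas (hypothetical field name)
+snapshot.pkey=member_id
+snapshot.datalocation=/data/snapshots/member/2015/10/01
+delta.1.pkey=member_id
+delta.1.datalocation=/data/deltas/member/2015/10/02
+delta.2.pkey=member_id
+delta.2.datalocation=/data/deltas/member/2015/10/03
+# Output directory; you must have write permission here
+output.datalocation=/data/compacted/member
+```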
\ No newline at end of file
diff --git a/Configuration-Properties-Glossary.md b/Configuration-Properties-Glossary.md
new file mode 100644
index 0000000..e847011
--- /dev/null
+++ b/Configuration-Properties-Glossary.md
@@ -0,0 +1,1057 @@
+Configuration properties are key/value pairs that are set in text files. They include system properties that control how Gobblin will pull data, and control what source Gobblin will pull the data from. Configuration files end in some user-specified suffix (by default, text files ending in `.pull` or `.job` are recognized as config files, although this is configurable). Each file represents some unit of work that needs to be done in Gobblin. For example, there will typically be a separate configuration file for each table that needs to be pulled from a database.  
+  
+The first section of this document contains all the required properties needed to run a basic Gobblin job. The rest of the document is dedicated to other properties that can be used to configure Gobblin jobs. The description of each configuration parameter will often refer to core Gobblin concepts and terms. If any of these terms are confusing, check out the [Gobblin Architecture](https://github.com/linkedin/gobblin/wiki/Gobblin-Architecture) page for a more detailed explanation of how Gobblin works. The GitHub repo also contains sample config files for specific sources. For example, there are sample config files to connect to MySQL databases and [SFTP servers](https://github.com/linkedin/gobblin/tree/master/source/src/main/resources).  
+
+Gobblin also allows you to specify a global configuration file that contains common properties that are shared across all jobs. The [Job Launcher Properties](#Job-Launcher-Properties) section has more information on how to specify a global properties file.  
+
+# Table of Contents
+* [Properties File Format](#Properties-File-Format)
+* [Creating a Basic Properties File](#Creating-a-Basic-Properties-File)   
+* [Job Launcher Properties](#Job-Launcher-Properties)  
+  * [Common Job Launcher Properties](#Common-Launcher-Properties)  
+  * [SchedulerDaemon Properties](#SchedulerDaemon-Properties)  
+  * [CliMRJobLauncher Properties](#CliMRJobLauncher-Properties)  
+  * [AzkabanJobLauncher Properties](#AzkabanJobLauncher-Properties)  
+* [Job Type Properties](#Job-Type-Properties)  
+  * [Common Job Type Properties](#Common-Job-Type-Properties)  
+  * [LocalJobLauncher Properties](#LocalJobLauncher-Properties)  
+  * [MRJobLauncher Properties](#MRJobLauncher-Properties)  
+* [Task Execution Properties](#Task-Execution-Properties)  
+* [State Store Properties](#State-Store-Properties)  
+* [Metrics Properties](#Metrics-Properties)  
+* [Email Alert Properties](#Email-Alert-Properties)  
+* [Source Properties](#Source-Properties)  
+  * [Common Source Properties](#Common-Source-Properties)  
+  * [QueryBasedExtractor Properties](#QueryBasedExtractor-Properties) 
+    * [JdbcExtractor Properties](#JdbcExtractor-Properties)  
+  * [FileBasedExtractor Properties](#FileBasedExtractor-Properties)  
+    * [SftpExtractor Properties](#SftpExtractor-Properties)  
+* [Converter Properties](#Converter-Properties)
+  * [CsvToJsonConverter Properties](#CsvToJsonConverter-Properties)    
+  * [JsonIntermediateToAvroConverter Properties](#JsonIntermediateToAvroConverter-Properties)  
+  * [AvroFilterConverter Properties](#AvroFilterConverter-Properties)  
+  * [AvroFieldRetrieverConverter Properties](#AvroFieldRetrieverConverter-Properties)  
+* [Quality Checker Properties](#Quality-Checker-Properties)  
+* [Writer Properties](#Writer-Properties)  
+* [Data Publisher Properties](#Data-Publisher-Properties)  
+* [Generic Properties](#Generic-Properties)  
+
+# Properties File Format <a name="Properties-File-Format"></a>
+
+Configuration properties files follow the [Java Properties text file format](http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load(java.io.Reader)). Further, file includes and variable expansion/interpolation as defined in [Apache Commons Configuration](http://commons.apache.org/proper/commons-configuration/userguide_v1.10/user_guide.html) are also supported.
+
+Example:
+
+* common.properties
+
+```
+    writer.staging.dir=/path/to/staging/dir/
+    writer.output.dir=/path/to/output/dir/
+```
+* my-job.properties
+
+```    
+    include=common.properties
+    
+    job.name=MyFirstJob
+```
+
+# Creating a Basic Properties File <a name="Creating-a-Basic-Properties-File"></a>
+In order to create a basic properties file, there is a small set of required properties that need to be set. The following properties are required to run any Gobblin job:
+* `job.name` - Name of the job  
+* `source.class` - Fully qualified path to the Source class responsible for connecting to the data source  
+* `writer.staging.dir` - The directory each task will write staging data to  
+* `writer.output.dir` - The directory each task will commit data to  
+* `data.publisher.final.dir` - The final directory where all the data will be published
+* `state.store.dir` - The directory where state-store files will be written  
+
+For more information on each property, check out the comprehensive list below.  
+
+If only these properties are set, then by default, Gobblin will run in Local mode, as opposed to running on Hadoop M/R. This means Gobblin will write Avro data to the local filesystem. In order to write to HDFS, set the `writer.fs.uri` property to the URI of the HDFS NameNode that data should be written to. Since the default version of Gobblin writes data in Avro format, the writer expects Avro records to be passed to it. Thus, any data pulled from an external source must be converted to Avro before it can be written out to the filesystem.  
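+
+As an illustration, a minimal job file containing the required properties (plus `writer.fs.uri` for writing to HDFS) might look like the sketch below; the Source class and all paths are hypothetical:
+
+```
+job.name=MyFirstJob
+# Hypothetical Source implementation; point this at your own Source class
+source.class=com.mycompany.gobblin.MySource
+# Hypothetical directories for staging, output, published data, and the state store
+writer.staging.dir=/gobblin/task-staging/
+writer.output.dir=/gobblin/task-output/
+data.publisher.final.dir=/gobblin/job-output/
+state.store.dir=/gobblin/state-store/
+# Optional: write to HDFS instead of the local filesystem (hypothetical NameNode)
+writer.fs.uri=hdfs://namenode.example.com:8020
+```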
+
+The `source.class` property is one of the most important properties in Gobblin. It specifies what Source class to use. The Source class is responsible for determining what work needs to be done during each run of the job, and specifies what Extractor to use in order to read over each sub-unit of data. Examples of Source classes are [WikipediaSource](https://github.com/linkedin/gobblin/blob/master/example/src/main/java/com/linkedin/uif/example/wikipedia/WikipediaSource.java) and [SimpleJsonSource](https://github.com/linkedin/gobblin/blob/master/example/src/main/java/com/linkedin/uif/example/simplejson/SimpleJsonSource.java), which can be found in the GitHub repository. For more information on Sources and Extractors, check out the [Architecture](Gobblin-Architecture) page.  
+
+Typically, Gobblin jobs will be launched using the launch scripts in the `bin` folder. These scripts allow jobs to be launched on the local machine (e.g. SchedulerDaemon) or on Hadoop (e.g. CliMRJobLauncher). Check out the Job Launcher section below, to see the configuration difference between each launch mode. The [Deployment](Gobblin Deployment) page also has more information on the different ways a job can be launched.  
+
+# Job Launcher Properties <a name="Job-Launcher-Properties"></a>
+Gobblin jobs can be launched and scheduled in a variety of ways. They can be scheduled via a Quartz scheduler or through [Azkaban](https://github.com/azkaban/azkaban). Jobs can also be run without a scheduler via the Command Line. For more information on launching Gobblin jobs, check out the [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) page.
+## Common Job Launcher Properties <a name="Common-Launcher-Properties"></a>
+These properties are common to both the Job Launcher and the Command Line.
+#### job.name 
+###### Description
+The name of the job to run. This name must be unique within a single Gobblin instance.
+###### Default Value
+None
+###### Required
+Yes
+#### job.group 
+###### Description
+A way to group logically similar jobs together.
+###### Default Value
+None
+###### Required
+No
+#### job.description 
+###### Description
+A description of what the job does.
+###### Default Value
+None
+###### Required
+No
+#### job.lock.dir
+###### Description
+Directory where job locks are stored. Job locks are used by the scheduler to ensure two executions of a job do not run at the same time. If a job is scheduled to run, Gobblin will first check this directory to see if there is a lock file for the job. If a lock file exists, the job will not run; if there isn't one, the job will run.
+###### Default Value
+None
+###### Required
+No
+#### job.lock.enabled
+###### Description
+If set to true, job locks are enabled; if set to false, they are disabled.
+###### Default Value
+True
+###### Required
+No
+#### job.runonce 
+###### Description
+A boolean specifying whether the job will run only once, or multiple times. If set to true, the job will only run once even if a job.schedule is specified. If set to false and a job.schedule is specified, it will run according to the schedule. If set to false and a job.schedule is not specified, it will run only once.
+###### Default Value
+False 
+###### Required
+No
+#### job.disabled 
+###### Description
+Whether the job is disabled or not. If set to true, then Gobblin will not run this job.
+###### Default Value
+False
+###### Required
+No
+## SchedulerDaemon Properties <a name="SchedulerDaemon-Properties"></a>
+This class is used to schedule Gobblin jobs on Quartz. The job can be launched via the command line, and takes in the location of a global configuration file as a parameter. This configuration file should have the property `jobconf.dir` in order to specify the location of all the `.job` or `.pull` files. Another core difference is that the global configuration file for the SchedulerDaemon must specify the following properties:
+
+* `writer.staging.dir`  
+* `writer.output.dir`  
+* `data.publisher.final.dir`  
+* `state.store.dir`  
+
+They should not be set in individual job files, as they are system-level parameters.
+For more information on how to set the configuration parameters for jobs launched through the SchedulerDaemon, check out the [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) page.
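+
+For example, a global configuration file for the SchedulerDaemon might look like the sketch below; all paths are hypothetical:
+
+```
+# Directory scanned for .pull/.job files
+jobconf.dir=/gobblin/job-conf
+# System-level directories shared by all jobs
+writer.staging.dir=/gobblin/task-staging/
+writer.output.dir=/gobblin/task-output/
+data.publisher.final.dir=/gobblin/job-output/
+state.store.dir=/gobblin/state-store/
+```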
+#### job.schedule 
+###### Description
+Cron-Based job schedule. This schedule only applies to jobs that run using Quartz.
+###### Default Value
+None
+###### Required
+No
+#### jobconf.dir
+###### Description
+When running in local mode, Gobblin will check this directory for any configuration files. Each configuration file should correspond to a separate Gobblin job, and each one should end in a suffix specified by the jobconf.extensions parameter.
+###### Default Value
+None
+###### Required
+No
+#### jobconf.extensions
+###### Description
+Comma-separated list of supported job configuration file extensions. When running in local mode, Gobblin will only pick up job files ending in these suffixes.
+###### Default Value
+pull,job
+###### Required
+No
+#### jobconf.monitor.interval
+###### Description
+Controls how often Gobblin checks the jobconf.dir for new configuration files, or for configuration file updates. The parameter is measured in milliseconds.
+###### Default Value
+300000
+###### Required
+No
+## CliMRJobLauncher Properties <a name="CliMRJobLauncher-Properties"></a>
+There are no configuration parameters specific to CliMRJobLauncher. This class is used to launch Gobblin jobs on Hadoop from the command line; the jobs are not scheduled. Common properties are set using the `--sysconfig` option when launching jobs via the command line. For more information on how to set the configuration parameters for jobs launched through the command line, check out the [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) page.  
+## AzkabanJobLauncher Properties <a name="AzkabanJobLauncher-Properties"></a>
+There are no configuration parameters specific to AzkabanJobLauncher. This class is used to schedule Gobblin jobs on Azkaban. Common properties can be set through Azkaban by creating a `.properties` file; check out the [Azkaban Documentation](http://azkaban.github.io/) for more information. For more information on how to set the configuration parameters for jobs scheduled through Azkaban, check out the [Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) page.
+# Job Type Properties <a name="Job-Type-Properties"></a>
+## Common Job Type Properties <a name="Common-Job-Type-Properties"></a>
+#### launcher.type 
+###### Description
+Job launcher type; one of LOCAL, MAPREDUCE, YARN. LOCAL mode runs on a single machine (LocalJobLauncher), MAPREDUCE runs on a Hadoop cluster (MRJobLauncher), and YARN runs on a YARN cluster (not implemented yet).
+###### Default Value
+LOCAL 
+###### Required
+No
+## LocalJobLauncher Properties <a name="LocalJobLauncher-Properties"></a>
+There are no configuration parameters specific to LocalJobLauncher. The LocalJobLauncher will launch a Gobblin job on a single machine. If launcher.type is set to LOCAL, then this class will be used to launch the job.
+The following properties are required by the MRJobLauncher class.
+#### framework.jars 
+###### Description
+Comma-separated list of jars the Gobblin framework depends on. These jars will be added to the classpath of the job, and to the classpath of any containers the job launches.
+###### Default Value
+None
+###### Required
+No
+#### job.jars 
+###### Description
+Comma-separated list of jar files the job depends on. These jars will be added to the classpath of the job, and to the classpath of any containers the job launches.
+###### Default Value
+None
+###### Required
+No
+#### job.local.files 
+###### Description
+Comma-separated list of local files the job depends on. These files will be available to any map tasks that get launched via the DistributedCache.
+###### Default Value
+None
+###### Required
+No
+#### job.hdfs.files 
+###### Description
+Comma-separated list of files on HDFS the job depends on. These files will be available to any map tasks that get launched via the DistributedCache.
+###### Default Value
+None
+###### Required
+No
+## MRJobLauncher Properties <a name="MRJobLauncher-Properties"></a>
+#### mr.job.root.dir 
+###### Description
+Working directory for a Gobblin Hadoop MR job. Gobblin uses this to write intermediate data, such as the workunit state files that are used by each map task. This has to be a path on HDFS.
+###### Default Value
+None
+###### Required
+Yes
+#### mr.job.max.mappers 
+###### Description
+Maximum number of mappers to use in a Gobblin Hadoop MR job. If no explicit limit is set, then a map task for each workunit will be launched. If the value of this property is less than the number of workunits created, then each map task will run multiple tasks.
+###### Default Value
+None
+###### Required
+No
+#### mr.include.task.counters 
+###### Description
+Whether to include task-level counters in the set of counters reported as Hadoop counters. Hadoop imposes a system-level limit (which defaults to 120) on the number of counters, so a Gobblin MR job may easily go beyond that limit if the job has a large number of tasks and each task has a few counters. This property gives users an option to not include task-level counters to avoid going over that limit.
+###### Default Value
+False
+###### Required
+No
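+
+For illustration, the MR-related portion of a sysconfig file for Hadoop deployments might look like the sketch below; the HDFS path is hypothetical:
+
+```
+launcher.type=MAPREDUCE
+# Working directory on HDFS for intermediate job data (hypothetical path)
+mr.job.root.dir=/gobblin/working
+# Optional cap on the number of map tasks
+mr.job.max.mappers=100
+```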
+# Retry Properties <a name="Retry-Properties"></a>
+Properties that control how tasks and jobs get retried on failure.
+#### workunit.retry.enabled 
+###### Description
+Whether retries of failed work units across job runs are enabled or not.
+###### Default Value
+True 
+###### Required
+No
+#### workunit.retry.policy 
+###### Description
+Work unit retry policy, can be one of {always, never, onfull, onpartial}.
+###### Default Value
+always 
+###### Required
+No
+#### task.maxretries 
+###### Description
+Maximum number of task retries. A task will be re-tried this many times before it is considered a failure.
+###### Default Value
+5 
+###### Required
+No
+#### task.retry.intervalinsec 
+###### Description
+Interval in seconds between task retries. The interval increases linearly with each retry. For example, if the first interval is 300 seconds, then the second one is 600 seconds, etc.
+###### Default Value
+300 
+###### Required
+No
+#### job.max.failures 
+###### Description
+Maximum number of failures before an alert email is triggered.
+###### Default Value
+1 
+###### Required
+No
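+For example, a job that retries failed work units across runs and retries each task up to three times might set (illustrative values):
+```
+workunit.retry.enabled=true
+workunit.retry.policy=onpartial
+task.maxretries=3
+task.retry.intervalinsec=300
+```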
+# Task Execution Properties <a name="Task-Execution-Properties"></a>
+These properties control how tasks get executed for a job. Gobblin uses thread pools in order to execute the tasks for a specific job. In local mode there is a single thread pool per job that executes all the tasks for a job. In MR mode there is a thread pool for each map task (or container), and all Gobblin tasks assigned to that mapper are executed in that thread pool.
+#### taskexecutor.threadpool.size 
+###### Description
+Size of the thread pool used by the task executor for task execution. Each task executor will spawn this many threads to execute any Tasks it has been allocated.
+###### Default Value
+10 
+###### Required
+No
+#### tasktracker.threadpool.coresize 
+###### Description
+Core size of the thread pool used by task tracker for task state tracking and reporting.
+###### Default Value
+10 
+###### Required
+No
+#### tasktracker.threadpool.maxsize 
+###### Description
+Maximum size of the thread pool used by task tracker for task state tracking and reporting.
+###### Default Value
+10 
+###### Required
+No
+#### taskretry.threadpool.coresize 
+###### Description
+Core size of the thread pool used by the task executor for task retries.
+###### Default Value
+2 
+###### Required
+No
+#### taskretry.threadpool.maxsize 
+###### Description
+Maximum size of the thread pool used by the task executor for task retries.
+###### Default Value
+2 
+###### Required
+No
+#### task.status.reportintervalinms 
+###### Description
+Task status reporting interval in milliseconds.
+###### Default Value
+30000 
+###### Required
+No
+# State Store Properties <a name="State-Store-Properties"></a>
+#### state.store.dir
+###### Description
+Root directory where job and task state files are stored. The state-store is used by Gobblin to track state between different executions of a job. All state-store files will be written to this directory.
+###### Default Value
+None
+###### Required
+Yes
+#### state.store.fs.uri
+###### Description
+File system URI for file-system-based state stores.
+###### Default Value
+file:///
+###### Required
+No
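+A typical HDFS-backed state store configuration might look like this (URI and path are placeholders):
+```
+state.store.fs.uri=hdfs://namenode:8020
+state.store.dir=/gobblin/state-store
+```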
+# Metrics Properties <a name="Metrics-Properties"></a>
+#### metrics.enabled
+###### Description
+Whether metrics collecting and reporting are enabled or not.
+###### Default Value
+True
+###### Required
+No
+#### metrics.report.interval
+###### Description
+Metrics reporting interval in milliseconds.
+###### Default Value
+60000
+###### Required
+No
+#### metrics.log.dir
+###### Description
+The directory where metric files will be written to.
+###### Default Value
+None
+###### Required
+No
+#### metrics.reporting.file.enabled
+###### Description
+A boolean indicating whether or not metrics should be reported to a file.
+###### Default Value
+True
+###### Required
+No
+#### metrics.reporting.jmx.enabled
+###### Description
+A boolean indicating whether or not metrics should be exposed via JMX.
+###### Default Value
+False
+###### Required
+No
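+For example, to report metrics to files every 30 seconds and also expose them via JMX (illustrative values):
+```
+metrics.enabled=true
+metrics.report.interval=30000
+metrics.log.dir=/gobblin/metrics
+metrics.reporting.file.enabled=true
+metrics.reporting.jmx.enabled=true
+```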
+# Email Alert Properties <a name="Email-Alert-Properties"></a>
+#### email.alert.enabled 
+###### Description
+Whether alert emails are enabled or not. Email alerts are only sent out when a job has failed consecutively job.max.failures times.
+###### Default Value
+False 
+###### Required
+No
+#### email.notification.enabled 
+###### Description
+Whether job completion notification emails are enabled or not. Notification emails are sent whenever the job completes, regardless of whether it failed or not.
+###### Default Value
+False 
+###### Required
+No
+#### email.host 
+###### Description
+Host name of the email server.
+###### Default Value
+None
+###### Required
+Yes, if email notifications or alerts are enabled.
+#### email.smtp.port 
+###### Description
+SMTP port number.
+###### Default Value
+None
+###### Required
+Yes, if email notifications or alerts are enabled.
+#### email.user 
+###### Description
+User name of the sender email account.
+###### Default Value
+None
+###### Required
+No
+#### email.password 
+###### Description
+User password of the sender email account.
+###### Default Value
+None
+###### Required
+No
+#### email.from 
+###### Description
+Sender email address.
+###### Default Value
+None
+###### Required
+Yes, if email notifications or alerts are enabled.
+#### email.tos 
+###### Description
+Comma-separated list of recipient email addresses.
+###### Default Value
+None
+###### Required
+Yes, if email notifications or alerts are enabled.
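+A sketch of an email alert configuration (host, port, and addresses are placeholders):
+```
+email.alert.enabled=true
+email.host=smtp.example.com
+email.smtp.port=587
+email.from=gobblin-alerts@example.com
+email.tos=data-team@example.com,oncall@example.com
+```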
+# Source Properties <a name="Source-Properties"></a>
+## Common Source Properties <a name="Common-Source-Properties"></a>
+These properties are common properties that are used among different Source implementations. Depending on what source class is being used, these parameters may or may not be necessary. These parameters are not tied to a specific source, and thus can be used in new source classes.
+#### source.class 
+###### Description
+Fully qualified name of the Source class, e.g., `gobblin.example.wikipedia.WikipediaSource`.
+###### Default Value
+None
+###### Required
+Yes
+#### source.entity 
+###### Description
+Name of the source entity that needs to be pulled from the source. The parameter represents a logical grouping of data that needs to be pulled from the source. Often this logical grouping comes in the form of a database table, a source topic, etc. In many situations, such as when using the QueryBasedExtractor, it will be the name of the table that needs to be pulled from the source.
+###### Default Value
+None
+###### Required
+Required for QueryBasedExtractors, FileBasedExtractors.
+#### source.timezone 
+###### Description
+Timezone of the data being pulled in by the extractor. Examples include "PST" or "UTC".
+###### Default Value
+None
+###### Required
+Required for QueryBasedExtractors
+#### source.max.number.of.partitions 
+###### Description
+Maximum number of partitions to split this current run across. Only used by the QueryBasedSource and FileBasedSource.
+###### Default Value
+20 
+###### Required
+No
+#### source.skip.first.record 
+###### Description
+True if you want to skip the first record of each data partition. Only used by the FileBasedExtractor.
+###### Default Value
+False 
+###### Required
+No
+#### extract.namespace 
+###### Description
+Namespace for the extract data. The namespace will be included in the default file name of the outputted data.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.use.proxy.url 
+###### Description
+The URL of the proxy to connect to when connecting to the source. This parameter is only used for SFTP and REST sources.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.use.proxy.port 
+###### Description
+The port of the proxy to connect to when connecting to the source. This parameter is only used for SFTP and REST sources.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.username 
+###### Description
+The username to authenticate with the source. This parameter is only used for SFTP and JDBC sources.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.password 
+###### Description
+The password to use when authenticating with the source. This parameter is only used for JDBC sources.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.host 
+###### Description
+The name of the host to connect to.
+###### Default Value
+None
+###### Required
+Required for SftpExtractor, MySQLExtractor, and SQLServerExtractor.
+#### source.conn.rest.url 
+###### Description
+URL to connect to for REST requests. This parameter is only used for the Salesforce source.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.version 
+###### Description
+Version number of communication protocol. This parameter is only used for the Salesforce source.
+###### Default Value
+None
+###### Required
+No
+#### source.conn.timeout 
+###### Description
+The timeout set for connecting to the source in milliseconds.
+###### Default Value
+500000
+###### Required
+No
+#### source.conn.port
+###### Description
+The value of the port to connect to.
+###### Default Value
+None
+###### Required
+Required for SftpExtractor, MySQLExtractor, SqlServerExtractor.
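+As an illustration, a MySQL source connection might combine these properties roughly as follows (host, port, and credentials are placeholders):
+```
+source.conn.host=mysql.example.com
+source.conn.port=3306
+source.conn.username=gobblin
+source.conn.password=my-secret-password
+source.conn.timeout=500000
+```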
+#### extract.table.name 
+###### Description
+Table name to use in Hadoop, which can be different from the table name in the source.
+###### Default Value
+Source table name 
+###### Required
+No
+#### extract.is.full 
+###### Description
+True if this pull should treat the data as a full dump of the table from the source; false otherwise.
+###### Default Value
+False 
+###### Required
+No
+#### extract.delta.fields 
+###### Description
+List of columns that will be used as the delta field for the data.
+###### Default Value
+None
+###### Required
+No
+#### extract.primary.key.fields 
+###### Description
+List of columns that will be used as the primary key for the data.
+###### Default Value
+None
+###### Required
+No
+#### extract.pull.limit 
+###### Description
+This limits the number of records read by Gobblin. In Gobblin's extractor the readRecord() method is expected to return records until there are no more to pull, in which case it returns null. This parameter limits the number of times readRecord() is executed. This parameter is useful for pulling a limited sample of the source data for testing purposes.
+###### Default Value
+Unbounded
+###### Required
+No
+#### extract.full.run.time 
+###### Description
+
+###### Default Value
+
+###### Required
+
+## QueryBasedExtractor Properties <a name="QueryBasedExtractor-Properties"></a>
+The following table lists the query based extractor configuration properties.
+#### source.querybased.watermark.type 
+###### Description
+The format of the watermark that is used when extracting data from the source. Possible types are timestamp, date, hour, simple.
+###### Default Value
+timestamp 
+###### Required
+Yes
+#### source.querybased.start.value 
+###### Description
+Value for the watermark to start pulling data from, also the default watermark if the previous watermark cannot be found in the old task states.
+###### Default Value
+None
+###### Required
+Yes
+#### source.querybased.partition.interval 
+###### Description
+Number of hours to pull in each partition.
+###### Default Value
+1 
+###### Required
+No
+#### source.querybased.hour.column 
+###### Description
+Delta column with hour for hourly extracts (Ex: hour_sk)
+###### Default Value
+None
+###### Required
+No
+#### source.querybased.skip.high.watermark.calc 
+###### Description
+If true, skips the high watermark calculation in the source and uses the partition's upper range as the high watermark instead of getting it from the source.
+###### Default Value
+False 
+###### Required
+No
+#### source.querybased.query 
+###### Description
+The query that the extractor should execute to pull data.
+###### Default Value
+None
+###### Required
+No
+#### source.querybased.hourly.extract 
+###### Description
+True if hourly extract is required.
+###### Default Value
+False 
+###### Required
+No
+#### source.querybased.extract.type 
+###### Description
+"snapshot" for the incremental dimension pulls. "append_daily", "append_hourly" and "append_batch" for the append data append_batch for the data with sequence numbers as watermarks
+###### Default Value
+None
+###### Required
+No
+#### source.querybased.end.value 
+###### Description
+The high watermark which this entire job should pull up to. If this is not specified, all data from the table is pulled.
+###### Default Value
+None
+###### Required
+No
+#### source.querybased.append.max.watermark.limit 
+###### Description
+Maximum limit of the high watermark for append data, expressed as CURRENT_DATE - X or CURRENT_HOUR - X, where X >= 1.
+###### Default Value
+CURRENT_DATE for daily extract CURRENT_HOUR for hourly extract 
+###### Required
+No
+#### source.querybased.is.watermark.override 
+###### Description
+True if this pull should override previous watermark with start.value and end.value. False otherwise.
+###### Default Value
+False 
+###### Required
+No
+#### source.querybased.low.watermark.backup.secs 
+###### Description
+Number of seconds to subtract from the previous high watermark, in order to cover late data. For example, set to 3600 to cover 1 hour of late data.
+###### Default Value
+0 
+###### Required
+No
+#### source.querybased.schema 
+###### Description
+Database name
+###### Default Value
+None
+###### Required
+No
+#### source.querybased.is.specific.api.active 
+###### Description
+True if this pull needs to use source-specific APIs instead of standard protocols, e.g., the Salesforce bulk API instead of the REST API.
+###### Default Value
+False 
+###### Required
+No
+#### source.querybased.skip.count.calc
+###### Description
+A boolean, if true then the QueryBasedExtractor will skip the source count calculation.
+###### Default Value
+False 
+###### Required
+No
+#### source.querybased.fetch.size
+###### Description
+This parameter is currently only used by the JdbcExtractor. The extractor fetches this many records from the JDBC ResultSet at a time and then hands them to the rest of the Gobblin flow for processing.
+###### Default Value
+1000
+###### Required
+No
+#### source.querybased.is.metadata.column.check.enabled
+###### Description
+When a query is specified in the configuration file, a user may accidentally include a column name that does not exist on the source side. By default, this parameter is set to false, which means that if a column specified in the query does not exist in the source data set, Gobblin will just skip over that column. If it is set to true, Gobblin will check each configured column against the source data set, and the job will fail if the column does not exist.
+###### Default Value
+False
+###### Required
+No
+#### source.querybased.is.compression.enabled
+###### Description
+A boolean specifying whether or not compression should be enabled when pulling data from the source. This parameter is only used for MySQL sources. If set to true, MySQL will send compressed data back to Gobblin.
+###### Default Value
+False
+###### Required
+No
+#### source.querybased.jdbc.resultset.fetch.size
+###### Description
+The number of rows to pull through JDBC at a time. This is useful when the JDBC ResultSet is too big to fit into memory, so only "x" number of records will be fetched at a time.
+###### Default Value
+1000
+###### Required
+No
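+Taken together, an incremental query-based pull might be sketched as follows (schema, table, and watermark values are illustrative):
+```
+source.querybased.schema=sales_db
+source.entity=orders
+source.querybased.extract.type=snapshot
+source.querybased.watermark.type=timestamp
+source.querybased.start.value=20160101000000
+source.querybased.fetch.size=1000
+```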
+### JdbcExtractor Properties <a name="JdbcExtractor-Properties"></a>
+The following table lists the jdbc based extractor configuration properties.
+#### source.conn.driver
+###### Description
+The fully qualified path of the JDBC driver used to connect to the external source.
+###### Default Value
+None
+###### Required
+Yes
+#### source.column.name.case
+###### Description
+An enum specifying whether to convert column names to a specific case before performing a query. Possible values are TOUPPER or TOLOWER.
+###### Default Value
+NOCHANGE 
+###### Required
+No
+## FileBasedExtractor Properties <a name="FileBasedExtractor-Properties"></a>
+The following table lists the file based extractor configuration properties.
+#### source.filebased.data.directory 
+###### Description
+The data directory from which to pull data.
+###### Default Value
+None
+###### Required
+Yes
+#### source.filebased.files.to.pull 
+###### Description
+A list of files to pull - this should be set in the Source class and the extractor will pull the specified files.
+###### Default Value
+None
+###### Required
+Yes  
+#### filebased.report.status.on.count
+###### Description
+The FileBasedExtractor will report its status every time it processes the number of records specified by this parameter. Status is reported by logging how many records have been processed so far.
+###### Default Value
+10000
+###### Required
+No  
+#### source.filebased.fs.uri
+###### Description
+The URI of the filesystem to connect to.
+###### Default Value
+None
+###### Required
+Required for HadoopExtractor.
+#### source.filebased.preserve.file.name
+###### Description
+A boolean; if true, the original file names are preserved when the files are written out.
+###### Default Value
+False
+###### Required
+No
+#### source.schema
+###### Description
+The schema of the data that will be pulled by the source.
+###### Default Value
+None
+###### Required
+Yes
+### SftpExtractor Properties <a name="SftpExtractor-Properties"></a>
+#### source.conn.private.key 
+###### Description
+File location of the private key used for key based authentication. This parameter is only used for the SFTP source.
+###### Default Value
+None
+###### Required
+Yes
+#### source.conn.known.hosts 
+###### Description
+File location of the known hosts file used for key based authentication.
+###### Default Value
+None
+###### Required
+Yes
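+As an illustration, an SFTP pull might combine the common connection properties with the file-based and SFTP-specific ones roughly like this (host, paths, and key locations are placeholders):
+```
+source.conn.host=sftp.example.com
+source.conn.port=22
+source.conn.username=gobblin
+source.conn.private.key=/home/gobblin/.ssh/id_rsa
+source.conn.known.hosts=/home/gobblin/.ssh/known_hosts
+source.filebased.data.directory=/data/exports
+```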
+# Converter Properties <a name="Converter-Properties"></a>
+Properties for Gobblin converters.
+#### converter.classes 
+###### Description
+Comma-separated list of fully qualified names of the Converter classes. The order is important as the converters will be applied in this order.
+###### Default Value
+None
+###### Required
+No
+## CsvToJsonConverter Properties <a name="CsvToJsonConverter-Properties"></a>
+This converter takes in text data separated by a delimiter (converter.csv.to.json.delimiter), and splits the data into a JSON format recognized by JsonIntermediateToAvroConverter.
+#### converter.csv.to.json.delimiter
+###### Description
+The regex delimiter used to split fields in CSV data; only necessary when using the CsvToJsonConverter - e.g. ",", "\t" or some other regex
+###### Default Value
+None
+###### Required
+Yes
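+For example, a CSV-to-Avro pipeline might chain this converter with the JsonIntermediateToAvroConverter described below (the package names shown are illustrative; use the fully qualified names from your Gobblin build):
+```
+converter.classes=gobblin.converter.csv.CsvToJsonConverter,gobblin.converter.avro.JsonIntermediateToAvroConverter
+converter.csv.to.json.delimiter=,
+```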
+## JsonIntermediateToAvroConverter Properties <a name="JsonIntermediateToAvroConverter-Properties"></a>
+This converter takes in JSON data in a specific schema, and converts it to Avro data.
+#### converter.avro.date.format 
+###### Description
+Source format of the date columns for Avro-related converters.
+###### Default Value
+None
+###### Required
+No
+#### converter.avro.timestamp.format 
+###### Description
+Source format of the timestamp columns for Avro-related converters.
+###### Default Value
+None
+###### Required
+No
+#### converter.avro.time.format 
+###### Description
+Source format of the time columns for Avro-related converters.
+###### Default Value
+None
+###### Required
+No
+#### converter.avro.binary.charset 
+###### Description
+Charset of the binary (bytes) columns for Avro-related converters.
+###### Default Value
+UTF-8
+###### Required
+No
+#### converter.is.epoch.time.in.seconds
+###### Description
+A boolean specifying whether an epoch time field in the JSON object is in seconds.
+###### Default Value
+None
+###### Required
+Yes
+#### converter.avro.max.conversion.failures
+###### Description
+The converter will tolerate this many conversion failures before throwing an exception.
+###### Default Value
+0
+###### Required
+No
+## AvroFilterConverter Properties <a name="AvroFilterConverter-Properties"></a>
+This converter takes in an Avro record, and filters records by comparing the value of the field specified by converter.filter.field against the value specified in converter.filter.value. It returns the record unmodified if the values are equal, and filters the record out otherwise.
+#### converter.filter.field
+###### Description
+The name of the field in the Avro record that the converter will filter records on.
+###### Default Value
+None
+###### Required
+Yes
+#### converter.filter.value
+###### Description
+The value that will be used in the equality operation to filter out records.
+###### Default Value
+None
+###### Required
+Yes
+## AvroFieldRetrieverConverter Properties <a name="AvroFieldRetrieverConverter-Properties"></a>
+This converter takes a specific field from an Avro record and returns its value.
+#### converter.avro.extractor.field.path
+###### Description
+The field in the Avro record to retrieve. If it is a nested field, then each level must be separated by a period.
+###### Default Value
+None
+###### Required
+Yes
+# Fork Properties <a name="Fork-Properties"></a>
+Properties for Gobblin's fork operator.
+#### fork.operator.class 
+###### Description
+Fully qualified name of the ForkOperator class.
+###### Default Value
+com.linkedin.uif.fork.IdentityForkOperator 
+###### Required
+No
+#### fork.branches 
+###### Description
+Number of fork branches.
+###### Default Value
+1 
+###### Required
+No
+#### fork.branch.name.${branch index} 
+###### Description
+Name of a fork branch with the given index, e.g., 0 and 1.
+###### Default Value
+fork_${branch index}, e.g., fork_0 and fork_1. 
+###### Required
+No
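+A sketch of a two-branch fork configuration using the default identity operator (branch names are illustrative):
+```
+fork.operator.class=com.linkedin.uif.fork.IdentityForkOperator
+fork.branches=2
+fork.branch.name.0=avro_branch
+fork.branch.name.1=json_branch
+```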
+# Quality Checker Properties <a name="Quality-Checker-Properties"></a>
+#### qualitychecker.task.policies 
+###### Description
+Comma-separated list of fully qualified names of the TaskLevelPolicy classes that will run at the end of each Task.
+###### Default Value
+None
+###### Required
+No
+#### qualitychecker.task.policy.types 
+###### Description
+OPTIONAL implies the corresponding class in qualitychecker.task.policies is optional: if it fails, the Task will still succeed. FAIL implies that if the corresponding class fails then the Task will fail too.
+###### Default Value
+OPTIONAL 
+###### Required
+No
+#### qualitychecker.row.policies 
+###### Description
+Comma-separated list of fully qualified names of the RowLevelPolicy classes that will run on each record.
+###### Default Value
+None
+###### Required
+No
+#### qualitychecker.row.policy.types 
+###### Description
+OPTIONAL implies the corresponding class in qualitychecker.row.policies is optional: if it fails, the Task will still succeed. FAIL implies that if the corresponding class fails then the Task will fail too. ERR_FILE implies that if the record does not pass the test then the record will be written to an error file.
+###### Default Value
+OPTIONAL 
+###### Required
+No
+#### qualitychecker.row.err.file 
+###### Description
+The quality checker will write the current record to the location specified by this parameter, if the current record fails to pass the quality checkers specified by qualitychecker.row.policies; this file will only be written to if the quality checker policy type is ERR_FILE.
+###### Default Value
+None
+###### Required
+No
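+As a sketch, a row-level policy whose failing records should be written to an error file might be wired up like this (the policy class name is hypothetical; substitute a real RowLevelPolicy implementation):
+```
+# hypothetical policy class for illustration only
+qualitychecker.row.policies=com.mycompany.policies.MyRowLevelPolicy
+qualitychecker.row.policy.types=ERR_FILE
+qualitychecker.row.err.file=/gobblin/err/row-errors
+```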
+# Writer Properties <a name="Writer-Properties"></a>
+#### writer.destination.type 
+###### Description
+Writer destination type; currently only writing to HDFS is supported.
+###### Default Value
+HDFS 
+###### Required
+No
+#### writer.output.format 
+###### Description
+Writer output format; currently only Avro is supported.
+###### Default Value
+AVRO 
+###### Required
+No
+#### writer.fs.uri 
+###### Description
+File system URI for writer output.
+###### Default Value
+file:/// 
+###### Required
+No
+#### writer.staging.dir 
+###### Description
+Staging directory of writer output. All staging data that the writer produces will be placed in this directory, but all the data will be eventually moved to the writer.output.dir.
+###### Default Value
+None
+###### Required
+Yes
+#### writer.output.dir 
+###### Description
+Output directory of writer output. All output data that the writer produces will be placed in this directory, but all the data will be eventually moved to the final directory by the publisher.
+###### Default Value
+None
+###### Required
+Yes
+#### writer.builder.class 
+###### Description
+Fully qualified name of the writer builder class.
+###### Default Value
+com.linkedin.uif.writer.AvroDataWriterBuilder
+###### Required
+No
+#### writer.file.path 
+###### Description
+The path where the writer will write its data. Data in this directory will be copied to its final output directory by the DataPublisher.
+###### Default Value
+None
+###### Required
+Yes
+#### writer.file.name 
+###### Description
+The name of the file the writer writes to.
+###### Default Value
+part 
+###### Required
+Yes
+
+#### writer.partitioner.class
+###### Description
+Partitioner used for distributing records into multiple output files. `writer.builder.class` must be a subclass of `PartitionAwareDataWriterBuilder`, otherwise Gobblin will throw an error. 
+###### Default Value
+None (will not use partitioner)
+###### Required
+No
+
+#### writer.buffer.size 
+###### Description
+Writer buffer size in bytes. This parameter is only applicable for the AvroHdfsDataWriter.
+###### Default Value
+4096 
+###### Required
+No
+#### writer.deflate.level 
+###### Description
+Writer deflate level. Deflate is a type of compression for Avro data.
+###### Default Value
+9 
+###### Required
+No
+#### writer.codec.type 
+###### Description
+This is used to specify the type of compression used when writing data out. Possible values are NOCOMPRESSION, DEFLATE, SNAPPY.
+###### Default Value
+DEFLATE 
+###### Required
+No
+#### writer.eager.initialization
+###### Description
+This is used to control writer creation. If set to true, the writer is created before any records are read; this means an empty file will be created even if no records were read.
+###### Default Value
+False 
+###### Required
+No
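+A minimal writer configuration for Avro output on HDFS might look like the following (URIs and paths are placeholders):
+```
+writer.destination.type=HDFS
+writer.output.format=AVRO
+writer.fs.uri=hdfs://namenode:8020
+writer.staging.dir=/gobblin/task-staging
+writer.output.dir=/gobblin/task-output
+writer.file.path=wikipedia
+writer.codec.type=SNAPPY
+```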
+# Data Publisher Properties <a name="Data-Publisher-Properties"></a>
+#### data.publisher.type 
+###### Description
+The fully qualified name of the DataPublisher class to run. The DataPublisher is responsible for publishing task data once all Tasks have been completed.
+###### Default Value
+None
+###### Required
+Yes
+#### data.publisher.final.dir 
+###### Description
+The final output directory where the data should be published.
+###### Default Value
+None
+###### Required
+Yes
+#### data.publisher.replace.final.dir
+###### Description
+A boolean; if true and the final output directory already exists, then the data will not be committed. If false and the final output directory already exists, then it will be overwritten.
+###### Default Value
+None
+###### Required
+Yes
+#### data.publisher.final.name
+###### Description
+The final name of the file that is produced by Gobblin. By default, Gobblin already assigns a unique name to each file it produces. If that default name needs to be overridden then this parameter can be used. Typically, this parameter should be set on a per workunit basis so that file names don't collide.
+###### Default Value
+
+###### Required
+No
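+For instance, a job using the BaseDataPublisher shipped with gobblin-core might configure publishing as follows (the path is a placeholder):
+```
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+data.publisher.final.dir=/data/gobblin/job-output
+data.publisher.replace.final.dir=false
+```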
+# Generic Properties <a name="Generic-Properties"></a>
+These properties are used throughout multiple Gobblin components.
+#### fs.uri
+###### Description
+Default file system URI for all file storage; over-writable by more specific configuration properties.
+###### Default Value
+file:///
+###### Required
+No
\ No newline at end of file
diff --git a/gobblin-docs/developer-guide/Customization-for-Converter-and-Operator.md b/Customization-for-Converter-and-Operator.md
similarity index 100%
rename from gobblin-docs/developer-guide/Customization-for-Converter-and-Operator.md
rename to Customization-for-Converter-and-Operator.md
diff --git a/gobblin-docs/developer-guide/Customization-for-New-Source.md b/Customization-for-New-Source.md
similarity index 100%
rename from gobblin-docs/developer-guide/Customization-for-New-Source.md
rename to Customization-for-New-Source.md
diff --git a/Exactly-Once-Support.md b/Exactly-Once-Support.md
new file mode 100644
index 0000000..612bbeb
--- /dev/null
+++ b/Exactly-Once-Support.md
@@ -0,0 +1,168 @@
+This page outlines the design for exactly-once support in Gobblin. 
+
+Currently the flow of publishing data in Gobblin is:
+
+1. DataWriter writes to staging folder 
+2. DataWriter moves files from staging folder to task output folder
+3. Publisher moves files from task output folder to job output folder
+4. Persists checkpoints (watermarks) to state store
+5. Delete staging folder and task-output folder.
+
+This flow does not guarantee exactly-once delivery; rather, it guarantees at-least-once delivery, because if something bad happens in step 4, or between steps 3 and 4, it is possible that data is published but checkpoints are not, and the next run will re-extract and re-publish those records.
+
+To guarantee exactly-once, steps 3 & 4 should be atomic.
+
+## Achieving Exactly-Once Delivery with `CommitStepStore`
+
+The idea is similar to write-ahead logging. Before doing the atomic steps (i.e., steps 3 & 4), first write all these steps (referred to as `CommitStep`s) into a `CommitStepStore`. In this way, if a failure happens during the atomic steps, the next run can continue doing the rest of the steps before ingesting more data for this dataset.
+
+**Example**: Suppose we have a Kafka-HDFS ingestion job, where each Kafka topic is a dataset. Suppose a task generates three output files for topic 'MyTopic':
+
+```
+task-output/MyTopic/2015-12-09/1.avro
+task-output/MyTopic/2015-12-09/2.avro
+task-output/MyTopic/2015-12-10/1.avro
+```
+
+which should be published to
+```
+job-output/MyTopic/2015-12-09/1.avro
+job-output/MyTopic/2015-12-09/2.avro
+job-output/MyTopic/2015-12-10/1.avro
+```
+
+And suppose this topic has two partitions, and their checkpoints, i.e., the actual high watermarks, are `offset=100` and `offset=200`.
+
+In this case, there will be 5 CommitSteps for this dataset:
+
+1. `FsRenameCommitStep`: rename `task-output/MyTopic/2015-12-09/1.avro` to `job-output/MyTopic/2015-12-09/1.avro`
+2. `FsRenameCommitStep`: rename `task-output/MyTopic/2015-12-09/2.avro` to `job-output/MyTopic/2015-12-09/2.avro`
+3. `FsRenameCommitStep`: rename `task-output/MyTopic/2015-12-10/1.avro` to `job-output/MyTopic/2015-12-10/1.avro`
+4. `HighWatermarkCommitStep`: set the high watermark for partition `MyTopic:0 = 100`
+5. `HighWatermarkCommitStep`: set the high watermark for partition `MyTopic:1 = 200`
+
+If all these `CommitStep`s are successful, we can proceed with deleting task-output folder and deleting the above `CommitStep`s from the `CommitStepStore`. If any of these steps fails, these steps will not be deleted. When the next run starts, for each dataset, it will check whether there are `CommitStep`s for this dataset in the CommitStepStore. If there are, it means the previous run may not have successfully executed some of these steps, so it will verify whether each step has been done, and re-do the step if not. If the re-do fails for a certain number of times, this dataset will be skipped. Thus the `CommitStep` interface will have two methods: `verify()` and `execute()`.
+
+## Scalability
+
+The above approach potentially affects scalability for two reasons:
+
+1. The driver needs to write all `CommitStep`s to the `CommitStepStore` for each dataset, once it determines that all tasks for the dataset have finished. This may cause scalability issues if there are too many `CommitStep`s, too many datasets, or too many tasks.
+2. Upon the start of the next run, the driver needs to verify all `CommitStep`s and redo the `CommitStep`s that the previous run failed to do. This may also cause scalability issues if there are too many `CommitStep`s.
+
+Both issues can be resolved by moving the majority of the work to containers, rather than doing it in the driver. 
+
+For #1, we can make each container responsible for writing `CommitStep`s for a subset of the datasets. Each container will keep polling the `TaskStateStore` to determine whether all tasks for each dataset that it is responsible for have finished, and if so, it writes `CommitStep`s for this dataset to the `CommitStepStore`.
+
+#2 can also easily be parallelized by making each container responsible for a subset of the datasets.
+
+## APIs
+
+**CommitStep**:
+``` java
+/**
+ * A step during committing in a Gobblin job that should be atomically executed with other steps.
+ */
+public abstract class CommitStep {
+
+  private static final Gson GSON = new Gson();
+
+  public static abstract class Builder<T extends Builder<?>> {
+  }
+
+  protected CommitStep(Builder<?> builder) {
+  }
+
+  /**
+   * Verify whether the CommitStep has been done.
+   */
+  public abstract boolean verify() throws IOException;
+
+  /**
+   * Execute a CommitStep.
+   */
+  public abstract boolean execute() throws IOException;
+
+  public static CommitStep get(String json, Class<? extends CommitStep> clazz) throws IOException {
+    return GSON.fromJson(json, clazz);
+  }
+}
+```
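+
+As a concrete illustration, a file-rename step like the `FsRenameCommitStep` in the example above might be implemented roughly as follows. This is a simplified sketch, not the actual Gobblin implementation: the constructor bypasses most of the `Builder` plumbing and the step is assumed to run against a Hadoop `FileSystem`.
+
+``` java
+import java.io.IOException;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Simplified sketch of a CommitStep that renames a file on a Hadoop FileSystem.
+ * Illustrative only; a real implementation would go through the Builder API above.
+ */
+public class FsRenameCommitStep extends CommitStep {
+
+  private final FileSystem fs;
+  private final Path src;
+  private final Path dst;
+
+  public FsRenameCommitStep(Builder<?> builder, FileSystem fs, Path src, Path dst) {
+    super(builder);
+    this.fs = fs;
+    this.src = src;
+    this.dst = dst;
+  }
+
+  /** The step is already done if the source file is gone and the destination exists. */
+  @Override
+  public boolean verify() throws IOException {
+    return !this.fs.exists(this.src) && this.fs.exists(this.dst);
+  }
+
+  /** Re-running a rename whose source no longer exists is a no-op, which keeps the step idempotent. */
+  @Override
+  public boolean execute() throws IOException {
+    if (!this.fs.exists(this.src)) {
+      return this.fs.exists(this.dst);
+    }
+    return this.fs.rename(this.src, this.dst);
+  }
+}
+```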
+
+**CommitSequence**:
+``` java
+@Slf4j
+public class CommitSequence {
+  private final String storeName;
+  private final String datasetUrn;
+  private final List<CommitStep> steps;
+  private final CommitStepStore commitStepStore;
+
+  public CommitSequence(String storeName, String datasetUrn, List<CommitStep> steps, CommitStepStore commitStepStore) {
+    this.storeName = storeName;
+    this.datasetUrn = datasetUrn;
+    this.steps = steps;
+    this.commitStepStore = commitStepStore;
+  }
+
+  public boolean commit() {
+    try {
+      for (CommitStep step : this.steps) {
+        if (!step.verify()) {
+          step.execute();
+        }
+      }
+      this.commitStepStore.remove(this.storeName, this.datasetUrn);
+      return true;
+    } catch (Throwable t) {
+      log.error("Commit failed for dataset " + this.datasetUrn, t);
+      return false;
+    }
+  }
+}
+```
+
+**CommitStepStore**:
+``` java
+/**
+ * A store for {@link CommitStep}s.
+ */
+public interface CommitStepStore {
+
+  /**
+   * Create a store with the given name.
+   */
+  public boolean create(String storeName) throws IOException;
+
+  /**
+   * Create a new dataset URN in a store.
+   */
+  public boolean create(String storeName, String datasetUrn) throws IOException;
+
+  /**
+   * Whether a dataset URN exists in a store.
+   */
+  public boolean exists(String storeName, String datasetUrn) throws IOException;
+
+  /**
+   * Remove a given store.
+   */
+  public boolean remove(String storeName) throws IOException;
+
+  /**
+   * Remove all {@link CommitStep}s for the given dataset URN from the store.
+   */
+  public boolean remove(String storeName, String datasetUrn) throws IOException;
+
+  /**
+   * Put a {@link CommitStep} with the given dataset URN into the store.
+   */
+  public boolean put(String storeName, String datasetUrn, CommitStep step) throws IOException;
+
+  /**
+   * Get the {@link CommitSequence} associated with the given dataset URN in the store.
+   */
+  public CommitSequence getCommitSequence(String storeName, String datasetUrn) throws IOException;
+
+}
+```
\ No newline at end of file
diff --git a/Existing-Reporters.md b/Existing-Reporters.md
new file mode 100644
index 0000000..2bfe4a9
--- /dev/null
+++ b/Existing-Reporters.md
@@ -0,0 +1,15 @@
+Metric Reporters
+================
+
+* [Output Stream Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/OutputStreamReporter.java): allows printing metrics to any OutputStream, including STDOUT and files.
+* [Kafka Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/kafka/KafkaReporter.java): emits metrics to Kafka topic as Json messages.
+* [Kafka Avro Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/kafka/KafkaAvroReporter.java): emits metrics to Kafka topic as Avro messages with schema [MetricReport](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/avro/MetricReport.avsc).
+* [Graphite Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/graphite/GraphiteReporter.java): emits metrics to Graphite. This reporter has a different, deprecated construction API included in its javadoc.
+* [Influx DB Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/influxdb/InfluxDBReporter.java): emits metrics to Influx DB. This reporter has a different, deprecated construction API included in its javadoc.
+* [Hadoop Counter Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/hadoop/HadoopCounterReporter.java): emits metrics as Hadoop counters at the end of the execution. Available for old and new Hadoop API. This reporter has a different, deprecated construction API included in its javadoc. Due to limits on the number of Hadoop counters that can be created, this reporter is not recommended except for applications with very few metrics.
+
+Event Reporters
+===============
+* [Output Stream Event Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/OutputStreamEventReporter.java): Emits events to any output stream, including STDOUT and files.
+* [Kafka Event Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/kafka/KafkaEventReporter.java): Emits events to Kafka topic as Json messages.
+* [Kafka Avro Event Reporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/kafka/KafkaAvroEventReporter.java): Emits events to Kafka topic as Avro messages using the schema [GobblinTrackingEvent](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/avro/GobblinTrackingEvent.avsc).
\ No newline at end of file
diff --git a/FAQs.md b/FAQs.md
new file mode 100644
index 0000000..73fa6af
--- /dev/null
+++ b/FAQs.md
@@ -0,0 +1,98 @@
+Table of Contents
+---------------------------------------
+- [Gobblin](#gobblin)
+  - [General Questions <a name="General-Questions"></a>](#general-questions-a-namegeneral-questionsa)
+        - [What is Gobblin?](#what-is-gobblin)
+        - [What programming languages does Gobblin support?](#what-programming-languages-does-gobblin-support)
+        - [Does Gobblin require any external software to be installed?](#does-gobblin-require-any-external-software-to-be-installed)
+        - [What Hadoop version can Gobblin run on?](#what-hadoop-version-can-gobblin-run-on)
+        - [How do I run and schedule a Gobblin job?](#how-do-i-run-and-schedule-a-gobblin-job)
+        - [How is Gobblin different from Sqoop?](#how-is-gobblin-different-from-sqoop)
+  - [Technical Questions <a name="Technical-Questions"></a>](#technical-questions-a-nametechnical-questionsa)
+        - [When running on Hadoop, each map task quickly reaches 100 Percent completion, but then stalls for a long time. Why does this happen?](#when-running-on-hadoop-each-map-task-quickly-reaches-100-percent-completion-but-then-stalls-for-a-long-time-why-does-this-happen)
+        - [Why does Gobblin on Hadoop stall for a long time between adding files to the DistrbutedCache, and launching the actual job?](#why-does-gobblin-on-hadoop-stall-for-a-long-time-between-adding-files-to-the-distrbutedcache-and-launching-the-actual-job)
+        - [How do I fix `UnsupportedFileSystemException: No AbstractFileSystem for scheme: null`?](#how-do-i-fix-unsupportedfilesystemexception-no-abstractfilesystem-for-scheme-null)
+        - [How do I compile Gobblin against CDH?](#how-do-i-compile-gobblin-against-cdh)
+        - [Resolve Gobblin-on-MR Exception `IOException: Not all tasks running in mapper attempt_id completed successfully`](#resolve-gobblin-on-mr-exception-ioexception-not-all-tasks-running-in-mapper-attempt_id-completed-successfully)
+        - [Gradle Build Fails With `Cannot invoke method getURLs on null object`](#gradle-build-fails-with-cannot-invoke-method-geturls-on-null-object)
+- [Gradle](#gradle)
+  - [Technical Questions](#technical-questions)
+      - [How do I add a new external dependency?](#how-do-i-add-a-new-external-dependency)
+      - [How do I add a new Maven Repository to pull artifacts from?](#how-do-i-add-a-new-maven-repository-to-pull-artifacts-from)
+
+# Gobblin
+
+## General Questions <a name="General-Questions"></a>
+
+##### What is Gobblin?
+
+Gobblin is a universal ingestion framework. Its goal is to pull data from any source into an arbitrary data store. One major use case for Gobblin is pulling data into Hadoop. Gobblin can pull data from file systems, SQL stores, and data exposed by REST APIs. See the Gobblin [Home](https://github.com/linkedin/gobblin/wiki) page for more information.
+
+##### What programming languages does Gobblin support?
+
+Gobblin currently only supports Java 6 and up.
+
+##### Does Gobblin require any external software to be installed?
+
+The machine that Gobblin is built on must have Java installed, and the `$JAVA_HOME` environment variable must be set.
+
+##### What Hadoop version can Gobblin run on?
+
+Gobblin can run on both Hadoop 1.x and Hadoop 2.x. By default, Gobblin compiles against Hadoop 1.2.1, and can be compiled against Hadoop 2.3.0 by running `./gradlew -PuseHadoop2 clean build`.
+
+##### How do I run and schedule a Gobblin job?
+
+Check out the [Deployment](Gobblin Deployment) page for information on how to run and schedule Gobblin jobs. Check out the [Configuration](Configuration Properties Glossary) page for information on how to set proper configuration properties for a job.
+
+##### How is Gobblin different from Sqoop?
+
+Sqoop's main focus is bulk import and export of data between relational databases and HDFS; it lacks the ETL functionality of data cleansing, data transformation, and data quality checks that Gobblin provides. Gobblin is also capable of pulling from any data source (e.g. file systems, RDBMSs, REST APIs).
+
+## Technical Questions <a name="Technical-Questions"></a>
+
+##### When running on Hadoop, each map task quickly reaches 100 Percent completion, but then stalls for a long time. Why does this happen?
+
+Gobblin currently uses Hadoop map tasks as a container for running Gobblin tasks. Each map task runs 1 or more Gobblin workunits, and the progress of each workunit is not hooked into the progress of each map task. Even though the Hadoop job reports 100% completion, Gobblin is still doing work. See the [Gobblin Deployment](Gobblin Deployment) page for more information.
+
+##### Why does Gobblin on Hadoop stall for a long time between adding files to the DistrbutedCache, and launching the actual job?
+
+Gobblin takes all WorkUnits created by the Source class and serializes each one into a file on Hadoop. These files are read by each map task, and are deserialized into Gobblin Tasks. These Tasks are then run by the map-task. The reason the job stalls is that Gobblin is writing all these files to HDFS, which can take a while especially if there are a lot of tasks to run. See the [Gobblin Deployment](Gobblin Deployment) page for more information.
+
+##### How do I fix `UnsupportedFileSystemException: No AbstractFileSystem for scheme: null`?
+
+This error typically occurs due to Hadoop version conflict issues. If Gobblin is compiled against a specific Hadoop version, but then deployed on a different Hadoop version or installation, this error may be thrown. For example, if you simply compile Gobblin using `./gradlew clean build -PuseHadoop2`, but deploy Gobblin to a cluster with [CDH](https://www.cloudera.com/content/www/en-us/products/apache-hadoop/key-cdh-components.html) installed, you may hit this error.
+
+It is important to realize that the `gobblin-dist.tar.gz` file produced by `./gradlew clean build` will include all the Hadoop jar dependencies; and if one follows the [MR deployment guide](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment#Hadoop-MapReduce-Deployment), Gobblin will be launched with these dependencies on the classpath.
+
+To fix this take the following steps:
+
+* Delete all the Hadoop jars from the Gobblin `lib` folder
+* Ensure that the environment variable `HADOOP_CLASSPATH` is set and points to a directory containing the Hadoop libraries for the cluster
+
+##### How do I compile Gobblin against CDH?
+
+[Cloudera Distributed Hadoop](https://www.cloudera.com/content/www/en-us/products/apache-hadoop/key-cdh-components.html) (often abbreviated as CDH) is a popular Hadoop distribution. Typically, when running Gobblin on a CDH cluster it is recommended that one also compile Gobblin against the same CDH version. Not doing so may cause unexpected runtime behavior. To compile against a specific CDH version simply use the `hadoopVersion` parameter. For example, to compile against version `2.5.0-cdh5.3.0` run `./gradlew clean build -PuseHadoop2 -PhadoopVersion=2.5.0-cdh5.3.0`.
+
+##### Resolve Gobblin-on-MR Exception `IOException: Not all tasks running in mapper attempt_id completed successfully`
+
+This exception usually just means that a Hadoop Map Task running Gobblin Tasks threw some exception. Unfortunately, the exception isn't truly indicative of the underlying problem; all it really says is that something went wrong in the Gobblin Task. Each Hadoop Map Task has its own log file, and it is often easiest to look at the logs of the Map Task when debugging this problem. There are multiple ways to do this, but one of the easiest is to execute `yarn logs -applicationId <application ID> [OPTIONS]`.
+
+##### Gradle Build Fails With `Cannot invoke method getURLs on null object`
+
+Add `-x test` to build the project without running the tests; this will make the exception go away. If one needs to run the tests then make sure [Java Cryptography Extension](https://en.wikipedia.org/wiki/Java_Cryptography_Extension) is installed.
+
+# Gradle
+
+## Technical Questions
+
+#### How do I add a new external dependency?
+
+Say I want to add [`oozie-core-4.2.0.jar`](http://mvnrepository.com/artifact/org.apache.oozie/oozie-core/4.2.0) as a dependency to the `gobblin-scheduler` subproject. I would first open the file `build.gradle` and add the following entry to the `ext.externalDependency` array: `"oozieCore": "org.apache.oozie:oozie-core:4.2.0"`.
+
+Then in the `gobblin-scheduler/build.gradle` file I would add the following line to the dependency block: `compile externalDependency.oozieCore`.
+
+#### How do I add a new Maven Repository to pull artifacts from?
+
+Often times, one may have important artifacts stored in a local or private Maven repository. As of 01/21/2016 Gobblin only pulls artifacts from the following Maven Repositories: [Maven Central](http://repo1.maven.org/maven/), [Conjars](http://conjars.org/repo), and [Cloudera](https://repository.cloudera.com/artifactory/cloudera-repos/).
+
+In order to add another Maven Repository, modify the `defaultEnvironment.gradle` file and add the new repository using the same pattern as the existing ones.
\ No newline at end of file
diff --git a/Feature-List.mediawiki b/Feature-List.mediawiki
new file mode 100644
index 0000000..c9966cb
--- /dev/null
+++ b/Feature-List.mediawiki
@@ -0,0 +1,33 @@
+Currently, Gobblin supports the following feature list:
+
+
+* Different Data Sources
+{|
+!Source Type 
+!Protocol API
+!Vendors
+|- valign="middle"
+|RDBMS
+|JDBC
+|MySQL/SQLServer
+|-valign="middle"
+|Files
+|HDFS/SFTP/LocalFS
+|N/A
+|-
+|Salesforce
+|REST
+|Salesforce
+|}
+<BR>
+* Different Pulling Types
+** SNAPSHOT-ONLY: Pull the snapshot of one dataset.
+** SNAPSHOT-APPEND: Pull delta changes since last run, optionally merge delta changes into snapshot (Delta changes include updates to the dataset since last run).
+** APPEND-ONLY: Pull delta changes since last run, and append to dataset.
+<BR>
+* Different Deployment Types
+** standalone deploy on a single machine
+** cluster deploy on hadoop 1.2.1, hadoop 2.3.0
+<BR>
+* Compaction
+**Merge delta changes into snapshot.
\ No newline at end of file
diff --git a/Getting-Started.md b/Getting-Started.md
new file mode 100644
index 0000000..7a1f0f3
--- /dev/null
+++ b/Getting-Started.md
@@ -0,0 +1,116 @@
+This page will guide you to set up Gobblin, and run a quick and simple first job. Currently, Gobblin requires JDK 7 and later to compile and run.
+
+# Download and Build
+
+* Checkout Gobblin:
+
+```bash
+git clone https://github.com/linkedin/gobblin.git
+```
+
+* Build Gobblin: Gobblin is built using Gradle.
+
+```bash
+cd gobblin
+./gradlew clean build
+```
+
+To build against Hadoop 2, add `-PuseHadoop2`. To skip unit tests, add `-x test`.
+
+# Run Your First Job
+
+Here we illustrate how to run a simple job. This job will pull the five latest revisions of each of the four Wikipedia pages: NASA, Linkedin, Parris_Cues and Barbara_Corcoran. A total of 20 records, each corresponding to one revision, should be pulled if the job is successfully run. The records will be stored as Avro files.
+
+Gobblin can run either in standalone mode or on MapReduce. In this example we will run Gobblin in standalone mode.
+
+This page explains how to run the job from the terminal. You may also run this job from your favorite IDE (IntelliJ is recommended).
+
+## Preliminary 
+
+Each Gobblin job minimally involves several constructs, e.g. [Source](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/Source.java), [Extractor](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/extractor/Extractor.java), [DataWriter](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/writer/DataWriter.java) and [DataPublisher](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/publisher/DataPublisher.java). As the names suggest, Source defines the source to pull data from, Extractor implements the logic to extract data records, DataWriter defines the way the extracted records are output, and DataPublisher publishes the data to the final output location. A job may optionally have one or more Converters, which transform the extracted records, as well as one or more PolicyCheckers that check the quality of the extracted records and determine whether they conform to certain policies.
+
+Some of the classes relevant to this example include [WikipediaSource](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/java/gobblin/example/wikipedia/WikipediaSource.java), [WikipediaExtractor](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/java/gobblin/example/wikipedia/WikipediaExtractor.java), [WikipediaConverter](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/java/gobblin/example/wikipedia/WikipediaConverter.java), [AvroHdfsDataWriter](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/writer/AvroHdfsDataWriter.java) and [BaseDataPublisher](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/publisher/BaseDataPublisher.java).
+
+To run Gobblin in standalone mode we need a Gobblin configuration file (such as [gobblin-standalone.properties](https://github.com/linkedin/gobblin/blob/master/conf/gobblin-standalone.properties)). And for each job we wish to run, we also need a job configuration file (such as [wikipedia.pull](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull)). The Gobblin configuration file, which is passed to Gobblin as a command line argument, should contain a property `jobconf.dir` which specifies where the job configuration files are located. By default, `jobconf.dir` points to environment variable `GOBBLIN_JOB_CONFIG_DIR`. Each file in `jobconf.dir` with extension `.job` or `.pull` is considered a job configuration file, and Gobblin will launch a job for each such file. For more information on Gobblin deployment in standalone mode, refer to the [Standalone Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment#Standalone-Deployment) page.
+
+A list of commonly used configuration properties can be found here: [Configuration Properties Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary).
+
+## Steps
+
+* Create a folder to store the job configuration file. Put [wikipedia.pull](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull) in this folder, and set environment variable `GOBBLIN_JOB_CONFIG_DIR` to point to this folder. Also, make sure that the environment variable `JAVA_HOME` is set correctly.
+
+* Create a folder as Gobblin's working directory. Gobblin will write job output as well as other information there, such as locks and state-store (for more information, see the [Standalone Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment#Standalone-Deployment) page). Set environment variable `GOBBLIN_WORK_DIR` to point to that folder.  
+<!---stakiar can we list all the folders under gobblin-dist and explain what each folder means -->
+* Unpack Gobblin distribution:
+
+```bash
+tar -zxvf gobblin-dist-[project-version].tar.gz
+cd gobblin-dist
+```
+* Launch Gobblin:
+
+```bash
+bin/gobblin-standalone.sh start
+```
+
+This script will launch Gobblin and pass the Gobblin configuration file ([gobblin-standalone.properties](https://github.com/linkedin/gobblin/blob/master/conf/gobblin-standalone.properties)) as an argument.
+
+The job log, which contains the progress and status of the job, will be written into `logs/gobblin-current.log` (to change where the log is written, modify the Log4j configuration file `conf/log4j-standalone.xml`). Stdout will be written into `nohup.out`.
+
+Among the job logs there should be the following information:
+
+```
+INFO JobScheduler - Loaded 1 job configuration
+INFO  AbstractJobLauncher - Starting job job_PullFromWikipedia_1422040355678
+INFO  TaskExecutor - Starting the task executor
+INFO  LocalTaskStateTracker2 - Starting the local task state tracker
+INFO  AbstractJobLauncher - Submitting task task_PullFromWikipedia_1422040355678_0 to run
+INFO  TaskExecutor - Submitting task task_PullFromWikipedia_1422040355678_0
+INFO  AbstractJobLauncher - Waiting for submitted tasks of job job_PullFromWikipedia_1422040355678 to complete... to complete...
+INFO  AbstractJobLauncher - 1 out of 1 tasks of job job_PullFromWikipedia_1422040355678 are running
+INFO  WikipediaExtractor - 5 record(s) retrieved for title NASA
+INFO  WikipediaExtractor - 5 record(s) retrieved for title LinkedIn
+INFO  WikipediaExtractor - 5 record(s) retrieved for title Parris_Cues
+INFO  WikipediaExtractor - 5 record(s) retrieved for title Barbara_Corcoran
+INFO  Task - Extracted 20 data records
+INFO  Fork-0 - Committing data of branch 0 of task task_PullFromWikipedia_1422040355678_0
+INFO  LocalTaskStateTracker2 - Task task_PullFromWikipedia_1422040355678_0 completed in 2334ms with state SUCCESSFUL
+INFO  AbstractJobLauncher - All tasks of job job_PullFromWikipedia_1422040355678 have completed
+INFO  TaskExecutor - Stopping the task executor 
+INFO  LocalTaskStateTracker2 - Stopping the local task state tracker
+INFO  AbstractJobLauncher - Publishing job data of job job_PullFromWikipedia_1422040355678 with commit policy COMMIT_ON_FULL_SUCCESS
+INFO  AbstractJobLauncher - Persisting job/task states of job job_PullFromWikipedia_1422040355678
+```
+
+* After the job is done, stop Gobblin by running
+
+```bash
+bin/gobblin-standalone.sh stop
+```
+
+The job output is written in `GOBBLIN_WORK_DIR/job-output` folder as an Avro file.
+
+To see the content of the job output, use the Avro tools to convert Avro to JSON. Download the latest version of Avro tools (e.g. avro-tools-1.7.7.jar):
+
+```bash
+curl -O http://central.maven.org/maven2/org/apache/avro/avro-tools/1.7.7/avro-tools-1.7.7.jar
+```
+
+and run 
+
+```bash
+java -jar avro-tools-1.7.7.jar tojson --pretty [job_output].avro > output.json
+```
+
+`output.json` will contain all retrieved records in JSON format.
+
+Note that since this job configuration file we used ([wikipedia.pull](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull)) doesn't specify a job schedule, the job will run immediately and will run only once. To schedule a job to run at a certain time and/or repeatedly, set the `job.schedule` property with a cron-based syntax. For example, `job.schedule=0 0/2 * * * ?` will run the job every two minutes. See [this link](http://www.quartz-scheduler.org/documentation/quartz-1.x/tutorials/crontrigger) (Quartz CronTrigger) for more details.
+
+
+# Other Example Jobs
+
+Besides the Wikipedia example, we have another example job [SimpleJson](https://github.com/linkedin/gobblin/tree/master/gobblin-example/src/main/java/gobblin/example/simplejson), which extracts records from JSON files and stores them in Avro files.
+
+To create your own jobs, simply implement the relevant interfaces such as [Source](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/Source.java), [Extractor](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/extractor/Extractor.java), [Converter](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/converter/Converter.java) and [DataWriter](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/writer/DataWriter.java). In the job configuration file, set properties such as `source.class` and `converter.class` to point to these classes.
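+
+For illustration, a minimal job configuration file might look like the following hypothetical sketch (all class names are placeholders; consult the [Configuration Properties Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary) and [wikipedia.pull](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull) for the authoritative property keys):
+
+```
+# my-first-job.pull -- hypothetical job file placed in GOBBLIN_JOB_CONFIG_DIR
+job.name=MyFirstJob
+job.group=Examples
+source.class=com.example.MySource
+converter.classes=com.example.MyConverter
+# writer, publisher, and quality-checker settings are listed in the glossary
+```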
+
+On a side note: while users are free to directly implement the Extractor interface (e.g., WikipediaExtractor), Gobblin also provides several extractor implementations based on commonly used protocols, e.g., [RestApiExtractor](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/restapi/RestApiExtractor.java), [JdbcExtractor](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/jdbc/JdbcExtractor.java), [SftpExtractor](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/sftp/SftpExtractor.java), etc. Users are encouraged to extend these classes to take advantage of existing implementations.
\ No newline at end of file
diff --git a/Gobblin-Architecture.md b/Gobblin-Architecture.md
new file mode 100644
index 0000000..63e7ce8
--- /dev/null
+++ b/Gobblin-Architecture.md
@@ -0,0 +1,145 @@
+Table of Contents
+--------------------
+* [Gobblin Architecture Overview](#gobblin-architecture-overview)
+* [Gobblin Job Flow](#gobblin-job-flow)
+* [Gobblin Constructs](#gobblin-constructs)
+* [Gobblin Task Flow](#gobblin-task-flow)
+* [Job State Management](#job-state-management)
+* [Handling of Failures](#handling-of-failures)
+* [Job Scheduling](#job-scheduling)
+
+Gobblin Architecture Overview
+--------------------
+Gobblin is built around the idea of extensibility, i.e., it should be easy for users to add new adapters or extend existing adapters to work with new sources and start extracting data from the new sources in any deployment settings. The architecture of Gobblin reflects this idea, as shown in Fig. 1 below:
+ 
+<p align="center">
+  <figure>    
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-Architecture-Overview.png alt="Gobblin Image" width="600">
+    <figcaption><br>Fig. 1: Gobblin architecture overview.<br></figcaption>
+  </figure>
+</p> 
+
+A Gobblin job is built on a set of constructs (illustrated by the light green boxes in the diagram above) that work together in a certain way and get the data extraction work done. All the constructs are pluggable through the job configuration and extensible by adding new or extending existing implementations. The constructs will be discussed in [Gobblin Constructs](https://github.com/linkedin/gobblin/wiki/Gobblin-Architecture#gobblin-constructs).
+
+A Gobblin job consists of a set of tasks, each of which corresponds to a unit of work to be done and is responsible for extracting a portion of the data. The tasks of a Gobblin job are executed by the Gobblin runtime (illustrated by the orange boxes in the diagram above) on the deployment setting of choice (illustrated by the red boxes in the diagram above). 
+
+The Gobblin runtime is responsible for running user-defined Gobblin jobs on the deployment setting of choice. It handles the common tasks including job and task scheduling, error handling and task retries, resource negotiation and management, state management, data quality checking, data publishing, etc.
+
+Gobblin currently supports two deployment modes: the standalone mode on a single node and the Hadoop MapReduce mode on a Hadoop cluster. We are also working on adding support for deploying and running Gobblin as a native application on [YARN](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). Details on deployment of Gobblin can be found in [Gobblin Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment).
+
+The running and operation of Gobblin are supported by a few components and utilities (illustrated by the blue boxes in the diagram above) that handle important things such as metadata management, state management, metric collection and reporting, and monitoring. 
+
+Gobblin Job Flow
+----------------
+A Gobblin job is responsible for extracting data in a defined scope/range from a data source and writing data to a sink such as HDFS. It manages the entire lifecycle of data ingestion in a certain flow as illustrated by Fig. 2 below.
+
+<p align="center">
+  <figure>
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-Job-Flow.png alt="Gobblin Image" width="500">
+    <figcaption><br>Fig. 2: Gobblin job flow.<br></figcaption>
+  </figure>
+</p>
+
+1. A Gobblin job starts with an optional phase of acquiring a job lock. The purpose of doing this is to prevent the next scheduled run of the same job from starting until the current run finishes. This phase is optional because some job schedulers such as [Azkaban](http://azkaban.github.io/) already do this.
+
+2. The next thing the job does is to create an instance of the `Source` class specified in the job configuration. A `Source` is responsible for partitioning the data ingestion work into a set of `WorkUnit`s, each of which represents a logic unit of work for extracting a portion of the data from a data source. A `Source` is also responsible for creating an `Extractor` for each `WorkUnit`. An `Extractor`, as the name suggests, actually talks to the data source and extracts data from it. The reason for this design is that Gobblin's `Source` is modeled after Hadoop's `InputFormat`, which is responsible for partitioning the input into `Split`s as well as creating a `RecordReader` for each `Split`.
+
+3. From the set of `WorkUnit`s given by the `Source`, the job creates a set of tasks. A task is a runtime counterpart of a `WorkUnit`, which represents a logic unit of work. Normally, a task is created per `WorkUnit`. However, there is a special type of `WorkUnit`s called `MultiWorkUnit` that wraps multiple `WorkUnit`s for which multiple tasks may be created, one per wrapped `WorkUnit`. 
+
+4. The next phase is to launch and run the tasks. How tasks are executed and where they run depend on the deployment setting. In the standalone mode on a single node, tasks are running in a thread pool dedicated to that job, the size of which is configurable on a per-job basis. In the Hadoop MapReduce mode on a Hadoop cluster, tasks are running in the mappers (used purely as containers to run tasks). 
+
+5. After all tasks of the job finish (either successfully or unsuccessfully), the job publishes the data if it is OK to do so. Whether extracted data should be published is determined by the task states and the `JobCommitPolicy` used (configurable). More specifically, extracted data should be published if and only if any one of the following two conditions holds:
+
+  * `JobCommitPolicy.COMMIT_ON_PARTIAL_SUCCESS` is specified in the job configuration.
+  * `JobCommitPolicy.COMMIT_ON_FULL_SUCCESS` is specified in the job configuration and all tasks were successful.
+
+6. After the extracted data is published, the job persists the job/task states into the state store. When the next scheduled run of the job starts, it will load the job/task states of the previous run to get things like watermarks so it knows where to start.
+
+7. During its execution, the job may create some temporary working data that is no longer needed after the job is done, so the job cleans up such temporary data before exiting.
+
+8. Finally, an optional phase of the job is to release the job lock if it was acquired at the beginning. This gives the green light to the next scheduled run of the same job to proceed.
+
+If a Gobblin job is cancelled before it finishes, the job will not persist any job/task state nor commit and publish any data (as the dotted line shows in the diagram).
+
+Gobblin Constructs
+--------------------------------
+As described above, a Gobblin job creates and runs tasks, each of which is responsible for extracting a portion of the data to be pulled by the job. A Gobblin task is created from a `WorkUnit` that represents a unit of work and serves as a container of job configuration at runtime. A task composes the Gobblin constructs into a flow to extract, transform, check data quality on, and finally write each extracted data record to the specified sink. Fig. 3 below gives an overview of the Gobblin constructs that constitute the task flows in a Gobblin job.
+  
+<p align="center">
+  <figure>
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-Constructs.png alt="Gobblin Image" width="800">
+    <figcaption><br>Fig. 3: Gobblin constructs.<br></figcaption>
+  </figure>
+</p>
+
+#### Source and Extractor
+
+A `Source` represents an adapter between a data source and Gobblin and is used by a Gobblin job at the beginning of the job flow. A `Source` is responsible for partitioning the data ingestion work into a set of `WorkUnit`s, each of which represents a logic unit of work for extracting a portion of the data from a data source. 
+
+A `Source` is also responsible for creating an `Extractor` for each `WorkUnit`. An `Extractor`, as the name suggests, actually talks to the data source and extracts data from it. The reason for this design is that Gobblin's `Source` is modeled after Hadoop's `InputFormat`, which is responsible for partitioning the input into `Split`s as well as creating a `RecordReader` for each `Split`. 
+
+Gobblin out-of-the-box provides some built-in `Source` and `Extractor` implementations that work with various types of data sources, e.g., web services offering REST APIs, databases supporting JDBC, FTP/SFTP servers, etc. Currently, `Extractor`s are record-oriented, i.e., an `Extractor` reads one data record at a time, although internally it may choose to pull and cache a batch of data records. We are planning to add options for `Extractor`s to support byte-oriented and file-oriented processing.
+
+#### Converter
+
+A `Converter` is responsible for converting both schema and data records and is the core construct for data transformation. `Converter`s are composable and can be chained together as long as each adjacent pair of `Converter`s is compatible in the input and output schema and data record types. This allows building complex data transformations from simple `Converter`s. Note that a `Converter` converts an input schema to one output schema. It may, however, convert an input data record to zero (`1:0` mapping), one (`1:1` mapping), or many (`1:N` mapping) output data records. Each `Converter` converts every output record of the previous `Converter`, except for the first one, which converts the original extracted data record. When converting a data record, a `Converter` also takes in its own _output converted_ schema, except for the first one, which takes in the original input schema. So each converter first converts the input schema and then uses the output schema in the conversion of each data record. The output schema of each converter is fed into both the converter itself for data record conversion and also the next converter. Fig. 4 explains how `Converter` chaining works using three example converters that have `1:1`, `1:N`, and `1:1` mappings for data record conversion, respectively.
+
+<p align="center">
+  <figure>
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Converters-Explained.png alt="Gobblin Image" width="400">
+    <figcaption><br>Fig. 4: How converter chaining works.<br></figcaption>
+  </figure>
+</p>
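+
+To make these mappings concrete, below is a minimal hypothetical 1:1 converter. It assumes the `Converter` base class exposes `convertSchema` and `convertRecord` roughly as sketched here (the generic parameters being input/output schema and input/output record types); the exact signatures and exception declarations may differ between Gobblin versions.
+
+```java
+// Assumes: import gobblin.configuration.WorkUnitState; import gobblin.converter.Converter;
+// A hypothetical 1:1 converter: the schema passes through unchanged and each
+// string record is upper-cased.
+public class UpperCaseConverter extends Converter<String, String, String, String> {
+
+  @Override
+  public String convertSchema(String inputSchema, WorkUnitState workUnit) {
+    // 1:1 schema conversion: one input schema yields one output schema
+    return inputSchema;
+  }
+
+  @Override
+  public Iterable<String> convertRecord(String outputSchema, String inputRecord, WorkUnitState workUnit) {
+    // The output schema produced by convertSchema is passed back in here,
+    // and a single output record is emitted for every input record (1:1).
+    return java.util.Collections.singletonList(inputRecord.toUpperCase());
+  }
+}
+```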
+
+#### Quality Checker
+
+A `QualityChecker`, as the name suggests, is responsible for data quality checking. There are two types of `QualityChecker`s: one that checks individual data records and decides if each record should proceed to the next phase in the task flow, and another that checks the entire task output and decides if the data can be committed. We call the two types row-level `QualityChecker`s and task-level `QualityChecker`s, respectively. A `QualityChecker` can be `MANDATORY` or `OPTIONAL` and participates in the decision on whether quality checking passes if and only if it is `MANDATORY`. `OPTIONAL` `QualityChecker`s are informational only. Similarly to `Converter`s, more than one `QualityChecker` can be specified; in this case, quality checking passes if and only if all `MANDATORY` `QualityChecker`s give a `PASS`.
+
+#### Fork Operator
+
+A `ForkOperator` is a type of control operator that allows a task flow to branch into multiple streams, each of which goes to a separately configured sink. This is useful in situations where, for example, data records need to be written into multiple different storage systems, or need to be written out to the same storage (say, HDFS) but in different forms for different downstream consumers.
+
+#### Data Writer
+
+A `DataWriter` is responsible for writing data records to the sink it is associated with. Gobblin out-of-the-box provides an `AvroHdfsDataWriter` for writing data in [Avro](http://avro.apache.org/) format onto HDFS. Users can plug in their own `DataWriter`s by specifying a `DataWriterBuilder` class in the job configuration that Gobblin uses to build `DataWriter`s.
+
+#### Data Publisher
+A `DataPublisher` is responsible for publishing extracted data of a Gobblin job. Gobblin ships with a default `DataPublisher` that works with file-based `DataWriter`s such as the `AvroHdfsDataWriter` and moves data from the output directory of each task to a final job output directory. 
+
+Gobblin Task Flow
+--------------------------------
+
+Fig. 5 below zooms in further and shows the details on how different constructs are connected and composed to form a task flow. The same task flow is employed regardless of the deployment setting and where tasks are running.
+
+<p align="center">
+  <figure>
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-Task-Flow.png alt="Gobblin Image" width="600">
+    <figcaption><br>Fig. 5: Gobblin task flow.<br></figcaption>
+  </figure>
+</p>
+
+A Gobblin task flow consists of a main branch and a number of forked branches coming out of a `ForkOperator`. It is optional to specify a `ForkOperator` in the job configuration. When no `ForkOperator` is specified in the job configuration, a Gobblin task flow uses an `IdentityForkOperator` by default with a single forked branch. The `IdentityForkOperator` simply connects the master branch and the _single_ forked branch and passes schema and data records between them. The reason behind this is to avoid introducing special logic into the task flow for the case when a `ForkOperator` is indeed specified in the job configuration.
+     
+The master branch of a Gobblin task starts with schema extraction from the source. The extracted schema will go through a schema transformation phase if at least one `Converter` class is specified in the job configuration. The next phase is to repeatedly extract data records one at a time. Each extracted data record will also go through a transformation phase if at least one `Converter` class is specified. Each extracted (or converted if applicable) data record is fed into an optional list of row-level `QualityChecker`s.
+
+Data records that pass the row-level `QualityChecker`s will go through the `ForkOperator` and be further processed in the forked branches. The `ForkOperator` allows users to specify if the input schema or data record should go to a specific forked branch. If the input schema is specified _not_ to go into a particular branch, that branch will be ignored. If the input schema or data record is specified to go into _more than one_ forked branch, Gobblin assumes that the schema or data record class implements the `Copyable` interface and will attempt to make a copy before passing it to each forked branch. So it is very important to make sure the input schema or data record to the `ForkOperator` is an instance of `Copyable` if it is going into _more than one_ branch.
+
+Similarly to the master branch, a forked branch also processes the input schema and each input data record (one at a time) through an optional transformation phase and a row-level quality checking phase. Data records that pass the branch's row-level `QualityChecker`s will be written out to a sink by a `DataWriter`. Each forked branch has its own sink configuration and a separate `DataWriter`. 
+
+Upon successful processing of the last record, a forked branch applies an optional list of task-level `QualityChecker`s to the data processed by the branch in its entirety. If this quality checking passes, the branch commits the data and exits. 
+
+A task flow completes its execution once every forked branch commits and exits. During the execution of a task, a `TaskStateTracker` keeps track of the task's state and a core set of task metrics, e.g., total records extracted, records extracted per second, total bytes extracted, bytes extracted per second, etc.
+
+Job State Management
+--------------------------------
+Typically a Gobblin job runs periodically on some schedule and each run of the job extracts data incrementally, i.e., it extracts new data or changes to existing data within a specific range since the last run of the job. To make incremental extraction possible, Gobblin must persist the state of the job upon the completion of each run and load the state of the previous run so the next run knows where to start extracting. Gobblin maintains a state store that is responsible for job state persistence. Each run of a Gobblin job reads the state store for the state of the previous run and writes its own state to the state store upon completion. The state of a run of a Gobblin job consists of the job configuration and any properties set at runtime at the job or task level.
+
+Out-of-the-box, Gobblin uses an implementation of the state store that serializes job and task states into Hadoop `SequenceFile`s, one per job run. Each job has a separate directory where its job and task state `SequenceFile`s are stored. The file system on which the `SequenceFile`-based state store resides is configurable.   
+
+Handling of Failures
+--------------------------------
+As a fault-tolerant data ingestion framework, Gobblin employs multiple levels of defense against job and task failures. For job failures, Gobblin keeps track of the number of times a job fails consecutively and optionally sends out an alert email if the number exceeds a defined threshold, so the owner of the job can jump in and investigate the failures. For task failures, Gobblin retries failed tasks in a job run up to a configurable maximum number of times. In addition to that, Gobblin also provides an option to enable retries of `WorkUnit`s corresponding to failed tasks across job runs. The idea is that if a task still fails after all retries, the `WorkUnit` from which the task was created will be automatically included in the next run of the job if this type of retry is enabled. This type of retry is very useful in handling intermittent failures such as those due to a temporary data source outage.
+
+Job Scheduling
+--------------------------------
+As mentioned above, a Gobblin job typically runs periodically on some schedule. Gobblin can be integrated with job schedulers such as [Azkaban](http://azkaban.github.io/), [Oozie](http://oozie.apache.org/), or Crontab. Out-of-the-box, Gobblin also ships with a built-in job scheduler backed by a [Quartz](http://quartz-scheduler.org/) scheduler, which is used as the default job scheduler in the standalone deployment. An important feature of Gobblin is that it decouples the job scheduler and the jobs scheduled by the scheduler, such that different jobs may run in different deployment settings. This is achieved using the `JobLauncher` abstraction, which has different implementations for different deployment settings. For example, a job scheduler may have 5 jobs scheduled: 2 of them run locally on the same host as the scheduler using the `LocalJobLauncher`, whereas the other 3 run on a Hadoop cluster somewhere using the `MRJobLauncher`. Which `JobLauncher` to use can simply be configured using the property `launcher.type`.
\ No newline at end of file
diff --git a/Gobblin-Build-Options.md b/Gobblin-Build-Options.md
new file mode 100644
index 0000000..f7f42c1
--- /dev/null
+++ b/Gobblin-Build-Options.md
@@ -0,0 +1,69 @@
+# Table of Contents
+
+- [Introduction](#introduction)
+- [Options](#options)
+    - [Versions](#versions)
+      - [Hadoop Version](#hadoop-version)
+      - [Hive Version](#hive-version)
+      - [Pegasus Version](#pegasus-version)
+      - [Byteman Version](#byteman-version)
+    - [Exclude Hadoop Dependencies from `gobblin-dist.tar.gz`](#exclude-hadoop-dependencies-from-gobblin-disttargz)
+    - [Exclude Hive Dependencies from `gobblin-dist.tar.gz`](#exclude-hive-dependencies-from-gobblin-disttargz)
+- [Custom Gradle Tasks](#custom-gradle-tasks)
+    - [Print Project Dependencies](#print-project-dependencies)
+- [Useful Gradle Commands](#useful-gradle-commands)
+    - [Skipping Tests](#skipping-tests)
+
+# Introduction
+
+This page outlines all the options that can be specified when building Gobblin using Gradle. The typical way of building Gobblin is to run:
+```
+./gradlew build
+```
+However, there are a number of parameters that can be passed into the above command to customize the build process.
+
+# Options
+
+These options just need to be added to the command above to take effect.
+
+### Versions
+
+#### Hadoop Version
+
+The Hadoop version can be specified by adding the option `-PhadoopVersion=[my-hadoop-version]`. If using a Hadoop version over `2.0.0` the option `-PuseHadoop2` must also be added.
+
+#### Hive Version
+
+The Hive version can be specified by adding the option `-PhiveVersion=[my-hive-version]`.
+
+#### Pegasus Version
+
+The Pegasus version can be specified by adding the option `-PpegasusVersion=[my-pegasus-version]`.
+
+#### Byteman Version
+
+The Byteman version can be specified by adding the option `-PbytemanVersion=[my-byteman-version]`.
+
+### Exclude Hadoop Dependencies from `gobblin-dist.tar.gz`
+
+Add the option `-PexcludeHadoopDeps` to exclude all Hadoop libraries from `gobblin-dist.tar.gz`.
+
+### Exclude Hive Dependencies from `gobblin-dist.tar.gz`
+
+Add the option `-PexcludeHiveDeps` to exclude all Hive libraries from `gobblin-dist.tar.gz`.
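+
+For example, a build that targets a Hadoop 2 release, skips the unit tests, and excludes the Hadoop jars from the distribution tarball might be invoked as follows (the version number is illustrative):
+
+```
+./gradlew clean build -PhadoopVersion=2.3.0 -PuseHadoop2 -PexcludeHadoopDeps -x test
+```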
+
+# Custom Gradle Tasks
+
+A few custom built Gradle tasks.
+
+### Print Project Dependencies
+
+Executing this command will print out all the dependencies between the different Gobblin Gradle sub-projects: `./gradlew dotProjectDependencies`.
+
+# Useful Gradle Commands
+
+These commands make working with Gradle a little easier.
+
+### Skipping Tests
+
+Add `-x test` to the end of the build command.
\ No newline at end of file
diff --git a/Gobblin-Deployment.md b/Gobblin-Deployment.md
new file mode 100644
index 0000000..19d5d46
--- /dev/null
+++ b/Gobblin-Deployment.md
@@ -0,0 +1,161 @@
+Table of Contents
+--------------------
+* [Deployment Overview](#Deployment-Overview)
+* [Standalone Architecture](#Standalone-Architecture)
+* [Standalone Deployment](#Standalone-Deployment)
+* [Hadoop MapReduce Architecture](#Hadoop-MapReduce-Architecture)
+* [Hadoop MapReduce Deployment](#Hadoop-MapReduce-Deployment)
+
+Deployment Overview <a name="Deployment-Overview"></a>
+--------------------
+One important feature of Gobblin is that it can be run on different platforms. Currently, Gobblin can run in standalone mode (which runs on a single machine) and in Hadoop MapReduce mode (which runs on a Hadoop cluster; both Hadoop 1.x and Hadoop 2.x are supported). This page summarizes the different deployment modes of Gobblin. It is important to understand the architecture of Gobblin in a specific deployment mode, so this page also describes the architecture of each deployment mode.
+
+Gobblin supports Java 6 and up, and can run on either Hadoop 1.x or Hadoop 2.x. By default, Gobblin builds against Hadoop 1.x; to build against Hadoop 2.x, run `./gradlew -PuseHadoop2 clean build`. More information on how to build Gobblin can be found [here](https://github.com/linkedin/gobblin/blob/master/README.md). All directories/paths referred to below are relative to `gobblin-dist`.
+
+Standalone Architecture <a name="Standalone-Architecture"></a>
+--------------------
+The following diagram illustrates the Gobblin standalone architecture. In the standalone mode, a Gobblin instance runs in a single JVM and tasks run in a thread pool, the size of which is configurable. The standalone mode is good for light-weight data sources such as small databases. The standalone mode is also the default mode for trying and testing Gobblin. 
+
+<p align="center"><img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-on-Single-Node.png alt="Gobblin Image" width="700"></p>
+
+In the standalone deployment, the `JobScheduler` runs as a daemon process that schedules and runs jobs using the so-called `JobLauncher`s. The `JobScheduler` maintains a thread pool in which a new `JobLauncher` is started for each job run. Gobblin ships with two types of `JobLauncher`s, namely, the `LocalJobLauncher` and `MRJobLauncher` for launching and running Gobblin jobs on a single machine and on Hadoop MapReduce, respectively. Which `JobLauncher` to use can be configured on a per-job basis, which means the `JobScheduler` can schedule and run jobs in different deployment modes. This section will focus on the `LocalJobLauncher` for launching and running Gobblin jobs on a single machine. The `MRJobLauncher` will be covered in a later section on the architecture of Gobblin on Hadoop MapReduce.  
+
+Each `LocalJobLauncher` starts and manages a few components for executing tasks of a Gobblin job. Specifically, a `TaskExecutor` is responsible for executing tasks in a thread pool, whose size is configurable on a per-job basis. A `LocalTaskStateTracker` is responsible for keeping track of the state of running tasks, and in particular for updating the task metrics. The `LocalJobLauncher` follows the steps below to launch and run a Gobblin job:
+
+1. Starting the `TaskExecutor` and `LocalTaskStateTracker`.
+2. Creating an instance of the `Source` class specified in the job configuration and getting the list of `WorkUnit`s to do.
+3. Creating a task for each `WorkUnit` in the list, registering the task with the `LocalTaskStateTracker`, and submitting the task to the `TaskExecutor` to run.
+4. Waiting for all the submitted tasks to finish.
+5. Upon completion of all the submitted tasks, collecting tasks states and persisting them to the state store, and publishing the extracted data.  
+
+Standalone Deployment <a name="Standalone-Deployment"></a>
+--------------------
+
+Gobblin ships with a script `bin/gobblin-standalone.sh` for starting and stopping the standalone Gobblin daemon on a single node. Below is the usage of this launch script:
+
+```
+gobblin-standalone.sh <start | status | restart | stop> [OPTION]
+Where:
+  --workdir <job work dir>                       Gobblin's base work directory: if not set, taken from ${GOBBLIN_WORK_DIR}
+  --jars <comma-separated list of job jars>      Job jar(s): if not set, lib is examined
+  --conf <directory of job configuration files>  Directory of job configuration files: if not set, taken from ${GOBBLIN_JOB_CONFIG_DIR}
+  --help                                         Display this help and exit
+```
+
+In the standalone mode, the `JobScheduler`, upon startup, will pick up job configuration files from a user-defined directory and schedule the jobs to run. The job configuration file directory can be specified using the `--conf` command-line option of `bin/gobblin-standalone.sh` or through an environment variable named `GOBBLIN_JOB_CONFIG_DIR`. The `--conf` option takes precedence; if it is not set, the value of `GOBBLIN_JOB_CONFIG_DIR` is used. Note that this job configuration directory is different from `conf`, which stores Gobblin system configuration files containing deployment-specific configuration properties applicable to all jobs. In comparison, job configuration files store job-specific configuration properties such as the `Source` and `Converter` classes used.
+
+The `JobScheduler` is backed by a [Quartz](http://quartz-scheduler.org/) scheduler and it supports cron-based triggers using the configuration property `job.schedule` for defining the cron schedule. Please refer to this [tutorial](http://quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-06) for more information on how to use and configure a cron-based trigger.  
+ 
+Gobblin needs a working directory at runtime, which can be specified using the command-line option `--workdir` of `bin/gobblin-standalone.sh` or an environment variable named `GOBBLIN_WORK_DIR`. The `--workdir` option takes precedence; if it is not set, the value of `GOBBLIN_WORK_DIR` is used. Once started, Gobblin will create some subdirectories under the root working directory, as follows:
+```
+GOBBLIN_WORK_DIR\
+    task-staging\ # Staging area where data pulled by individual tasks lands
+    task-output\  # Output area where data pulled by individual tasks lands
+    job-output\   # Final output area of data pulled by jobs
+    state-store\  # Persisted job/task state store
+    metrics\      # Metrics store (in the form of metric log files), one subdirectory per job.
+```
+
+Before starting the Gobblin standalone daemon, make sure the environment variable `JAVA_HOME` is properly set to point to the home directory of the Java Runtime Environment (JRE) of choice. When starting the JVM process of the Gobblin standalone daemon, a default set of jars will be included on the `classpath`. Additional jars needed by your Gobblin jobs can be specified as a comma-separated list using the command-line option `--jars` of `bin/gobblin-standalone.sh`. If the `--jars` option is not set, only the jars under `lib` will be included.
+
+Below is a summary of the environment variables that may be set for standalone deployment.
+
+* `GOBBLIN_JOB_CONFIG_DIR`: this variable defines the directory where job configuration files are stored. 
+* `GOBBLIN_WORK_DIR`: this variable defines the working directory for Gobblin to operate.
+* `JAVA_HOME`: this variable defines the path to the home directory of the Java Runtime Environment (JRE) used to run the daemon process.
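+
+For example, a typical standalone session might look like the following (all paths are illustrative):
+
+```
+# Point Gobblin at the job configuration and working directories, then start the daemon
+export JAVA_HOME=/usr/lib/jvm/java-8
+export GOBBLIN_JOB_CONFIG_DIR=/home/gobblin/job-conf
+export GOBBLIN_WORK_DIR=/home/gobblin/work-dir
+bin/gobblin-standalone.sh start
+```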
+
+To start the Gobblin standalone daemon, run the following command:
+```
+bin/gobblin-standalone.sh start [OPTION]
+```
+After the Gobblin standalone daemon is started, the logs can be found under `logs`. Gobblin uses [SLF4J](http://www.slf4j.org/) and the [slf4j-log4j12](http://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12) binding for logging. The [log4j](http://logging.apache.org/log4j/1.2/) configuration can be found at `conf/log4j-standalone.xml`.
+
+By default, the Gobblin standalone daemon uses the following JVM settings. Change the settings in `bin/gobblin-standalone.sh` if necessary for your deployment.
+
+```
+-Xmx2g -Xms1g
+-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
+-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution
+-XX:+UseCompressedOops
+-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<gobblin log dir>
+```
+
+To restart the Gobblin standalone daemon, run the following command:
+```
+bin/gobblin-standalone.sh restart [OPTION]
+```
+
+To stop the running Gobblin standalone daemon, run the following command:
+```
+bin/gobblin-standalone.sh stop
+```
+
+If there are any additional jars that any jobs depend on, the jars can be added to the classpath using the `--jars` option.
+
+The script also supports checking the status of the running daemon process using the `bin/gobblin-standalone.sh status` command.
+
+Hadoop MapReduce Architecture <a name="Hadoop-MapReduce-Architecture"></a>
+--------------------
+The diagram below shows the architecture of Gobblin on Hadoop MapReduce. As the diagram shows, a Gobblin job runs as a mapper-only MapReduce job that runs tasks of the Gobblin job in the mappers. The basic idea here is to use the mappers purely as _containers_ to run Gobblin tasks. This design also makes it easier to integrate with YARN. Unlike in the standalone mode, task retries are not handled by Gobblin itself in the Hadoop MapReduce mode. Instead, Gobblin relies on the task retry mechanism of Hadoop MapReduce.
+
+<p align="center"><img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-on-Hadoop-MR.png alt="Gobblin Image" width="700"></p>
+
+In this mode, a `MRJobLauncher` is used to launch and run a Gobblin job on Hadoop MapReduce, following the steps below:
+
+1. Creating an instance of the `Source` class specified in the job configuration and getting the list of `WorkUnit`s to do.
+2. Serializing each `WorkUnit` into a file on HDFS that will be read later by a mapper.
+3. Creating a file that lists the paths of the files storing serialized `WorkUnit`s.
+4. Creating and configuring a mapper-only Hadoop MapReduce job that takes the file created in step 3 as input.
+5. Starting the MapReduce job to run on the cluster of choice and waiting for it to finish.
+6. Upon completion of the MapReduce job, collecting tasks states and persisting them to the state store, and publishing the extracted data. 
+
+A mapper in a Gobblin MapReduce job runs one or more tasks, depending on the number of `WorkUnit`s to do and the (optional) maximum number of mappers specified in the job configuration. If there is no maximum number of mappers specified in the job configuration, each `WorkUnit` corresponds to one task that is executed by one mapper and each mapper only runs one task. Otherwise, if a maximum number of mappers is specified and there are more `WorkUnit`s than the maximum number of mappers allowed, each mapper may handle more than one `WorkUnit`. There is also a special type of `WorkUnit` named `MultiWorkUnit` that groups multiple `WorkUnit`s to be executed together in one batch in a single mapper.
+
+A mapper in a Gobblin MapReduce job follows the steps below to run the tasks assigned to it:
+
+1. Starting the `TaskExecutor` that is responsible for executing tasks in a configurable-size thread pool and the `MRTaskStateTracker` that is responsible for keeping track of the state of running tasks in the mapper.
+2. Reading the next input record that is the path to the file storing a serialized `WorkUnit`.
+3. Deserializing the `WorkUnit` and adding it to the list of `WorkUnit`s to do. If the input is a `MultiWorkUnit`, the `WorkUnit`s it wraps are all added to the list. Steps 2 and 3 are repeated until all assigned `WorkUnit`s are deserialized and added to the list.
+4. For each `WorkUnit` on the list of `WorkUnit`s to do, creating a task for the `WorkUnit`, registering the task with the `MRTaskStateTracker`, and submitting the task to the `TaskExecutor` to run. Note that the tasks may run in parallel if the `TaskExecutor` is [configured](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary#taskexecutorthreadpoolsize) to have more than one thread in its thread pool.
+5. Waiting for all the submitted tasks to finish.
+6. Upon completion of all the submitted tasks, writing out the state of each task into a file that will be read by the `MRJobLauncher` when collecting task states.
+7. Going back to step 2 and reading the next input record if available.
+
+Hadoop MapReduce Deployment <a name="Hadoop-MapReduce-Deployment"></a>
+--------------------
+Gobblin out-of-the-box ships with a script `bin/gobblin-mapreduce.sh` for launching a Gobblin job on Hadoop MapReduce. Below is the usage of this launch script:
+
+```
+Usage: gobblin-mapreduce.sh [OPTION] --conf <job configuration file>
+Where OPTION can be:
+  --jt <job tracker / resource manager URL>      Job submission URL: if not set, taken from ${HADOOP_HOME}/conf
+  --fs <file system URL>                         Target file system: if not set, taken from ${HADOOP_HOME}/conf
+  --jars <comma-separated list of job jars>      Job jar(s): if not set, lib is examined
+  --workdir <job work dir>                       Gobblin's base work directory: if not set, taken from ${GOBBLIN_WORK_DIR}
+  --projectversion <version>                     Gobblin version to be used. If set, overrides the distribution build version
+  --logdir <log dir>                             Gobblin's log directory: if not set, taken from ${GOBBLIN_LOG_DIR} if present. Otherwise ./logs is used
+  --help                                         Display this help and exit
+```
+
+It is assumed that you already have Hadoop (both MapReduce and HDFS) set up and running somewhere. Before launching any Gobblin jobs on Hadoop MapReduce, check the Gobblin system configuration file located at `conf/gobblin-mapreduce.properties` for the property `fs.uri`, which defines the file system URI used. The default value is `hdfs://localhost:8020`, which points to the local HDFS on the default port 8020. Change it to the right value depending on your Hadoop/HDFS setup. For example, if you have HDFS set up somewhere on port 9000, then set the property as follows:
+
+```
+fs.uri=hdfs://<namenode host name>:9000/
+```
+
+Note that if the option `--fs` of `bin/gobblin-mapreduce.sh` is set, the value of `--fs` should be consistent with the value of `fs.uri`. 
+
+All job data and persisted job/task states will be written to the specified file system. Before launching any jobs, make sure the environment variable `HADOOP_BIN_DIR` is set to point to the `bin` directory under the Hadoop installation directory. Similarly to the standalone deployment, the Hadoop MapReduce deployment also needs a working directory, which can be specified using the command-line option `--workdir` of `bin/gobblin-mapreduce.sh` or the environment variable `GOBBLIN_WORK_DIR`. Note that the Gobblin working directory will be created on the file system specified above. Below is a summary of the environment variables that may be set for deployment on Hadoop MapReduce:
+
+* `GOBBLIN_WORK_DIR`: this variable defines the working directory for Gobblin to operate.
+* `HADOOP_BIN_DIR`: this variable defines the path to the `bin` directory under the Hadoop installation directory.
+
+This setup will have the minimum set of jars Gobblin needs to run the job added to the Hadoop `DistributedCache` for use in the mappers. If a job has additional jars needed for task executions (in the mappers), those jars can also be included by using the `--jars` option of `bin/gobblin-mapreduce.sh` or the following job configuration property in the job configuration file:
+
+```
+job.jars=<comma-separated list of jars the job depends on>
+```
+
+The `--projectversion` option controls which version of the Gobblin jars to look for. Typically, this value is dynamically set during the build process. Users should use the `bin/gobblin-mapreduce.sh` script that is copied into the `gobblin-distribution-[project-version].tar.gz` file. This version of the script has the project version already set, in which case users do not need to specify the `--projectversion` parameter. If users want to use the `gobblin/bin/gobblin-mapreduce.sh` script, they have to specify this parameter.
+
+The `--logdir` parameter controls the directory log files are written to. If it is not set, log files are written under the `./logs` directory.
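+
+Putting these options together, an example invocation might look like the following (host names, ports, and paths are illustrative):
+
+```
+bin/gobblin-mapreduce.sh \
+  --conf /home/gobblin/job-conf/wikipedia.pull \
+  --fs hdfs://namenode:9000/ \
+  --workdir /gobblin-work-dir \
+  --jars /home/gobblin/lib/extra-job-lib.jar \
+  --logdir /home/gobblin/logs
+```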
\ No newline at end of file
diff --git a/Gobblin-Metrics-Architecture.md b/Gobblin-Metrics-Architecture.md
new file mode 100644
index 0000000..bf1df16
--- /dev/null
+++ b/Gobblin-Metrics-Architecture.md
@@ -0,0 +1,67 @@
+![Gobblin Metrics Architecture Diagram](https://raw.githubusercontent.com/wiki/linkedin/gobblin/images/Gobblin-Metrics-Architecture.png)
+
+Metric Context
+==============
+
+Metric contexts are organized hierarchically in a tree. Each metric context has a set of Tags, each of which is just a key-value pair. The keys of all tags are strings, while the values are allowed to be of any type. However, most reporters will serialize the tag values using their `toString()` method.
+
+Child contexts automatically inherit the tags of their parent context, and can add more tags or override tags present in the parent. Tags can only be defined during construction of each metric context, and are immutable afterwards. This simplifies the inheritance and overriding of tags.
+
+Metric Contexts are created using `MetricContext.Builder`, which allows adding tags and specifying the parent. This is the only time tags can be added to the context. When building, the tags of the parent and the new tags are merged to obtain the final tags for this context. When building a child context for Metric Context `context`, calling `context.childBuilder(String)` generates a Builder with the correct parent.
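+
+As a rough sketch of the above (assuming a static `builder` factory on `MetricContext` and an `addTag` method on the builder; exact signatures may vary between Gobblin versions):
+
+```java
+// Assumes: import gobblin.metrics.MetricContext; import gobblin.metrics.Tag;
+// Root context for a job, carrying a tag that all child contexts will inherit.
+MetricContext jobContext = MetricContext.builder("JobContext")
+    .addTag(new Tag<String>("jobName", "PullFromWikipedia"))
+    .build();
+
+// Child context for a task: it inherits "jobName" and adds its own tag.
+MetricContext taskContext = jobContext.childBuilder("TaskContext")
+    .addTag(new Tag<String>("taskId", "task_0"))
+    .build();
+```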
+
+Each metric context contains the following instance variables:
+* A `String` `name`. The name is not used by the core metrics engine, but can be accessed by users to identify the context.
+* A reference to the parent metric context, or null if it has no parent.
+* A list of child metric context references, stored as soft references.
+* An object of type [Tagged](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/Tagged.java) containing the tags for this metric context.
+* A `Set` of notification targets. Notification targets are objects of type [Function](http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/base/Function.html)<[Notification](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/notification/Notification.java), Void> which are all called every time there is a new notification. Notifications can be submitted to the Metric Context using the method `sendNotification(Notification)`. Notification targets can be added using `addNotificationTarget(Function<Notification, Void>)`.
+* A lazily instantiated `ExecutorService` used for asynchronously executing the notification targets. The executor service will only be started the first time there is a notification and the number of notification targets is positive.
+* A `ConcurrentMap` from metric names to `Metric` for all metrics registered in this Metric Context. Metrics can be added to this map using `register(Metric)`, `register(String, Metric)`, or `registerAll(MetricSet)`, although it is recommended to instead use the Metric Context's factory methods to create and register the metrics. Metric Context implements getter methods for all metrics, as well as for each type of metric individually (`getMetrics`, `getGauges`, `getCounters`, `getHistograms`, `getMeters`, `getTimers`).
+
+Metrics
+=======
+
+All metrics extend the interface [ContextAwareMetric](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/ContextAwareMetric.java). Each metric type in Dropwizard Metrics is extended to a Context Aware type: `ContextAwareCounter`, `ContextAwareGauge`, `ContextAwareHistogram`, `ContextAwareMeter`, `ContextAwareTimer`.
+
+Context Aware metrics are always created from the Metric Context where they will be registered. For example, to get a counter under Metric Context `context`, the user would call `context.counter("counter.name")`. This method first checks all registered metrics in the Metric Context for a counter with that name; if it finds one, it simply returns that counter. If a counter with that name has not been registered in `context`, then a new `ContextAwareCounter` is created and registered in `context`.
+
+On creation, each Context Aware metric (except Gauges) checks if its parent Metric Context has parents itself. If so, it automatically creates a metric of the same type, with the same name, in that parent. This is repeated recursively until all ancestor Metric Contexts contain a context aware metric of the same type and with the same name. Every time the context aware metric is updated, it automatically calls the same update method, with the same update value, on its parent metric. Again, this continues recursively until the corresponding metrics in all ancestor metric contexts are updated by the same value. If multiple children of a metric context `context` all have metrics with the same name, when any of them is updated, the corresponding metric in `context` also gets updated. In this way, the corresponding metric in `context` aggregates all updates to the metrics in the children contexts.
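+
+Continuing the sketch above, a counter created on a child context automatically drives the same-named counter in its ancestors (a sketch; `ContextAwareCounter` behaves like a Dropwizard `Counter`):
+
+```java
+// Assumes: import gobblin.metrics.ContextAwareCounter; plus the contexts built in the earlier sketch.
+ContextAwareCounter recordsRead = taskContext.counter("gobblin.example.records.read");
+
+recordsRead.inc(20);  // updates the task-level counter and, recursively, the job-level one
+
+// The same-named counter on the parent context now reflects the aggregated value (20).
+long jobLevelCount = jobContext.counter("gobblin.example.records.read").getCount();
+```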
+
+Users can also register objects of type `com.codahale.metrics.Metric` with any Metric Context, but they will not be auto-aggregated.
+
+Events
+======
+
+Events are objects of type [GobblinTrackingEvent](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/avro/GobblinTrackingEvent.avsc), which is a type generated from an Avro schema. Events have:
+* A `namespace`.
+* A `name`.
+* A `timestamp`.
+* A `Map<String,String>` of `metadata`.
+
+Events are submitted using the `MetricContext#submitEvent(GobblinTrackingEvent)` method. When called, this method packages the event into an [EventNotification](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/notification/EventNotification.java) and submits it to the metric context using the method `MetricContext#sendNotification(Notification)`. This notification is passed to all ancestor metric contexts. Each notification target of each ancestor metric context will receive the EventNotification. Events are not stored by any Metric Context, so the notification targets need to handle these events appropriately.
+
+Events can be created manually using Avro constructors and submitted using the method `context.submitEvent(GobblinTrackingEvent)`, but this is unfriendly when trying to build events incrementally, especially when using metadata. To address this, users can instead use [EventSubmitter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/event/EventSubmitter.java), which is an abstraction around the Avro constructor for GobblinTrackingEvent.
+
+Event Submitter
+---------------
+
+An event submitter is created using an `EventSubmitter.Builder`. It is associated with a Metric Context where it will submit all events, and it contains a `namespace` and default `metadata` that will be applied to all events generated through the event submitter. The user can then call `EventSubmitter#submit` which will package the event with the provided metadata and submit it to the Metric Context.
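+
+For example, a minimal sketch (the namespace, metadata, and event name are hypothetical) of building an event submitter and submitting an event through it:
+
+```java
+import gobblin.metrics.MetricContext;
+import gobblin.metrics.event.EventSubmitter;
+
+public class EventSubmitterExample {
+  public static void main(String[] args) {
+    MetricContext context = MetricContext.builder("ExampleContext").build();
+
+    // Default metadata added here is attached to every event submitted through this submitter.
+    EventSubmitter eventSubmitter = new EventSubmitter.Builder(context, "example.namespace")
+        .addMetadata("datasetUrn", "example_dataset")
+        .build();
+
+    // Package and submit an event, adding extra metadata for this particular event.
+    eventSubmitter.submit("FilePublished", "filePath", "/data/example/file.avro");
+  }
+}
+```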
+
+Reporters
+=========
+
+Reporters export the metrics and/or events of a metric context to a sink. Reporters extend the interface `com.codahale.metrics.Reporter`. Most reporters will attach themselves to a Metric Context. The reporter can then navigate the Metric Context tree where the Metric Context belongs, get tags and metrics, get notified of events, and export them to the sink.
+
+The two best entry points for developing reporters are [RecursiveScheduledMetricReporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/RecursiveScheduledMetricReporter.java) and [EventReporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/EventReporter.java). These classes do most of the heavy lifting for reporting metrics and events, respectively. They are both scheduled reporters, meaning they will export their metrics / events following a configurable schedule.
+
+RecursiveScheduledMetricReporter
+--------------------------------
+
+This abstract reporter base is used for emitting metrics on a schedule. On creation, the reporter is attached to a particular Metric Context. Every time the reporter is required to emit metrics, it selects the attached Metric Context and all of its descendant Metric Contexts. For each of these metric contexts, it queries the Metric Context for all metrics, filtered by an optional user-supplied filter, and then calls `RecursiveScheduledMetricReporter#report`, providing the method with the appropriate metrics and tags. Developers need only implement the report method.
+
+EventReporter
+-------------
+
+This abstract reporter base is used for emitting events. The EventReporter, on creation, takes a Metric Context it should listen to. It registers a callback function as a notification target for that Metric Context. Every time the callback is called, if the notification is of type `EventNotification`, the EventReporter unpacks the event and adds it to a `LinkedBlockingQueue` of events.
+
+On a configurable schedule, the event reporter calls the abstract method `EventReporter#reportEventQueue(Queue<GobblinTrackingEvent>)`, which should be implemented by the concrete subclass. To bound memory usage, the event queue has a maximum size; whenever the queue reaches two thirds of that maximum, `EventReporter#reportEventQueue` is called immediately.
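+
+To illustrate the queue behavior described above, the following standalone sketch (an illustration of the idea, not the actual `EventReporter` code) drains a bounded queue on a schedule and also flushes it early once it reaches two thirds of its capacity:
+
+```java
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+public class QueueFlushSketch {
+  private static final int MAX_QUEUE_SIZE = 100;
+
+  private final LinkedBlockingQueue<String> events = new LinkedBlockingQueue<>(MAX_QUEUE_SIZE);
+  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
+
+  public QueueFlushSketch() {
+    // Report on a fixed schedule, analogous to a scheduled reporter.
+    this.scheduler.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.SECONDS);
+  }
+
+  public void addEvent(String event) {
+    this.events.offer(event);
+    // Flush early when the queue reaches two thirds of its maximum size.
+    if (this.events.size() >= 2 * MAX_QUEUE_SIZE / 3) {
+      flush();
+    }
+  }
+
+  private synchronized void flush() {
+    List<String> batch = new ArrayList<>();
+    this.events.drainTo(batch);
+    batch.forEach(System.out::println); // A real reporter would write the batch to its sink.
+  }
+}
+```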
\ No newline at end of file
diff --git a/Gobblin-Metrics-Performance.md b/Gobblin-Metrics-Performance.md
new file mode 100644
index 0000000..9950033
--- /dev/null
+++ b/Gobblin-Metrics-Performance.md
@@ -0,0 +1,50 @@
+Generalities
+============
+These are the main resources used by Gobblin Metrics:
+* CPU time for updating metrics: scales with number of metrics and frequency of metric update
+* CPU time for metric emission and lifecycle management: scales with number of metrics and frequency of emission
+* Memory for storing metrics: scales with number of metrics and metric contexts
+* I/O for reporting metrics: scales with number of metrics and frequency of emission
+* External resources for metrics emission (e.g. HDFS space, Kafka queue space, etc.): scales with number of metrics and frequency of emission
+
+This page focuses on the CPU time for updating metrics, as these updates are usually in the critical performance path of an application. Each metric requires bounded memory, and having a few metrics should have no major effect on memory usage. Metrics and Metric Contexts are cleaned when no longer needed to further reduce this impact. Resources related to metric emission can always be reduced by reporting fewer metrics or decreasing the reporting frequency when necessary.
+
+How to interpret these numbers
+==============================
+This document provides the maximum QPS (updates per second) achievable by Gobblin Metrics. If the application attempts to update metrics at a higher rate than this, the metrics will effectively throttle the application. If, on the other hand, the application only updates metrics at 10% or less of the maximum QPS, the performance impact of Gobblin Metrics should be minimal.
+
+### What if I need larger QPS?
+If your application needs larger QPS, the recommendation is to batch metric updates (see the sketch below). Counters and Meters offer the option to increase their values by multiple units at a time. Histograms and Timers do not offer this option, but for very high throughput applications, randomly recording, for example, only 10% of the values will not affect the statistics significantly (although you will have to adjust timer and histogram counts manually).
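+
+As a sketch of such batching (the metric names are hypothetical, and a `meter(String)` factory method analogous to `counter(String)` is assumed), instead of calling `inc()` or `mark()` once per record, accumulate a local count and apply it in one call:
+
+```java
+import gobblin.metrics.MetricContext;
+
+public class BatchedUpdatesExample {
+  public static void main(String[] args) {
+    MetricContext context = MetricContext.builder("HighThroughputContext").build();
+
+    long localRecordCount = 0;
+    for (int i = 0; i < 1_000_000; i++) {
+      localRecordCount++;                // Cheap local accumulation in the hot loop.
+      if (localRecordCount == 10_000) {  // Apply the whole batch with a single metric update.
+        context.counter("records.read").inc(localRecordCount);
+        context.meter("records.rate").mark(localRecordCount);
+        localRecordCount = 0;
+      }
+    }
+    if (localRecordCount > 0) {          // Flush any remainder.
+      context.counter("records.read").inc(localRecordCount);
+      context.meter("records.rate").mark(localRecordCount);
+    }
+  }
+}
+```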
+
+Update Metrics Performance
+==========================
+Metric updates are the most common interaction with Gobblin Metrics in an application. Every time a counter is increased, a meter is marked, or entries are added to histograms and timers, an update happens. As such, metric updates are the most likely to impact application performance.
+
+We measured the maximum number of metric updates that can be executed per second. Performance differs across metric types, and it also depends on the depth in the Metric Context tree at which a metric is created: metrics in the Root Metric Context are the fastest, while metrics deep in the tree are slower because they have to update all of their ancestors as well. The following table shows reference maximum QPS (updates per second) as well as the equivalent single-update delay in nanoseconds for each metric type on an i7 processor:
+
+| Metric | Root level | Depth: 1 | Depth: 2 | Depth: 3 |
+|--------|------------|----------|----------|----------|
+| Counter | 76M (13ns) | 39M (25ns) | 29M (34ns) | 24M (41ns) |
+| Meter | 11M (90ns) | 7M (142ns) | 4.5M (222ns) | 3.5M (285ns) |
+| Histogram | 2.4M (416ns) | 2.4M (416ns) | 1.8M (555ns) | 1.3M (769ns) |
+| Timer | 1.4M (714ns) | 1.4M (714ns) | 1M (1us) | 1M (1us) |
+
+Multiple metric updates per iteration
+-------------------------------------
+If a single thread updates multiple metrics, the average delay for metric updates will be the sum of the delays of each metric independently. For example, if on each iteration the application updates two counters, one timer, and one histogram at the root metric context level, the total delay will be `13ns + 13ns + 416ns + 714ns = 1156ns`, for a max QPS of about `865k`.
+
+Multi-threading
+---------------
+Updating metrics with different names can be parallelized efficiently: different threads updating metrics with different names will not interfere with each other. However, multiple threads updating metrics with the same names will interfere with each other, because the updates of common ancestor metrics are synchronized (to provide auto-aggregation). In experiments we observed that updating metrics with the same name from multiple threads increases the maximum QPS sub-linearly, saturating at about 3x the single-threaded QPS, i.e. the total QPS of metric updates across any number of threads will not go above 3x the numbers shown in the table above.
+
+On the other hand, if each thread is updating multiple metrics, the updates might interleave with each other, potentially increasing the maximum total QPS. In the example with two counters, one timer, and one histogram, one thread could be updating the timer while another updates the histogram, reducing interference, but never exceeding the max QPS of the single most expensive metric. Note that there is no optimization in the code to produce this interleaving; it is merely an effect of synchronization, so the effect might vary.
+
+Running Performance Tests
+-------------------------
+To run the performance tests:
+```bash
+cd gobblin-metrics
+../gradlew performance
+```
+
+After finishing, it should create a TestNG report at `build/gobblin-metrics/reports/tests/packages/gobblin.metrics.performance.html`. Nicely printed performance results are available on the Output tab. 
diff --git a/Gobblin-Metrics.md b/Gobblin-Metrics.md
new file mode 100644
index 0000000..24b4b9b
--- /dev/null
+++ b/Gobblin-Metrics.md
@@ -0,0 +1,101 @@
+Gobblin Metrics is a metrics library for emitting metrics and events to instrument Java applications. 
+Metrics and events are easy to use and enriched with tags. Metrics allow full granularity, auto-aggregation, and configurable 
+reporting schedules. Gobblin Metrics is based on [Dropwizard Metrics](http://metrics.dropwizard.io/), enhanced to better support 
+modular applications (by providing hierarchical, auto-aggregated metrics) and their monitoring / auditing.
+
+Quick Start
+===========
+
+The following code excerpt shows the functionality of Gobblin Metrics.
+
+```java
+// ========================================
+// METRIC CONTEXTS
+// ========================================
+
+// Create a Metric context with a Tag
+MetricContext context = MetricContext.builder("MyMetricContext").addTag(new Tag<Integer>("key", value)).build();
+// Create a child metric context. It will automatically inherit tags from parent.
+// All metrics in the child context will be auto-aggregated in the parent context.
+MetricContext childContext = context.childBuilder("childContext").build();
+
+// ========================================
+// METRICS
+// ========================================
+
+// Create a reporter for metrics. This reporter will write metrics to STDOUT.
+OutputStreamReporter.Factory.newBuilder().build(new Properties());
+// Start all metric reporters.
+RootMetricContext.get().startReporting();
+
+// Create a counter.
+Counter counter = childContext.counter("my.counter.name");
+// Increase the counter. The next time metrics are reported, "my.counter.name" will be reported as 1.
+counter.inc();
+
+// ========================================
+// EVENTS
+// ========================================
+
+// Create a reporter for events. This reporter will write events to STDOUT.
+ScheduledReporter eventReporter = OutputStreamEventReporter.forContext(context).build();
+eventReporter.start();
+
+// Create an event submitter, can include default metadata.
+EventSubmitter eventSubmitter = new EventSubmitter.Builder(context, "events.namespace").addMetadata("metadataKey", "value").build();
+// Submit an event. Its metadata will contain all tags in context, all metadata in eventSubmitter,
+// and any additional metadata specified in the call.
+// This event will be displayed the next time the event reporter flushes.
+eventSubmitter.submit("EventName", "additionalMetadataKey", "value");
+```
+
+Metric Contexts
+===============
+
+A metric context is a context from which users can emit metrics and events. These contexts contain a set of tags, each tag 
+being a key-value pair. Contexts are hierarchical in nature: each context has one parent and children. They automatically 
+inherit the tags of their parent, and can define or override more tags.
+
+Generally, a metric context is associated with a specific instance of an object that should be instrumented. 
+Different instances of the same object will have separate instrumentations. However, each context also aggregates 
+all metrics defined by its descendants, providing a full range of granularities for reporting. 
+With this functionality, if, for example, an application has 10 different data writers, users can monitor each writer 
+individually, or all of them at the same time.
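+
+As a sketch of this per-instance instrumentation (the writer names, tag, and metric name are hypothetical), each writer instance gets its own child context tagged with its index, while the shared parent aggregates across all of them:
+
+```java
+import gobblin.metrics.MetricContext;
+import gobblin.metrics.Tag;
+
+public class PerWriterInstrumentation {
+  public static void main(String[] args) {
+    MetricContext writersContext = MetricContext.builder("Writers").build();
+
+    for (int i = 0; i < 10; i++) {
+      // One child context per writer instance, tagged with the writer index.
+      MetricContext writerContext = writersContext.childBuilder("Writer-" + i)
+          .addTag(new Tag<Integer>("writerIndex", i))
+          .build();
+      // Per-writer counter; the parent "Writers" context aggregates across all writers.
+      writerContext.counter("records.written").inc(100);
+    }
+
+    System.out.println(writersContext.counter("records.written").getCount()); // 1000
+  }
+}
+```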
+
+Metrics
+=======
+
+Metrics are used to monitor the progress of an application. Metrics are emitted regularly following a schedule and represent 
+the current state of the application. The metrics supported by Gobblin Metrics are the same ones as those supported 
+by [Dropwizard Metrics Core](http://metrics.dropwizard.io/3.1.0/manual/core/), adapted for tagging and auto-aggregation. 
+The types supported are:
+* Counter: simple long counter.
+* Meter: counter with added computation of the rate at which the counter is changing.
+* Histogram: stores a histogram of a value, divides all of the values observed into buckets, and reports the count for each bucket.
+* Timer: a histogram for timing information.
+* Gauge: simply stores a value. Gauges are not auto-aggregated because the aggregation operation is context-dependent.
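+
+A minimal sketch of updating each type (this assumes `MetricContext` exposes `meter`, `histogram`, and `timer` factory methods analogous to `counter`, mirroring Dropwizard's `MetricRegistry`; the metric names and values are made up):
+
+```java
+import java.util.concurrent.TimeUnit;
+
+import gobblin.metrics.MetricContext;
+
+public class MetricTypesExample {
+  public static void main(String[] args) {
+    MetricContext context = MetricContext.builder("TypesExample").build();
+
+    context.counter("files.processed").inc();                       // Counter
+    context.meter("records.rate").mark(100);                        // Meter
+    context.histogram("record.size.bytes").update(512);             // Histogram
+    context.timer("write.time").update(35, TimeUnit.MILLISECONDS);  // Timer
+  }
+}
+```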
+
+Events
+======
+
+Events are fire-and-forget messages indicating a milestone in the execution of an application, 
+along with metadata that can provide further information about that event (all tags of the metric context used to generate 
+the event are also added as metadata).
+
+Reporters
+=========
+
+Reporters output metrics and events to particular sinks following a configurable schedule. Event and metric reporters are kept separate to give users more control in case they want to emit metrics and events to separate sinks (for example, different files). Reporters for a few sinks are implemented by default, but additional sinks can be supported by extending `RecursiveScheduledMetricReporter` and `EventReporter`. Each of the included reporters has a simple builder.
+
+The metric reporter implementations included with Gobblin Metrics are:
+* OutputStreamReporter: Supports any output stream, including STDOUT and files.
+* KafkaReporter: Emits metrics to a Kafka topic as Json messages.
+* KafkaAvroReporter: Emits metrics to a Kafka topic as Avro messages.
+* InfluxDBReporter: Emits metrics to Influx DB.
+* GraphiteReporter: Emits metrics to Graphite.
+* HadoopCounterReporter: Emits metrics as Hadoop counters.
+
+The event reporter implementations included with Gobblin metrics are:
+* OutputStreamEventReporter: Supports any output stream, including STDOUT and files.
+* KafkaEventReporter: Emits events to Kafka as Json messages.
+* KafkaEventAvroReporter: Emits events to Kafka as Avro messages.
\ No newline at end of file
diff --git a/Gobblin-Metrics:-next-generation-instrumentation-for-applications.md b/Gobblin-Metrics:-next-generation-instrumentation-for-applications.md
new file mode 100644
index 0000000..88cb757
--- /dev/null
+++ b/Gobblin-Metrics:-next-generation-instrumentation-for-applications.md
@@ -0,0 +1,33 @@
+<p>
+Long running, complex applications are prone to operational issues. Good instrumentation, monitoring, and accessible historical information about their execution help diagnose these issues, and often even prevent them. For Gobblin ingestion, we wanted to add this instrumentation to all parts of the application. Some of the requirements we had were:
+<ul>
+<li> Report progress of the ingestion processing for each job, task, and module. Many reports would be almost identical, just covering different instances of the same module.
+<li> Report major milestones in the processing: when a Gobblin job starts, when the ingestion of a dataset finishes, when files of a dataset get committed, etc.
+<li> Provide various levels of granularity: total aggregations give a quick view of the performance of the application, but detailed, instance-level reports are essential for debugging.
+<li> Easily switch between sinks where reports and events are emitted.
+<li> Generate queryable reports.
+</ul>
+Among existing solutions, we found <a href="http://metrics.dropwizard.io/">Dropwizard Metrics</a> to be the closest to what we needed, but it was not enough, so we developed Gobblin Metrics.
+</p>
+
+<p>
+Gobblin Metrics is a metrics library based on Dropwizard Metrics, but it extends it considerably to provide features that make monitoring and execution auditing easy. The library is designed for modular applications: the application is a set of module instances, organized hierarchically. Following this pattern, the metrics library uses Metric Contexts organized hierarchically to instrument instances of classes and modules (see the figure below for an example of this hierarchy for Gobblin ingestion). Each metric context in the tree contains a set of tags describing the context where particular metrics are being collected. Children in this tree automatically inherit the tags of their parents, giving a rich description of each instrumented object in the application. Of course, Gobblin Metrics is not limited to this kind of application; we have taken advantage of all the other features of the library in much flatter programs.
+</p>
+
+<img src="https://raw.githubusercontent.com/wiki/linkedin/gobblin/images/Gobblin-Metrics-Example.png" alt="Gobblin Metrics Example">
+
+<p>
+Each metric context manages a set of metrics (like counters, timers, meters, and histograms), providing information, for instance, on the throughput of each reader and writer, serialization/deserialization times, etc. Metrics are automatically aggregated in the metric context tree: for example, while each writer computes its throughput independently, we also compute in real time the throughput across each task (containing many writers) and each job (containing many tasks).
+</p>
+
+<p>
+  Gobblin Metrics also introduces the concept of events. Events are fire-and-forget reports of milestones in the execution, enriched with metadata relevant to that milestone, plus all of the context information derived from tags. For example, every time we finish processing a file, we emit an event containing detailed information like the number of records read, the number of records written, and the location where the file was published. The events can be used to get historical information on previous executions, as well as to detect and report failures.
+</p>
+
+<p>
+  Finally, the library would not be complete without options to actually export metrics and events to external sinks. Following Dropwizard Metrics' model, we use Reporters to write out metrics and events. A few sinks are implemented by default, which we already use heavily: Kafka, OutputStream, Graphite, and InfluxDB. However, any developer can easily implement their own sinks. There is already logic to publish metrics and events as Avro records. Combining this with Hive / Pig, or any other data query engine, allows users to easily generate reports about the execution of their application. In the future we plan to follow the model of Log4j, using configuration files rather than hard-coded reporters for metric reporting, which will let users quickly change their sinks, as well as precisely which metrics and events get reported, without touching code.
+</p>
+
+<p>
+  To learn more about Gobblin Metrics, check out the <a href="https://github.com/linkedin/gobblin/wiki/Gobblin%20Metrics">Wiki</a> and the <a href="https://github.com/linkedin/gobblin">Gobblin project</a> on GitHub.
+</p>
\ No newline at end of file
diff --git a/Gobblin-Schedulers.md b/Gobblin-Schedulers.md
new file mode 100644
index 0000000..becff84
--- /dev/null
+++ b/Gobblin-Schedulers.md
@@ -0,0 +1,62 @@
+# Introduction
+
+Gobblin jobs can be scheduled on a recurring basis using a few different tools. Gobblin ships with a built-in [Quartz Scheduler](https://quartz-scheduler.org/) and also integrates with a few other third-party tools.
+
+# Quartz
+
+Gobblin has a built-in Quartz scheduler as part of the [`JobScheduler`](https://github.com/linkedin/gobblin/blob/master/gobblin-scheduler/src/main/java/gobblin/scheduler/JobScheduler.java) class. This class integrates with the Gobblin [`SchedulerDaemon`](https://github.com/linkedin/gobblin/blob/master/gobblin-scheduler/src/main/java/gobblin/scheduler/SchedulerDaemon.java), which can be run using the Gobblin [`bin/gobblin-standalone.sh`](https://github.com/linkedin/gobblin/blob/master/bin/gobblin-standalone.sh) script.
+
+So, in order to take advantage of the Quartz scheduler, two steps need to be taken:
+* Use the `bin/gobblin-standalone.sh` script
+* Add the property `job.schedule` to the `.pull` file
+    * The value for this property should be a cron expression understood by Quartz's [CronTrigger](http://quartz-scheduler.org/api/2.2.0/org/quartz/CronTrigger.html); see the example below
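+
+For example, a hypothetical excerpt from a `.pull` file that runs the job every five minutes:
+
+```
+job.name=ExampleQuartzScheduledJob
+job.schedule=0 0/5 * * * ?
+```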
+
+# Azkaban
+
+Gobblin can be launched via [Azkaban](https://azkaban.github.io/), an open-source workflow manager for scheduling and launching Hadoop jobs. Gobblin's [`AzkabanJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-azkaban/src/main/java/gobblin/azkaban/AzkabanJobLauncher.java) can be used to launch a Gobblin job through Azkaban.
+
+One has to follow the typical setup to create a zip file that can be uploaded to Azkaban (it should include all dependent jars, which can be found in `gobblin-dist.tar.gz`). The `.job` file for the Azkaban Job should contain all configuration properties that would be put in a `.pull` file (for example, the [Wikipedia Example](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull) `.pull` file). All Gobblin system dependent properties (e.g. [`conf/gobblin-mapreduce.properties`](https://github.com/linkedin/gobblin/blob/master/conf/gobblin-mapreduce.properties) or [`conf/gobblin-standalone.properties`](https://github.com/linkedin/gobblin/blob/master/conf/gobblin-standalone.properties)) should also be in the zip file.
+
+In the Azkaban `.job` file, the `type` parameter should be set to `hadoopJava` (see [here](http://azkaban.github.io/azkaban/docs/latest/#hadoopjava-type) for more information about the `hadoopJava` Job Type). The `job.class` parameter should be set to `gobblin.azkaban.AzkabanJobLauncher`.
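+
+For example, a hypothetical minimal `.job` file might look like the following:
+
+```
+type=hadoopJava
+job.class=gobblin.azkaban.AzkabanJobLauncher
+
+# ...plus all of the configuration properties that would normally go in the job's .pull file,
+# e.g. the properties from the Wikipedia example's wikipedia.pull.
+```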
+
+# Oozie
+
+[Oozie](https://oozie.apache.org/) is a very popular scheduler for the Hadoop environment. It allows users to define complex workflows using XML files. A workflow can be composed of a series of actions, such as Java jobs, Pig jobs, Spark jobs, etc. Gobblin has two integration points with Oozie: it can be run as a stand-alone Java process via Oozie's `<java>` tag, or it can be run as a MapReduce job via Oozie.
+
+The following guides assume Oozie is already set up and running on some machine; if this is not the case, consult the Oozie documentation to get everything set up.
+
+This tutorial only outlines how to launch a basic Oozie job that runs Gobblin a single time. For information on how to build more complex flows, and how to run jobs on a schedule, check out the Oozie documentation online.
+
+### Launching Gobblin in Local Mode
+
+This guide focuses on getting Gobblin to run as a stand-alone Java process, which means it will not launch a separate MR job to distribute its workload. It is important to understand how the current version of Oozie launches a Java process: it first starts a MapReduce job and runs Gobblin as a Java process inside a single map task. The Gobblin job then ingests all the data it is configured to pull and shuts down.
+
+#### Example Config Files
+
+[`gobblin-oozie/src/main/resources/`](https://github.com/linkedin/gobblin/tree/master/gobblin-oozie/src/main/resources/) contains sample configuration files for launching Gobblin via Oozie. There are a number of important files in this directory:
+
+`gobblin-oozie-example-system.properties` contains default system level properties for Gobblin. When launched with Oozie, Gobblin will run inside a map task; it is thus recommended to configure Gobblin to write directly to HDFS rather than the local file system. The property `fs.uri` in this file should be changed to point to the NameNode of the Hadoop File System the job should write to. By default, all data is written under a folder called `gobblin-out`; to change this modify the `gobblin.work.dir` parameter in this file.
+
+`gobblin-oozie-example-workflow.properties` contains default Oozie properties for any job launched. It is also the entry point for launching an Oozie job (e.g. to launch an Oozie job from the command line, one executes `oozie job -config gobblin-oozie-example-workflow.properties -run`). In this file one needs to update `name.node` and `resource.manager` to the values specific to their environment. Another important property in this file is `oozie.wf.application.path`; it points to a folder on HDFS that contains any workflows to be run. It is important to note that the `workflow.xml` files must be on HDFS in order for Oozie to pick them up (this is because Oozie typically runs on a separate machine from any client process).
+
+`gobblin-oozie-example-workflow.xml` contains an example Oozie workflow. This example simply launches a Java process that invokes the main method of the [`CliLocalJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/local/CliLocalJobLauncher.java). The main method of this class expects two file paths to be passed to it (once again these files need to be on HDFS). The `jobconfig` arg should point to a file on HDFS containing all job configuration parameters. An example `jobconfig` file can be found [here](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull). The `sysconfig` arg should point to a file on HDFS containing all system configuration parameters. An example `sysconfig` file for Oozie can be found [here](https://github.com/linkedin/gobblin/blob/master/gobblin-oozie/src/main/resources/local/gobblin-oozie-example-system.properties).
+
+<!---Ying Do you think we can add some descriptions about launching through MR mode? The simplest way is to use the <shell> tag and invoke `gobblin-mapreduce.sh`. I've tested it before.-->
+
+#### Uploading Files to HDFS
+
+Oozie only reads a job properties file from the local file system (e.g. `gobblin-oozie-example-workflow.properties`); it expects all other configuration and dependent files to be uploaded to HDFS. Specifically, it looks for these files under the directory specified by `oozie.wf.application.path`. Make sure this is the case before trying to launch an Oozie job.
+
+##### Adding Gobblin `jar` Dependencies
+
+Gobblin has a number of `jar` dependencies that need to be available when launching a Gobblin job. These dependencies can be taken from the `gobblin-dist.tar.gz` file that is created after building Gobblin. The tarball should contain a `lib` folder with the necessary dependencies. This folder should be placed into a `lib` folder under the same directory specified by `oozie.wf.application.path` in the `gobblin-oozie-example-workflow.properties` file.
+
+#### Launching the Job
+
+Assuming one has the [Oozie CLI](https://oozie.apache.org/docs/3.1.3-incubating/DG_CommandLineTool.html) installed, the job can be launched using the following command: `oozie job -config gobblin-oozie-example-workflow.properties -run`.
+
+#### Debugging Tips
+
+Once the job has been launched, its status can be queried via the following command: `oozie job -info <oozie-job-id>`, and the logs can be shown via the following command: `oozie job -log <oozie-job-id>`.
+
+In order to see the standard output of Gobblin, one needs to check the logs of the map task running the Gobblin process. `oozie job -info <oozie-job-id>` should show the Hadoop `job_id` of the Hadoop job launched to run the Gobblin process. Using this id, one should be able to find the logs of the map tasks through the UI or other command-line tools (e.g. `yarn logs`).
\ No newline at end of file
diff --git a/Gobblin-on-Yarn.md b/Gobblin-on-Yarn.md
new file mode 100644
index 0000000..99d82d2
--- /dev/null
+++ b/Gobblin-on-Yarn.md
@@ -0,0 +1,321 @@
+Table of Contents
+--------------------
+* [1. Introduction](#introduction)
+* [2. Architecture](#architecture)
+  * [2.1. Overview](#overview)
+  * [2.2. The Role of Apache Helix](#the-role-of-apache-helix)
+  * [2.3. Gobblin Yarn Application Launcher](#gobblin-yarn-application-launcher)
+  * [2.4. Gobblin ApplicationMaster](#gobblin-applicationmaster)
+  * [2.5. Gobblin WorkUnitRunner](#gobblin-workunitrunner)
+  * [2.6. Failure Handling](#failure-handling)
+* [3. Log Aggregation](#log-aggregation)
+* [4. Security and Delegation Token Management](#security-and-delegation-token-management)
+* [5. Configuration](#configuration)
+  * [5.1. Configuration Properties](#configuration-properties)
+  * [5.2. Configuration System](#configuration-system)
+* [6. Deployment](#deployment)
+  * [6.1. Deployment on an Unsecured Yarn Cluster](#deployment-on-an-unsecured-yarn-cluster)
+  * [6.2. Deployment on a Secured Yarn Cluster](#deployment-on-a-secured-yarn-cluster)
+  * [6.3. Supporting Existing Gobblin Jobs](#supporting-existing-gobblin-jobs)
+* [7. Monitoring](#monitoring)
+
+## Introduction
+
+Gobblin is currently capable of running in the standalone mode on a single machine or in the MapReduce (MR) mode as an MR job on a Hadoop cluster. A Gobblin job typically runs on a schedule through a scheduler, e.g., the built-in `JobScheduler`, Azkaban, or Oozie, and each job run ingests new data or data updated since the last run. This is essentially a batch model for data ingestion, and how soon new data becomes available on Hadoop depends on the schedule of the job. 
+
+Additionally, for high-volume data sources such as Kafka, Gobblin typically runs in the MR mode with a considerable number of tasks running in the mappers of an MR job. This helps Gobblin scale out for data sources with large volumes of data. The MR mode, however, suffers from problems such as high overhead, mostly due to the cost of submitting and launching an MR job, and poor cluster resource usage. The MR mode is also fundamentally not appropriate for real-time data ingestion given its batch nature. These deficiencies are summarized in more detail below:
+
+* In the MR mode, every Gobblin job run starts a new MR job, which costs a considerable amount of time to allocate and start the containers for running the mapper/reducer tasks. This cost can be totally eliminated if the containers are already up and running.
+* Each Gobblin job running in the MR mode requests a new set of containers and releases them upon job completion. So it's impossible for two jobs to share the containers even though the containers are perfectly capable of running tasks of both jobs.
+* In the MR mode, all `WorkUnit`s are pre-assigned to the mappers before launching the MR job. The assignment is fixed by evenly distributing the `WorkUnit`s to the mappers so each mapper gets a fair share of the work in terms of the _number of `WorkUnit`s_. However, an evenly distributed number of `WorkUnit`s per mapper does not always guarantee a fair share of the work in terms of the volume of data to pull. This, combined with the fact that mappers that finish earlier cannot "steal" `WorkUnit`s assigned to other mappers, means the responsibility of load balancing is on the `Source` implementations, which is not trivial to do, and is virtually impossible in heterogeneous Hadoop clusters where different nodes have different capacity. This also means the duration of a job is determined by the slowest mapper.
+* An MR job can only hold its containers for a limited amount of time, beyond which the job may get killed. Real-time data ingestion, however, requires the ingestion tasks to be running all the time, or alternatively dividing a continuous data stream into well-defined mini-batches (as in Spark Streaming) that can be promptly executed once created. Both require long-running containers, which are not supported in the MR mode.
+
+Those deficiencies motivated the work on making Gobblin run on Yarn as a native Yarn application. Running Gobblin as a native Yarn application allows much more control over container provisioning and lifecycle management so it's possible to keep the containers running continuously. It also makes it possible to dynamically change the number of containers at runtime depending on the load to further improve the resource efficiency, something that's impossible in the MR mode.         
+
+This wiki page documents the design and architecture of the native Gobblin Yarn application and some implementation details. It also covers the configuration system and properties for the application, as well as deployment settings on both unsecured and secured Yarn clusters. 
+
+## Architecture
+
+### Overview
+
+The architecture of Gobblin on Yarn is illustrated in the following diagram. In addition to Yarn, Gobblin on Yarn also leverages [Apache Helix](http://helix.apache.org/), whose role is discussed in [The Role of Apache Helix](#the-role-of-apache-helix). A Gobblin Yarn application consists of three components: the Yarn Application Launcher, the Yarn ApplicationMaster (serving as the Helix _controller_), and the Yarn WorkUnitRunner (serving as the Helix _participant_). The following sections describe each component in detail.
+
+<p align="center">
+  <figure>
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-on-Yarn-with-Helix.png alt="Gobblin Image" width="800">
+  </figure>
+</p>
+
+### The Role of Apache Helix
+
+[Apache Helix](http://helix.apache.org/) is mainly used for managing the cluster of containers and running the `WorkUnit`s through its [Distributed Task Execution Framework](http://helix.apache.org/0.7.1-docs/recipes/task_dag_execution.html). 
+
+The assignment of tasks to available containers (or participants in Helix's terms) is handled by Helix through a finite state model named the `TaskStateModel`. Using this `TaskStateModel`, Helix is also able to do task rebalancing in case new containers get added or some existing containers die. Clients can also choose to force a task rebalancing if some tasks take much longer than others.
+
+Helix also supports messaging between different components of a cluster, e.g., between the controller and the participants, or between the client and the controller. The Gobblin Yarn application uses this messaging mechanism to implement graceful shutdown initiated by the client, as well as delegation token renewal notifications from the client to the ApplicationMaster and the WorkUnitRunner containers.
+
+Helix relies on ZooKeeper for its operations, particularly for maintaining the state of the cluster and the resources (tasks in this case). Both the Helix controller and the participants connect to ZooKeeper during their entire lifetime. The ApplicationMaster serves as the Helix controller and the worker containers serve as the Helix participants, as discussed in detail below.
+
+### Gobblin Yarn Application Launcher
+
+The Gobblin Yarn Application Launcher (implemented by the class [`GobblinYarnAppLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinYarnAppLauncher.java)) is the client/driver of a Gobblin Yarn application. The first thing the `GobblinYarnAppLauncher` does when it starts is register itself with Helix as a _spectator_ and create a new Helix cluster with the name specified through the configuration property `gobblin.yarn.helix.cluster.name`, if no cluster with that name exists.
+
+The `GobblinYarnAppLauncher` then sets up the Gobblin Yarn application and submits it to run on Yarn. Once the Yarn application successfully starts running, it starts an application state monitor that periodically checks the state of the Gobblin Yarn application. If the state is one of the exit states (`FINISHED`, `FAILED`, or `KILLED`), the `GobblinYarnAppLauncher` shuts itself down.
+
+Upon successfully submitting the application to run on Yarn, the `GobblinYarnAppLauncher` also starts a `ServiceManager` that manages the following services supporting the running of the application:
+
+#### `YarnAppSecurityManager`
+
+The [`YarnAppSecurityManager`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/YarnAppSecurityManager.java) works with the [`YarnContainerSecurityManager`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/YarnContainerSecurityManager.java) running in the ApplicationMaster and the WorkUnitRunner to provide a complete solution for security and delegation token management. The `YarnAppSecurityManager` is responsible for periodically logging in through a Kerberos keytab and getting the delegation token refreshed regularly after each login. Each time the delegation token is refreshed, the `YarnAppSecurityManager` writes the new token to a file on HDFS and sends a message to the ApplicationMaster and each WorkUnitRunner, notifying them of the refresh of the delegation token. Check out [`YarnContainerSecurityManager`](#yarncontainersecuritymanager) for how the other side of this system works.
+
+#### `LogCopier`
+
+The service [`LogCopier`](https://github.com/linkedin/gobblin/blob/master/gobblin-utility/src/main/java/gobblin/util/logs/LogCopier.java) in `GobblinYarnAppLauncher` streams the ApplicationMaster and WorkUnitRunner logs in near real-time from the central location on HDFS (to which the ApplicationMaster and WorkUnitRunner containers stream their logs) to the local directory specified through the configuration property `gobblin.yarn.logs.sink.root.dir` on the machine where the `GobblinYarnAppLauncher` runs. More details on this can be found in [Log Aggregation](#log-aggregation).
+
+### Gobblin ApplicationMaster
+
+The ApplicationMaster process runs the [`GobblinApplicationMaster`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinApplicationMaster.java), which uses a `ServiceManager` to manage the services supporting the operation of the ApplicationMaster process. The services running in `GobblinApplicationMaster` will be discussed later. When it starts, the first thing `GobblinApplicationMaster` does is to connect to ZooKeeper and register itself as a Helix _controller_. It then starts the `ServiceManager`, which in turn starts the services it manages, as described below. 
+
+#### `YarnService`
+
+The service [`YarnService`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/YarnService.java) handles all Yarn-related tasks, including the following:
+
+* Registering and un-registering the ApplicationMaster with the Yarn ResourceManager.
+* Requesting the initial set of containers from the Yarn ResourceManager.
+* Handling any container changes at runtime, e.g., adding more containers or shutting down containers no longer needed. This also includes stopping running containers when the application is asked to stop.
+
+This design makes it possible to switch to a different resource manager, e.g., Mesos, by replacing the service `YarnService` with something specific to that resource manager, e.g., a `MesosService`.
+
+#### `GobblinHelixJobScheduler`
+
+[`GobblinApplicationMaster`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinApplicationMaster.java) runs the [`GobblinHelixJobScheduler`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinHelixJobScheduler.java) that schedules jobs to run through the Helix [Distributed Task Execution Framework](http://helix.apache.org/0.7.1-docs/recipes/task_dag_execution.html). For each Gobblin job run, the `GobblinHelixJobScheduler` starts a [`GobblinHelixJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinHelixJobLauncher.java) that wraps the Gobblin job into a [`GobblinHelixJob`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinHelixJob.java) and each Gobblin `Task` into a [`GobblinHelixTask`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinHelixTask.java), which implements the Helix's `Task` interface so Helix knows how to execute it. The `GobblinHelixJobLauncher` then submits the job to a Helix job queue named after the Gobblin job name, from which the Helix Distributed Task Execution Framework picks up the job and runs its tasks through the live participants (available containers).
+
+Like the [`LocalJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/local/LocalJobLauncher.java) and [`MRJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/mapreduce/MRJobLauncher.java), the `GobblinHelixJobLauncher` handles output data commit and job state persistence.   
+
+#### `LogCopier`
+
+The service [`LogCopier`](https://github.com/linkedin/gobblin/blob/master/gobblin-utility/src/main/java/gobblin/util/logs/LogCopier.java) in `GobblinApplicationMaster` streams the ApplicationMaster logs in near real-time from the machine running the ApplicationMaster container to a central location on HDFS so the logs can be accessed at runtime. More details on this can be found in [Log Aggregation](#log-aggregation).
+
+#### `YarnContainerSecurityManager`
+
+The [`YarnContainerSecurityManager`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/YarnContainerSecurityManager.java) runs in both the ApplicationMaster and the WorkUnitRunner. When it starts, it registers a message handler with the `HelixManager` for handling messages on refreshes of the delegation token. Once such a message is received, the `YarnContainerSecurityManager` gets the path to the token file on HDFS from the message and updates the current login user with the new token read from the file.
+
+### Gobblin WorkUnitRunner
+
+The WorkUnitRunner process runs the [`GobblinWorkUnitRunner`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinWorkUnitRunner.java), which uses a `ServiceManager` to manage the services supporting the operation of the WorkUnitRunner process. The services running in `GobblinWorkUnitRunner` will be discussed later. When it starts, the first thing `GobblinWorkUnitRunner` does is to connect to ZooKeeper and register itself as a Helix _participant_. It then starts the `ServiceManager`, which in turn starts the services it manages, as discussed below. 
+
+#### `TaskExecutor`
+
+The [`TaskExecutor`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/TaskExecutor.java) remains the same as in the standalone and MR modes, and is purely responsible for running tasks assigned to a WorkUnitRunner. 
+
+#### `GobblinHelixTaskStateTracker`
+
+The [`GobblinHelixTaskStateTracker`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/GobblinHelixTaskStateTracker.java) has a similar responsibility as the `LocalTaskStateTracker` and `MRTaskStateTracker`: keeping track of the state of running tasks including operational metrics, e.g., total records pulled, records pulled per second, total bytes pulled, bytes pulled per second, etc.
+
+#### `LogCopier`
+
+The service [`LogCopier`](https://github.com/linkedin/gobblin/blob/master/gobblin-utility/src/main/java/gobblin/util/logs/LogCopier.java) in `GobblinWorkUnitRunner` streams the WorkUnitRunner logs in near real-time from the machine running the WorkUnitRunner container to a central location on HDFS so the logs can be accessed at runtime. More details on this can be found in [Log Aggregation](#log-aggregation).
+
+#### `YarnContainerSecurityManager`
+
+The [`YarnContainerSecurityManager`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/YarnContainerSecurityManager.java) in `GobblinWorkUnitRunner` works in the same way as it does in `GobblinApplicationMaster`.
+
+### Failure Handling
+
+#### ApplicationMaster Failure Handling
+
+Under normal operation, the Gobblin ApplicationMaster stays alive unless asked to stop through a message sent from the launcher (the `GobblinYarnAppLauncher`) as part of the orderly shutdown process. It may, however, fail or get killed by the Yarn ResourceManager for various reasons. For example, the container running the ApplicationMaster may fail and exit due to node failures, or get killed because it uses more memory than claimed. When a shutdown of the ApplicationMaster is triggered (e.g., when the shutdown hook is triggered) for any reason, it shuts down gracefully, i.e., it attempts to stop every service it manages, stop all the running containers, and unregister itself with the ResourceManager. Shutting down the ApplicationMaster shuts down the Yarn application, and the application launcher will eventually learn that the application has completed through its periodic check on the application status.
+
+#### Container Failure Handling
+
+Under normal operation, a Gobblin Yarn container stays alive unless released and stopped by the Gobblin ApplicationMaster, in which case the exit status of the container will be zero. However, a container may exit unexpectedly for various reasons. For example, a container may fail and exit due to node failures, or be killed because it uses more memory than claimed. When a container exits abnormally with a non-zero exit code, Gobblin Yarn tries to restart the Helix instance running in the container by requesting a new Yarn container as a replacement to run the instance. The maximum number of retries can be configured through the key `gobblin.yarn.helix.instance.max.retries`.
+
+When requesting a new container to replace the one that completes and exits abnormally, the application has a choice of specifying the same host that runs the completed container as the preferred host, depending on the boolean value of configuration key `gobblin.yarn.container.affinity.enabled`. Note that for certain exit codes that indicate something wrong with the host, the value of `gobblin.yarn.container.affinity.enabled` is ignored and no preferred host gets specified, leaving Yarn to figure out a good candidate node for the new container.     
+
+#### Handling Failures to get `ApplicationReport`
+
+As mentioned above, once the Gobblin Yarn application successfully starts running, the `GobblinYarnAppLauncher` starts an application state monitor that periodically checks the state of the Yarn application by getting an `ApplicationReport`. It may fail to do so and throw an exception, however, if the Yarn client is having some problem connecting and communicating with the Yarn cluster. For example, if the Yarn cluster is down for maintenance, the Yarn client will not be able to get an `ApplicationReport`. The `GobblinYarnAppLauncher` keeps track of the number of consecutive failures to get an `ApplicationReport` and initiates a shutdown if this number exceeds the threshold as specified through the configuration property `gobblin.yarn.max.get.app.report.failures`. The shutdown will trigger an email notification if the configuration property `gobblin.yarn.email.notification.on.shutdown` is set to `true`.
+
+## Log Aggregation
+
+Yarn provides both a Web UI and a command-line tool to access the logs of an application, and it also does log aggregation so the logs of all the containers become available on the client side upon request. However, there are a few limitations that make it hard to access the logs of an application at runtime:
+
+* The command-line utility for downloading the aggregated logs can only do so after the application finishes, making it useless for getting access to the logs at application runtime.
+* The Web UI does allow logs to be viewed at runtime, but only when the user accessing the UI is the same as the user that launched the application. On a Yarn cluster where security is enabled, the user launching the Gobblin Yarn application is typically a headless account.
+
+Because Gobblin runs on Yarn as a long-running native Yarn application, getting access to the logs at runtime is critical to know what's going on in the application and to detect any issues in the application as early as possible. Unfortunately we cannot use the log facility provided by Yarn here due to the above limitations. Alternatively, Gobblin on Yarn has its own mechanism for doing log aggregation and providing access to the logs at runtime, described as follows.
+
+Both the Gobblin ApplicationMaster and WorkUnitRunner run a `LogCopier` that periodically copies new entries of both `stdout` and `stderr` logs of the corresponding processes from the containers to a central location on HDFS under the directory `${gobblin.yarn.work.dir}/_applogs` in the subdirectories named after the container IDs, one per container. The names of the log files on HDFS combine the container IDs and the original log file names so it's easy to tell which container generates which log file. More specifically, the log files produced by the ApplicationMaster are named `<container id>.GobblinApplicationMaster.{stdout,stderr}`, and the log files produced by the WorkUnitRunner are named `<container id>.GobblinWorkUnitRunner.{stdout,stderr}`.
+
+The Gobblin YarnApplicationLauncher also runs a `LogCopier` that periodically copies new log entries from log files under `${gobblin.yarn.work.dir}/_applogs` on HDFS to the local filesystem under the directory configured by the property `gobblin.yarn.logs.sink.root.dir`. By default, the `LogCopier` checks for new log entries every 60 seconds and will keep reading new log entries until it reaches the end of the log file. This setup enables the Gobblin Yarn application to stream container process logs near real-time all the way to the client/driver. 
+
+## Security and Delegation Token Management
+
+On a Yarn cluster with security enabled (e.g., Kerberos authentication is required to access HDFS), security and delegation token management is necessary to allow Gobblin to run as a long-running Yarn application. Specifically, Gobblin running on a secured Yarn cluster needs to get its delegation token for accessing HDFS renewed periodically, which also requires periodic keytab re-logins because a delegation token can only be renewed a limited number of times per login.
+
+The Gobblin Yarn application supports Kerberos-based authentication and login through a keytab file. The `YarnAppSecurityManager` running in the Yarn Application Launcher and the `YarnContainerSecurityManager` running in the ApplicationMaster and WorkUnitRunner work together to get every Yarn container updated whenever the delegation token gets updated on the client side. More specifically, the `YarnAppSecurityManager` periodically logs in through the keytab and gets the delegation token refreshed regularly after each successful login. Every time the `YarnAppSecurityManager` refreshes the delegation token, it writes the new token to a file on HDFS and sends a `TOKEN_FILE_UPDATED` message to the ApplicationMaster and each WorkUnitRunner, notifying them of the refresh. Upon receiving such a message, the `YarnContainerSecurityManager` running in the ApplicationMaster or WorkUnitRunner gets the path to the token file on HDFS from the message and updates the current login user with the new token read from the file.
+
+Both the interval between two Kerberos keytab logins and the interval between two delegation token refreshes are configurable, through the configuration properties `gobblin.yarn.login.interval.minutes` and `gobblin.yarn.token.renew.interval.minutes`, respectively.    
+
+## Configuration
+
+### Configuration Properties
+
+In addition to the common Gobblin configuration properties, documented in the [`Configuration Properties Glossary`](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary), Gobblin on Yarn uses the following configuration properties.
+
+|Property|Default Value|Description|
+|-------------|-------------|-------------|
+|`gobblin.yarn.app.name`|`GobblinYarn`|The Gobblin Yarn application name.|
+|`gobblin.yarn.app.queue`|`default`|The Yarn queue the Gobblin Yarn application will run in.|
+|`gobblin.yarn.work.dir`|`/gobblin`|The working directory (typically on HDFS) for the Gobblin Yarn application.|
+|`gobblin.yarn.app.report.interval.minutes`|5|The interval in minutes between two Gobblin Yarn application status reports.|
+|`gobblin.yarn.max.get.app.report.failures`|4|Maximum allowed number of consecutive failures to get a Yarn `ApplicationReport`.|
+|`gobblin.yarn.email.notification.on.shutdown`|`false`|Whether email notification is enabled or not on shutdown of the `GobblinYarnAppLauncher`. If this is set to `true`, the following configuration properties also need to be set for email notification to work: `email.host`, `email.smtp.port`, `email.user`, `email.password`, `email.from`, and `email.tos`. Refer to [Email Alert Properties](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary#Email-Alert-Properties) for more information on those configuration properties.|
+|`gobblin.yarn.app.master.memory.mbs`|512|How much memory in MBs to request for the container running the Gobblin ApplicationMaster.|
+|`gobblin.yarn.app.master.cores`|1|The number of vcores to request for the container running the Gobblin ApplicationMaster.|
+|`gobblin.yarn.app.master.jars`||A comma-separated list of jars the Gobblin ApplicationMaster depends on but not in the `lib` directory.|
+|`gobblin.yarn.app.master.files.local`||A comma-separated list of files on the local filesystem the Gobblin ApplicationMaster depends on.|
+|`gobblin.yarn.app.master.files.remote`||A comma-separated list of files on a remote filesystem (typically HDFS) the Gobblin ApplicationMaster depends on.|
+|`gobblin.yarn.app.master.jvm.args`||Additional JVM arguments for the JVM process running the Gobblin ApplicationMaster, e.g., `-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Dconfig.trace=loads`.|
+|`gobblin.yarn.initial.containers`|1|The number of containers to request initially when the application starts to run the WorkUnitRunner.|
+|`gobblin.yarn.container.memory.mbs`|512|How much memory in MBs to request for the container running the Gobblin WorkUnitRunner.|
+|`gobblin.yarn.container.cores`|1|The number of vcores to request for the container running the Gobblin WorkUnitRunner.|
+|`gobblin.yarn.container.jars`||A comma-separated list of jars the Gobblin WorkUnitRunner depends on but not in the `lib` directory.|
+|`gobblin.yarn.container.files.local`||A comma-separated list of files on the local filesystem the Gobblin WorkUnitRunner depends on.|
+|`gobblin.yarn.container.files.remote`||A comma-separated list of files on a remote filesystem (typically HDFS) the Gobblin WorkUnitRunner depends on.|
+|`gobblin.yarn.container.jvm.args`||Additional JVM arguments for the JVM process running the Gobblin WorkUnitRunner, e.g., `-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Dconfig.trace=loads`.|
+|`gobblin.yarn.container.affinity.enabled`|`true`|Whether the same host should be used as the preferred host when requesting a replacement container for the one that exits.|
+|`gobblin.yarn.helix.cluster.name`|`GobblinYarn`|The name of the Helix cluster that will be registered with ZooKeeper.|
+|`gobblin.yarn.zk.connection.string`|`localhost:2181`|The ZooKeeper connection string used by Helix.|
+|`gobblin.yarn.helix.instance.max.retries`|2|Maximum number of times the application tries to restart a failed Helix instance (corresponding to a Yarn container).|
+|`gobblin.yarn.lib.jars.dir`||The directory where library jars are stored, typically `gobblin-dist/lib`.|
+|`gobblin.yarn.job.conf.path`||The path to either a directory where Gobblin job configuration files are stored or a single job configuration file. Internally Gobblin Yarn will package the configuration files as a tarball so you don't need to.|
+|`gobblin.yarn.logs.sink.root.dir`||The directory on local filesystem on the driver/client side where the aggregated container logs of both the ApplicationMaster and WorkUnitRunner are stored.|
+|`gobblin.yarn.keytab.file.path`||The path to the Kerberos keytab file used for keytab-based authentication/login.|
+|`gobblin.yarn.keytab.principal.name`||The principal name of the keytab.|
+|`gobblin.yarn.login.interval.minutes`|1440|The interval in minutes between two keytab logins.|
+|`gobblin.yarn.token.renew.interval.minutes`|720|The interval in minutes between two delegation token renews.|
+
+### Configuration System
+
+The Gobblin Yarn application uses the [Typesafe Config](https://github.com/typesafehub/config) library to handle the application configuration. Following [Typesafe Config](https://github.com/typesafehub/config)'s model, the Gobblin Yarn application uses a single file named `application.conf` for all configuration properties and another file named `reference.conf` for default values. A sample `application.conf` is shown below: 
+```
+# Yarn/Helix configuration properties
+gobblin.yarn.helix.cluster.name=GobblinYarnTest
+gobblin.yarn.app.name=GobblinYarnTest
+gobblin.yarn.lib.jars.dir="/home/gobblin/gobblin-dist/lib/"
+gobblin.yarn.app.master.files.local="/home/gobblin/gobblin-dist/conf/log4j-yarn.properties,/home/gobblin/gobblin-dist/conf/application.conf,/home/gobblin/gobblin-dist/conf/reference.conf"
+gobblin.yarn.container.files.local=${gobblin.yarn.app.master.files.local}
+gobblin.yarn.job.conf.path="/home/gobblin/gobblin-dist/job-conf"
+gobblin.yarn.keytab.file.path="/home/gobblin/gobblin.headless.keytab"
+gobblin.yarn.keytab.principal.name=gobblin
+gobblin.yarn.app.master.jvm.args="-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m"
+gobblin.yarn.container.jvm.args="-XX:ReservedCodeCacheSize=100M -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m"
+gobblin.yarn.logs.sink.root.dir=/home/gobblin/gobblin-dist/applogs
+
+# File system URIs
+writer.fs.uri=${fs.uri}
+state.store.fs.uri=${fs.uri}
+
+# Writer related configuration properties
+writer.destination.type=HDFS
+writer.output.format=AVRO
+writer.staging.dir=${gobblin.yarn.work.dir}/task-staging
+writer.output.dir=${gobblin.yarn.work.dir}/task-output
+
+# Data publisher related configuration properties
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+data.publisher.final.dir=${gobblin.yarn.work.dir}/job-output
+data.publisher.replace.final.dir=false
+
+# Directory where job/task state files are stored
+state.store.dir=${gobblin.yarn.work.dir}/state-store
+
+# Directory where error files from the quality checkers are stored
+qualitychecker.row.err.file=${gobblin.yarn.work.dir}/err
+
+# Disable job locking for now
+job.lock.enabled=false
+
+# Directory where job locks are stored
+job.lock.dir=${gobblin.yarn.work.dir}/locks
+
+# Directory where metrics log files are stored
+metrics.log.dir=${gobblin.yarn.work.dir}/metrics
+```
+
+A sample `reference.conf` is shown below:
+
+```
+# Yarn/Helix configuration properties
+gobblin.yarn.app.queue=default
+gobblin.yarn.helix.cluster.name=GobblinYarn
+gobblin.yarn.app.name=GobblinYarn
+gobblin.yarn.app.master.memory.mbs=512
+gobblin.yarn.app.master.cores=1
+gobblin.yarn.app.report.interval.minutes=5
+gobblin.yarn.max.get.app.report.failures=4
+gobblin.yarn.email.notification.on.shutdown=false
+gobblin.yarn.initial.containers=1
+gobblin.yarn.container.memory.mbs=512
+gobblin.yarn.container.cores=1
+gobblin.yarn.container.affinity.enabled=true
+gobblin.yarn.helix.instance.max.retries=2
+gobblin.yarn.keytab.login.interval.minutes=1440
+gobblin.yarn.token.renew.interval.minutes=720
+gobblin.yarn.work.dir=/user/gobblin/gobblin-yarn
+gobblin.yarn.zk.connection.string="localhost:2181"
+
+fs.uri="hdfs://localhost:9000"
+```
+## Deployment
+
+A standard deployment of Gobblin on Yarn requires a Yarn cluster running Hadoop 2.x (`2.3.0` and above recommended) and a ZooKeeper cluster. Make sure the client machine (typically the gateway of the Yarn cluster) is able to access the ZooKeeper instance. 
+
+### Deployment on an Unsecured Yarn Cluster
+
+To deploy the Gobblin Yarn application, first build Gobblin using the following command from the root directory of the Gobblin project. Gobblin on Yarn requires Hadoop 2.x, so make sure `-PuseHadoop2` is used.
+
+```
+./gradlew clean build -PuseHadoop2
+```
+
+To build Gobblin against a specific version of Hadoop 2.x, e.g., `2.7.0`, run the following command instead:
+
+```
+./gradlew clean build -PuseHadoop2 -PhadoopVersion=2.7.0
+```
+ 
+After Gobblin is successfully built, a tarball named `gobblin-dist-[project-version].tar.gz` should have been created under the root directory of the project. To deploy the Gobblin Yarn application on an unsecured Yarn cluster, uncompress the tarball somewhere and run the following commands:
+
+```
+cd gobblin-dist
+bin/gobblin-yarn.sh
+```
+
+Note that for the above commands to work, the Hadoop/Yarn configuration directory must be on the classpath, and its configuration must point to the right Yarn cluster, specifically the right ResourceManager and NameNode URLs. This is defined as follows in `gobblin-yarn.sh`:
+
+```
+CLASSPATH=${FWDIR_CONF}:${GOBBLIN_JARS}:${YARN_CONF_DIR}:${HADOOP_YARN_HOME}/lib
+```
+
+### Deployment on a Secured Yarn Cluster
+
+When deploying the Gobblin Yarn application on a secured Yarn cluster, make sure the keytab file path is correctly specified in `application.conf` and the correct principal for the keytab is used, as shown below. The rest of the deployment is the same as on an unsecured Yarn cluster.
+
+```
+gobblin.yarn.keytab.file.path="/home/gobblin/gobblin.headless.keytab"
+gobblin.yarn.keytab.principal.name=gobblin
+```
+
+### Supporting Existing Gobblin Jobs
+
+Gobblin on Yarn is backward compatible and supports existing Gobblin jobs written for the standalone and MR modes. To run existing Gobblin jobs, simply put the job configuration files into a directory on the local file system of the driver and set the configuration property `gobblin.yarn.job.conf.path` to point to that directory, as in the sketch below. When the Gobblin Yarn application starts, it packages the configuration files as a tarball and makes sure the tarball is copied to the ApplicationMaster and properly uncompressed. The `GobblinHelixJobScheduler` then loads the job configuration files and schedules the jobs to run.
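+
+For example, reusing the illustrative path from the sample `application.conf` above:
+
+```
+# Directory on the driver's local filesystem holding the existing job configuration files
+gobblin.yarn.job.conf.path="/home/gobblin/gobblin-dist/job-conf"
+```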
+
+## Monitoring
+
+Gobblin Yarn uses the [Gobblin Metrics](https://github.com/linkedin/gobblin/wiki/Gobblin%20Metrics) library for collecting and reporting metrics at the container, job, and task levels. Each `GobblinWorkUnitRunner` maintains a [`ContainerMetrics`](https://github.com/linkedin/gobblin/blob/master/gobblin-yarn/src/main/java/gobblin/yarn/ContainerMetrics.java) that is the parent of the [`JobMetrics`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/util/JobMetrics.java) of each job run the container is involved in, which in turn is the parent of the [`TaskMetrics`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/util/TaskMetrics.java) of each task in that job run. This hierarchical structure allows pre-aggregation in the containers before the metrics are reported to the backend.
+
+Collected metrics can be reported to various sinks such as Kafka, files, and JMX, depending on the configuration. Specifically, `metrics.enabled` controls whether metrics collection and reporting are enabled at all, while `metrics.reporting.kafka.enabled`, `metrics.reporting.file.enabled`, and `metrics.reporting.jmx.enabled` control whether collected metrics are reported to Kafka, files, and JMX, respectively. Please refer to [Metrics Properties](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary#Metrics-Properties) for the available configuration properties related to metrics collection and reporting.
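+
+As a minimal illustration (the broker address and topic name are placeholders), enabling the file and Kafka sinks might look like:
+
+```
+metrics.enabled=true
+metrics.reporting.file.enabled=true
+metrics.log.dir=${gobblin.yarn.work.dir}/metrics
+metrics.reporting.kafka.enabled=true
+metrics.reporting.kafka.brokers=localhost:9092
+metrics.reporting.kafka.topic.metrics=GobblinMetrics
+```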
+
+In addition to metrics collection and reporting, Gobblin Yarn also supports writing job execution information to a MySQL-backed job execution history store. Please refer to the [DDL](https://github.com/linkedin/gobblin/blob/master/gobblin-metastore/src/main/resources/gobblin_job_history_store.sql) for the relevant MySQL tables. Detailed information on the job execution history store, including how to configure it, can be found [here](https://github.com/linkedin/gobblin/wiki/Job%20Execution%20History%20Store).
\ No newline at end of file
diff --git a/Home.md b/Home.md
new file mode 100644
index 0000000..d8e4207
--- /dev/null
+++ b/Home.md
@@ -0,0 +1,10 @@
+<p align="center"><img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-black.png alt="Gobblin Image" height="200"></p>
+
+Over the years, LinkedIn's data infrastructure team built custom solutions for ingesting diverse data entities into our Hadoop ecosystem. At one point, we were running 15 types of ingestion pipelines, which created significant data quality, metadata management, development, and operation challenges.
+
+Our experiences and challenges motivated us to build _Gobblin_. Gobblin is a universal data ingestion framework for extracting, transforming, and loading large volumes of data from a variety of data sources, e.g., databases, REST APIs, FTP/SFTP servers, filers, etc., onto Hadoop. Gobblin handles the common routine tasks required for all data ingestion ETLs, including job/task scheduling, task partitioning, error handling, state management, data quality checking, data publishing, etc. Gobblin ingests data from different data sources in the same execution framework, and manages metadata of different sources all in one place. This, combined with other features such as auto scalability, fault tolerance, data quality assurance, extensibility, and the ability to handle data model evolution, makes Gobblin an easy-to-use, self-serving, and efficient data ingestion framework.
+
+You can find a lot of useful resources in our wiki pages, including [how to get started](https://github.com/linkedin/gobblin/wiki/Getting%20Started), [architecture overview](https://github.com/linkedin/gobblin/wiki/Gobblin-Architecture),
+[user guide](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment), [developer guide](https://github.com/linkedin/gobblin/wiki/Customization%20for%20New%20Source), and [project related information](https://github.com/linkedin/gobblin/wiki/Feature%20List). We also provide a discussion group: [Google Gobblin-Users Group](https://groups.google.com/forum/#!forum/gobblin-users). Please feel free to post any questions or comments.
+
+For a detailed overview, please take a look at the [VLDB 2015 paper](http://www.vldb.org/pvldb/vol8/p1764-qiao.pdf).
diff --git a/IDE-setup.md b/IDE-setup.md
new file mode 100644
index 0000000..c9eede3
--- /dev/null
+++ b/IDE-setup.md
@@ -0,0 +1,30 @@
+Table of Contents
+-----------------------------------------------
+- [Introduction](#introduction)
+- [IntelliJ Integration](#intellij-integration)
+- [Eclipse Integration](#eclipse-integration)
+- [Lombok](#lombok)
+
+# Introduction
+This document is for users who want to import the Gobblin code base into an [IDE](https://en.wikipedia.org/wiki/Integrated_development_environment) and directly modify that code base. It is not for users who just want to set up Gobblin as a Maven dependency.
+
+# IntelliJ Integration
+Gobblin uses standard build tools to import code into an IntelliJ project. Execute the following command to build the necessary `*.iml` files:
+```
+./gradlew clean idea
+```
+Once the command finishes, use standard practices to import the project into IntelliJ.
+
+Make sure to include `-PuseHadoop2` in the above Gradle command if you want to work in the `gobblin-yarn` directory; see the example below.
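+
+For example:
+
+```
+./gradlew clean idea -PuseHadoop2
+```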
+
+# Eclipse Integration
+Gobblin uses standard build tools to import code into an Eclipse project. Execute the following command to build the necessary `*.classpath` and `*.project` files:
+```
+./gradlew clean eclipse
+```
+Once the command finishes, use standard practices to import the project into Eclipse.
+
+Make sure to include `-PuseHadoop2` in the above Gradle command if you want to work in the `gobblin-yarn` directory.
+
+# Lombok
+Gobblin uses [Lombok](https://projectlombok.org/) to reduce boilerplate code. Lombok generates the boilerplate automatically at compile time when Gobblin is built from the command line. If you are using an IDE, you will see compile errors in some of the classes that use Lombok unless the Lombok plugin is installed. Please follow the [IDE setup instructions](https://projectlombok.org/download.html) for your IDE to set up Lombok.
\ No newline at end of file
diff --git a/Implementing-New-Reporters.md b/Implementing-New-Reporters.md
new file mode 100644
index 0000000..7a445d0
--- /dev/null
+++ b/Implementing-New-Reporters.md
@@ -0,0 +1,99 @@
+The two best entry points for implementing custom reporters are [RecursiveScheduledMetricReporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/RecursiveScheduledMetricReporter.java) and [EventReporter](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/reporter/EventReporter.java). Each of these classes automatically schedules reporting, extracts the correct metrics, and calls a single method that must be implemented by the developer. These classes also provide builder patterns that can be extended by the developer.
+
+In the interest of giving more control to users, metric and event reporters are kept separate, allowing users to more easily specify separate sinks for events and metrics. However, it is possible to implement a single reporter that handles both events and metrics.
+
+> It is recommended that each reporter has a constructor with signature `<init>(Properties)`. In the near future we are planning to implement auto-starting, file-configurable reporting similar to the Log4j architecture, and compliant reporters will be required to have such a constructor.
+
+Extending Builders
+==================
+
+The builder patterns implemented in the base reporters are designed to be extendable. The architecture is a bit complicated, but a subclass of the base reporters wanting to use builder patterns should follow the pattern below (replacing `EventReporter` with `RecursiveScheduledMetricReporter` in the case of a metric reporter):
+
+```java
+class MyReporter extends EventReporter {
+
+  private MyReporter(Builder<?> builder) throws IOException {
+    super(builder);
+    // Other initialization logic.
+  }
+
+  // Concrete implementation of extendable Builder.
+  public static class BuilderImpl extends Builder<BuilderImpl> {
+    private BuilderImpl(MetricContext context) {
+      super(context);
+    }
+
+    @Override
+    protected BuilderImpl self() {
+      return this;
+    }
+  }
+
+  public static class Factory {
+    /**
+     * Returns a new {@link MyReporter.Builder} for {@link MyReporter}.
+     * Will automatically add all Context tags to the reporter.
+     *
+     * @param context the {@link gobblin.metrics.MetricContext} to report
+     * @return MyReporter builder
+     */
+    public static BuilderImpl forContext(MetricContext context) {
+      return new BuilderImpl(context);
+    }
+  }
+
+  /**
+   * Builder for {@link MyReporter}.
+   */
+  public static abstract class Builder<T extends EventReporter.Builder<T>>
+      extends EventReporter.Builder<T> {
+
+    // Additional instance variables needed to construct MyReporter.
+    private int myBuilderVariable;
+
+    protected Builder(MetricContext context) {
+      super(context);
+      this.myBuilderVariable = 0;
+    }
+
+    /**
+     * Set myBuilderVariable.
+     */
+    public T withMyBuilderVariable(int value) {
+      this.myBuilderVariable = value;
+      return self();
+    }
+
+    // Other setters for Builder variables.
+
+    /**
+     * Builds and returns {@link MyReporter}.
+     */
+    public MyReporter build() throws IOException {
+      return new MyReporter(this);
+    }
+
+  }
+}
+```
+
+This pattern allows users to simply call
+```java
+MyReporter reporter = MyReporter.Factory.forContext(context).build();
+```
+to generate an instance of the reporter. Additionally, if you want to further extend MyReporter, following the exact same pattern except extending MyReporter instead of EventReporter will work correctly (which would not be true with the standard Builder pattern).
+
+Metric Reporting
+================
+
+Developers should extend `RecursiveScheduledMetricReporter` and implement the method `RecursiveScheduledMetricReporter#report`. The base class will call report when appropriate with the list of metrics, separated by type, and tags that should be reported.
+
+Event Reporting
+===============
+
+Developers should extend `EventReporter` and implement the method `EventReporter#reportEventQueue(Queue<GobblinTrackingEvent>)`. The base class will call this method with a queue of all events to report as needed.
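+
+As a rough sketch, continuing the hypothetical `MyReporter` above, an implementation might simply drain the queue and hand each event to its sink. The `getName()`/`getTimestamp()` accessors on `GobblinTrackingEvent` and the use of standard output as the sink are assumptions made only for illustration:
+
+```java
+@Override
+public void reportEventQueue(Queue<GobblinTrackingEvent> queue) {
+  GobblinTrackingEvent event;
+  // Drain the queue; each event carries a name, a timestamp, and a metadata map.
+  while ((event = queue.poll()) != null) {
+    System.out.println(event.getName() + " @ " + event.getTimestamp());
+  }
+}
+```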
+
+Other Reporters
+===============
+
+It is also possible to implement a reporter without using the suggested classes. Reporters are recommended, but not required, to extend the interface `Reporter`. Reporters can use the public methods of `MetricContext` to navigate the Metric Context tree, query metrics, and register for notifications.
\ No newline at end of file
diff --git a/Job-Execution-History-Store.md b/Job-Execution-History-Store.md
new file mode 100644
index 0000000..0f11bf4
--- /dev/null
+++ b/Job-Execution-History-Store.md
@@ -0,0 +1,177 @@
+Table of Contents
+--------------------
+* [Overview](#overview)
+* [Information Recorded](#information-recorded)
+ * [Job Execution Information](#job-execution-information)
+ * [Task Execution Information](#task-execution-information)
+* [Default Implementation](#default-implementation)
+* [Rest Query API](#rest-query-api)
+* [Job Execution History Server](#job-execution-history-server)
+
+Overview
+--------------------
+Gobblin provides users a way of keeping track of their job executions through the Job Execution History Store, which can be queried either directly (if the implementation supports direct queries) or through a Rest API. Note that using the Rest API requires the Job Execution History Server to be up and running; the server is discussed later. By default, writing to the Job Execution History Store is disabled. To enable it, set the configuration property `job.history.store.enabled` to `true`, as shown below.
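+
+For example:
+
+```properties
+job.history.store.enabled=true
+```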
+
+Information Recorded
+--------------------------------
+The Job Execution History Store stores various pieces of information of a job execution, including both job-level and task-level stats and measurements that are summarized below.
+
+Job Execution Information
+-------------------------------------------------
+The following table summarizes job-level execution information the Job Execution History Store stores. 
+
+|Information| Description|
+|---------------------------------|----------------------|
+|Job name|Gobblin job name.|
+|Job ID|Gobblin job ID.|
+|Start time|Start time of the job in epoch time (milliseconds), in the local time zone.|
+|End time|End time of the job in epoch time (milliseconds), in the local time zone.|
+|Duration|Duration of the job in milliseconds.|
+|Job state|Running state of the job. Possible values are `PENDING`, `RUNNING`, `SUCCESSFUL`, `COMMITTED`, `FAILED`, `CANCELLED`.|
+|Launched tasks|Number of launched tasks of the job.|
+|Completed tasks|Number of tasks of the job that completed.|
+|Launcher type|The type of the launcher used to launch and run the task.|
+|Job tracking URL|This will be set to the MapReduce job URL if the Gobblin job is running on Hadoop MapReduce. This may also be set to the Azkaban job execution tracking URL if the job is running through Azkaban but not on Hadoop MapReduce. Otherwise, this will be empty.|
+|Job-level metrics|Values of job-level metrics. Note that this data is not time-series based so the values will be overwritten on every update.|
+|Job configuration properties|Job configuration properties used at runtime for job execution. Note that it may include changes made at runtime by the job.|
+
+Task Execution Information
+-------------------------------------------------
+The following table summarizes task-level execution information the Job Execution History Store stores. 
+
+|Information| Description|
+|---------------------------------|----------------------|
+|Task ID|Gobblin task ID.|
+|Job ID|Gobblin job ID.|
+|Start time|Start time of the task in epoch time (milliseconds), in the local time zone.|
+|End time|End time of the task in epoch time (milliseconds), in the local time zone.|
+|Duration|Duration of the task in milliseconds.|
+|Task state|Running state of the task. Possible values are `PENDING`, `RUNNING`, `SUCCESSFUL`, `COMMITTED`, `FAILED`, `CANCELLED`.|
+|Task failure exception|Exception message in case of task failure.|
+|Low watermark|The low watermark of the task if available.|
+|High watermark|The high watermark of the task if available.|
+|Extract namespace|The namespace of the `Extract`. An `Extract` is a concept describing the ingestion work of a job. This stores the value specified through the configuration property `extract.namespace`.|
+|Extract name|The name of the `Extract`. This stores the value specified through the configuration property `extract.table.name`.|
+|Extract type|The type of the `Extract`. This stores the value specified through the configuration property `extract.table.type`.|
+|Task-level metrics|Values of task-level metrics. Note that this data is not time-series based so the values will be overwritten on every update.|
+|Task configuration properties|Task configuration properties used at runtime for task execution. Note that it may include changes made at runtime by the task.|
+
+
+Default Implementation
+--------------------------------
+The default implementation of the Job Execution History Store stores job execution information into a MySQL database in a few different tables. Specifically, the following tables are used and should be created before writing to the store is enabled. Check out the MySQL [DDLs](https://github.com/linkedin/gobblin/blob/master/gobblin-metastore/src/main/resources/gobblin_job_history_store.sql) of the tables for the detailed columns of each table.
+
+* Table `gobblin_job_executions` stores basic information about a job execution including the start and end times, job running state, number of launched and completed tasks, etc. 
+* Table `gobblin_task_executions` stores basic information on task executions of a job, including the start and end times, task running state, task failure message if any, etc., of each task.
+* Table `gobblin_job_metrics` stores values of job-level metrics collected through the `JobMetrics` class. Note that this data is not time-series based and values of metrics are overwritten on every update to the job execution information. 
+* Table `gobblin_task_metrics` stores values of task-level metrics collected through the `TaskMetrics` class. Again, this data is not time-series based and values of metrics are overwritten on updates.
+* Table `gobblin_job_properties` stores the job configuration properties used at runtime for the job execution, which may include changes made at runtime by the job.
+* Table `gobblin_task_properties` stores the task configuration properties used at runtime for task executions, which also may include changes made at runtime by the tasks.
+
+To enable writing to the MySQL-backed Job Execution History Store, the following configuration properties (with sample values) need to be set:
+
+```properties
+job.history.store.url=jdbc:mysql://localhost/gobblin
+job.history.store.jdbc.driver=com.mysql.jdbc.Driver
+job.history.store.user=gobblin
+job.history.store.password=gobblin
+``` 
+
+
+Rest Query API
+--------------------------------
+
+The Job Execution History Store Rest API supports three types of queries: query by job name, query by job ID, or query by extract name. The query type can be specified using the field `idType` in the query JSON object and can have one of the values `JOB_NAME`, `JOB_ID`, or `TABLE`. All three query types require the field `id` in the query JSON object, which should have a proper value as documented in the following table.
+
+|Query type|Query ID|
+|---------------------------------|----------------------|
+|JOB_NAME|Gobblin job name.|
+|JOB_ID|Gobblin job ID.|
+|TABLE|A json object following the `TABLE` schema shown below.|
+
+```json
+{
+    "type": "record",
+    "name": "Table",
+    "namespace": "gobblin.rest",
+    "doc": "Gobblin table definition",
+    "fields": [
+      {
+          "name": "namespace",
+          "type": "string",
+          "optional": true,
+          "doc": "Table namespace"
+      },
+      {
+          "name": "name",
+          "type": "string",
+          "doc": "Table name"
+      },
+      {
+          "name": "type",
+          "type": {
+              "name": "TableTypeEnum",
+              "type": "enum",
+              "symbols" : [ "SNAPSHOT_ONLY", "SNAPSHOT_APPEND", "APPEND_ONLY" ]
+          },
+          "optional": true,
+          "doc": "Table type"
+      }
+    ]
+}
+```
+
+For each query type, there are also some optional fields that can be used to control the number of records returned and what should be included in the query result. The optional fields are summarized in the following table.
+
+|Optional field|Type|Description|
+|---------------------------------|----------------------|----------------------|
+|`limit`|`int`|Limit on the number of records returned.|
+|`timeRange`|`TimeRange`|The query time range. The schema of `TimeRange` is shown below.|
+|`jobProperties`|`boolean`|This controls whether the returned record should include the job configuration properties.|
+|`taskProperties`|`boolean`|This controls whether the returned record should include the task configuration properties.|
+
+```json
+{
+    "type": "record",
+    "name": "TimeRange",
+    "namespace": "gobblin.rest",
+    "doc": "Query time range",
+    "fields": [
+      {
+          "name": "startTime",
+          "type": "string",
+          "optional": true,
+          "doc": "Start time of the query range"
+      },
+      {
+          "name": "endTime",
+          "type": "string",
+          "optional": true,
+          "doc": "End time of the query range"
+      },
+      {
+          "name": "timeFormat",
+          "type": "string",
+          "doc": "Date/time format used to parse the start time and end time"
+      }
+    ]
+}
+```
+
+The API is built with [rest.li](http://www.rest.li); documentation is generated at compile time and can be found at `http://<hostname:port>/restli/docs`.
+
+### Example Queries
+*Fetch the 10 most recent job executions with a job name `TestJobName`*
+```bash
+curl "http://<hostname:port>/jobExecutions/idType=JOB_NAME&id.string=TestJobName&limit=10"
+```
+
+Job Execution History Server
+--------------------------------
+The Job Execution History Server is a Rest server for serving queries on the Job Execution History Store through the Rest API described above. The Rest endpoint URL is configurable through the following configuration properties (with their default values):
+```properties
+rest.server.host=localhost
+rest.server.port=8080
+```
+
+**Note:** This server is started in the standalone deployment if configuration property `job.execinfo.server.enabled` is set to `true`.
\ No newline at end of file
diff --git a/Kafka-HDFS-Ingestion.md b/Kafka-HDFS-Ingestion.md
new file mode 100644
index 0000000..50a9977
--- /dev/null
+++ b/Kafka-HDFS-Ingestion.md
@@ -0,0 +1,237 @@
+Table of Contents
+--------------------
+* [Getting Started](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#getting-started)
+ * [Standalone](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#standalone)
+ * [MapReduce](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#mapreduce)
+* [Setting up Kafka-HDFS Ingestion Jobs](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#setting-up-kafka-hdfs-ingestion-jobs)
+ * [Job Constructs](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#job-constructs)
+ * [Job Config Properties](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#job-config-properties)
+ * [Metrics And Events](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#metrics-and-events)
+ * [Merging and Grouping Workunits in `KafkaSource`](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#merging-and-grouping-workunits-in-kafkasource)
+    * [Single-Level Packing](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#single-level-packing)
+    * [Bi-Level Packing](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#bi-level-packing)
+    * [Average Record Size-Based Workunit Size Estimator](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#average-record-size-based-workunit-size-estimator)
+    * [Average Record Time-Based Workunit Size Estimator](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#average-record-time-based-workunit-size-estimator)
+
+# Getting Started
+
+This section helps you set up a quick-start job for ingesting Kafka topics on a single machine. We provide quick start examples in both standalone and MapReduce mode.
+
+## Standalone
+
+* Set up a single-node Kafka broker by following the [Kafka quick start guide](http://kafka.apache.org/documentation.html#quickstart). Suppose your broker URI is `localhost:9092`, and you've created a topic "test" with two events "This is a message" and "This is another message".
+
+* The remaining steps are the same as the [Wikipedia example](https://github.com/linkedin/gobblin/wiki/Getting%20Started), except using the following job config properties:
+
+```
+job.name=GobblinKafkaQuickStart
+job.group=GobblinKafka
+job.description=Gobblin quick start job for Kafka
+job.lock.enabled=false
+
+kafka.brokers=localhost:9092
+
+source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
+extract.namespace=gobblin.extract.kafka
+
+writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
+writer.file.path.type=tablename
+writer.destination.type=HDFS
+writer.output.format=txt
+
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+
+mr.job.max.mappers=1
+
+metrics.reporting.file.enabled=true
+metrics.log.dir=${env:GOBBLIN_WORK_DIR}/metrics
+metrics.reporting.file.suffix=txt
+
+bootstrap.with.offset=earliest
+```
+
+After the job finishes, the following messages should be in the job log:
+
+```
+INFO Pulling topic test
+INFO Pulling partition test:0 from offset 0 to 2, range=2
+INFO Finished pulling partition test:0
+INFO Finished pulling topic test
+INFO Extracted 2 data records
+INFO Actual high watermark for partition test:0=2, expected=2
+INFO Task <task_id> completed in 31212ms with state SUCCESSFUL
+```
+
+The output file will be in `GOBBLIN_WORK_DIR/job-output/test`, with the two messages you've just created in the Kafka broker. `GOBBLIN_WORK_DIR/metrics` will contain metrics collected from this run.
+
+## MapReduce
+
+* Set up a single-node Kafka broker, the same as in standalone mode.
+* Set up a single-node Hadoop cluster by following the steps in [Hadoop: Setting up a Single Node Cluster](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html). Suppose your HDFS URI is `hdfs://localhost:9000`.
+* Create a job config file with the following properties:
+
+```
+job.name=GobblinKafkaQuickStart
+job.group=GobblinKafka
+job.description=Gobblin quick start job for Kafka
+job.lock.enabled=false
+
+kafka.brokers=localhost:9092
+
+source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
+extract.namespace=gobblin.extract.kafka
+
+writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
+writer.file.path.type=tablename
+writer.destination.type=HDFS
+writer.output.format=txt
+
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+
+mr.job.max.mappers=1
+
+metrics.reporting.file.enabled=true
+metrics.log.dir=/gobblin-kafka/metrics
+metrics.reporting.file.suffix=txt
+
+bootstrap.with.offset=earliest
+
+fs.uri=hdfs://localhost:9000
+writer.fs.uri=hdfs://localhost:9000
+state.store.fs.uri=hdfs://localhost:9000
+
+mr.job.root.dir=/gobblin-kafka/working
+state.store.dir=/gobblin-kafka/state-store
+task.data.root.dir=/jobs/kafkaetl/gobblin/gobblin-kafka/task-data
+data.publisher.final.dir=/gobblintest/job-output
+```
+
+* Run `gobblin-mapreduce.sh`:
+
+`gobblin-mapreduce.sh --conf <path-to-job-config-file>`
+
+After the job finishes, the job output file will be in `/gobblintest/job-output/test` in HDFS, and the metrics will be in `/gobblin-kafka/metrics`.
+
+
+# Setting up Kafka-HDFS Ingestion Jobs
+## Job Constructs
+**Source and Extractor**
+
+Gobblin provides two abstract classes, [`KafkaSource`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaSource.java) and [`KafkaExtractor`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaExtractor.java). `KafkaSource` creates a workunit for each Kafka topic partition to be pulled, then merges and groups the workunits based on the desired number of workunits specified by property `mr.job.max.mappers` (this property is used in both standalone and MR mode). More details about how workunits are merged and grouped are available [here](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion#merging-and-grouping-workunits-in-kafkasource). `KafkaExtractor` extracts the partitions assigned to a workunit, based on the specified low watermark and high watermark.
+
+To use them in a Kafka-HDFS ingestion job, one should subclass `KafkaExtractor` and implement method `decodeRecord(MessageAndOffset)`, which takes a `MessageAndOffset` object pulled from the Kafka broker and decodes it into a desired object. One should also subclass `KafkaSource` and implement `getExtractor(WorkUnitState)` which should return an instance of the Extractor class.
+
+Gobblin currently provides two concrete implementations: [`KafkaSimpleSource`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaSimpleSource.java)/[`KafkaSimpleExtractor`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaSimpleExtractor.java), and [`KafkaAvroSource`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaAvroSource.java)/[`KafkaAvroExtractor`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/KafkaExtractor.java). 
+
+`KafkaSimpleExtractor` simply returns the payload of the `MessageAndOffset` object as a byte array. A job that uses `KafkaSimpleExtractor` may use a `Converter` to convert the byte array to whatever format is desired. For example, if the desired output format is JSON, one may implement a `ByteArrayToJsonConverter` to convert the byte array to JSON. Alternatively, one may implement a `KafkaJsonExtractor`, which extends `KafkaExtractor` and converts the `MessageAndOffset` object into a JSON object in the `decodeRecord` method. Both approaches should work equally well; a framework-independent sketch of the decoding step follows.
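+
+Neither of these classes ships with Gobblin; they are named only as illustrations. As a rough, framework-independent sketch of the decoding step itself (payload bytes to a Gson `JsonObject`), which a hypothetical `KafkaJsonExtractor.decodeRecord` or `ByteArrayToJsonConverter` could delegate to:
+
+```java
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+import com.google.gson.JsonObject;
+import com.google.gson.JsonParser;
+
+// Illustrative helper for turning a UTF-8 encoded JSON payload pulled from Kafka
+// into a JsonObject. Error handling for malformed payloads is omitted for brevity.
+public final class JsonPayloadDecoder {
+
+  private static final JsonParser PARSER = new JsonParser();
+
+  private JsonPayloadDecoder() {}
+
+  public static JsonObject decode(ByteBuffer payload) {
+    byte[] bytes = new byte[payload.remaining()];
+    payload.get(bytes);
+    return PARSER.parse(new String(bytes, StandardCharsets.UTF_8)).getAsJsonObject();
+  }
+}
+```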
+
+`KafkaAvroExtractor` decodes the payload of the `MessageAndOffset` object into an Avro [`GenericRecord`](http://avro.apache.org/docs/current/api/java/index.html?org/apache/avro/generic/GenericRecord.html) object. It requires that byte 0 of the payload be 0, bytes 1-16 of the payload be a 16-byte schema ID, and the remaining bytes be the encoded Avro record. It also requires the existence of a schema registry that returns the Avro schema given the schema ID, which is used to decode the byte array. Thus this class is mainly applicable to LinkedIn's internal Kafka clusters.
+
+**Writer and Publisher**
+
+Any desired writer and publisher can be used, e.g., one may use the [`AvroHdfsDataWriter`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/writer/AvroHdfsDataWriter.java) and the [`BaseDataPublisher`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/publisher/BaseDataPublisher.java), similar to the [Wikipedia example job](https://github.com/linkedin/gobblin/wiki/Getting%20Started). If a plain-text output file is desired, one may use [`SimpleDataWriter`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/writer/SimpleDataWriter.java).
+
+## Job Config Properties
+
+These are some of the job config properties used by `KafkaSource` and `KafkaExtractor`.
+
+| Property Name | Semantics     |
+| ------------- |-------------| 
+| `topic.whitelist` (regex)      | Kafka topics to be pulled. Default value = .* | 
+| `topic.blacklist` (regex)     | Kafka topics not to be pulled. Default value = empty | 
+| `kafka.brokers` | Comma separated Kafka brokers to ingest data from.      |  
+| `mr.job.max.mappers` | Number of tasks to launch. In MR mode, this will be the number of mappers launched. If the number of topic partitions to be pulled is larger than the number of tasks, `KafkaSource` will assign partitions to tasks in a balanced manner.      |  
+| `bootstrap.with.offset` | For new topics / partitions, this property controls whether they start at the earliest offset or the latest offset. Possible values: earliest, latest, skip. Default: latest      |
+| `reset.on.offset.out.of.range` | This property controls what to do if a partition's previously persisted offset is out of the range of the currently available offsets. Possible values: earliest (always move to earliest available offset), latest (always move to latest available offset), nearest (move to earliest if the previously persisted offset is smaller than the earliest offset, otherwise move to latest), skip (skip this partition). Default: nearest |
+| `topics.move.to.latest.offset` (no regex) | Topics in this list will always start from the latest offset (i.e., no records will be pulled). To move all topics to the latest offset, use "all". This property should rarely, if ever, be used. |
+
+It is also possible to set a time limit for each task. For example, to set the time limit to 15 minutes, set the following properties:
+
+```
+extract.limit.enabled=true
+extract.limit.type=time #(other possible values: rate, count, pool)
+extract.limit.time.limit=15
+extract.limit.time.limit.timeunit=minutes 
+```
+## Metrics and Events
+
+**Task Level Metrics**
+
+Task level metrics can be created in `Extractor`, `Converter` and `Writer` by extending [`InstrumentedExtractor`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/extractor/InstrumentedExtractor.java), [`InstrumentedConverter`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/converter/InstrumentedConverter.java) and [`InstrumentedDataWriter`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/writer/InstrumentedDataWriter.java).
+
+For example, `KafkaExtractor` extends `InstrumentedExtractor`. So you can do the following in subclasses of `KafkaExtractor`:
+
+```
+Counter decodingErrorCounter = this.getMetricContext().counter("num.of.decoding.errors");
+decodingErrorCounter.inc();
+```
+
+Besides Counter, Meter and Histogram are also supported.
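+
+By analogy (assuming `MetricContext` exposes the usual Dropwizard `meter`/`histogram` factory methods alongside the `counter` method shown above):
+
+```
+Meter recordsReadMeter = this.getMetricContext().meter("num.of.records.read");
+recordsReadMeter.mark();
+```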
+
+**Task Level Events**
+
+Task level events can be submitted by creating an [`EventSubmitter`](https://github.com/linkedin/gobblin/blob/master/gobblin-metrics/src/main/java/gobblin/metrics/event/EventSubmitter.java) instance and using `EventSubmitter.submit()` or `EventSubmitter.getTimingEvent()`.
+
+**Job Level Metrics**
+
+To create job level metrics, one may extend [`AbstractJobLauncher`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/AbstractJobLauncher.java) and create metrics there. For example:
+
+```
+Optional<JobMetrics> jobMetrics = this.jobContext.getJobMetricsOptional();
+if (!jobMetrics.isPresent()) {
+  LOG.warn("job metrics is absent");
+  return;
+}
+Counter recordsWrittenCounter = jobMetrics.get().getCounter("job.records.written");
+recordsWrittenCounter.inc(value);
+```
+
+Job level metrics are often aggregations of task level metrics, such as the `job.records.written` counter above. Since `AbstractJobLauncher` doesn't have access to task-level metrics, one should set these counters in `TaskState`s, and override `AbstractJobLauncher.postProcessTaskStates()` to aggregate them. For example, in `AvroHdfsTimePartitionedWriter.close()`, property `writer.records.written` is set for the `TaskState`. 
+
+**Job Level Events**
+
+Job level events can be created by extending `AbstractJobLauncher` and use `this.eventSubmitter.submit()` or `this.eventSubmitter.getTimingEvent()`.
+
+For more details about metrics, events, and how to report them, please see the Gobblin Metrics section.
+
+## Merging and Grouping Workunits in `KafkaSource`
+For each topic partition that should be ingested, `KafkaSource` first retrieves the last offset pulled by the previous run, which should be the first offset of the current run. It also retrieves the earliest and latest offsets currently available from the Kafka cluster and verifies that the first offset is between the earliest and the latest offsets. The latest offset is the last offset to be pulled by the current workunit. Since new records may be constantly published to Kafka and old records are deleted based on retention policies, the earliest and latest offsets of a partition may change constantly.
+
+For each partition, after the first and last offsets are determined, a workunit is created. If the number of Kafka partitions exceeds the desired number of workunits specified by property `mr.job.max.mappers`, `KafkaSource` will merge and group them into `n` [`MultiWorkUnit`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/workunit/MultiWorkUnit.java)s where `n=mr.job.max.mappers`. This is done using [`KafkaWorkUnitPacker`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaWorkUnitPacker.java), which has two implementations: [`KafkaSingleLevelWorkUnitPacker`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaSingleLevelWorkUnitPacker.java) and [`KafkaBiLevelWorkUnitPacker`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaBiLevelWorkUnitPacker.java). The packer packs workunits based on the estimated size of each workunit, which is obtained from [`KafkaWorkUnitSizeEstimator`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaWorkUnitSizeEstimator.java), which also has two implementations, [`KafkaAvgRecordSizeBasedWorkUnitSizeEstimator`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaAvgRecordSizeBasedWorkUnitSizeEstimator.java) and [`KafkaAvgRecordTimeBasedWorkUnitSizeEstimator`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/workunit/packer/KafkaAvgRecordTimeBasedWorkUnitSizeEstimator.java).
+
+### Single-Level Packing
+
+The single-level packer uses a worst-fit-decreasing approach for assigning workunits to mappers: each workunit goes to the mapper that currently has the lightest load. This approach balances the mappers well. However, multiple partitions of the same topic are usually assigned to different mappers. This may cause two issues: (1) many small output files: if multiple partitions of a topic are assigned to different mappers, they cannot share output files. (2) task overhead: when multiple partitions of a topic are assigned to different mappers, a task is created for each partition, which may lead to a large number of tasks and large overhead.
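+
+The worst-fit-decreasing idea itself is simple. Below is a minimal, self-contained sketch of the assignment step (not the actual `KafkaSingleLevelWorkUnitPacker` code), packing estimated workunit sizes onto `n` mappers:
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.PriorityQueue;
+
+public class WorstFitDecreasingSketch {
+
+  /** Assigns workunit sizes to n mappers, always picking the currently lightest mapper. */
+  public static List<List<Long>> pack(List<Long> workunitSizes, int n) {
+    // Sort sizes in decreasing order.
+    List<Long> sorted = new ArrayList<>(workunitSizes);
+    sorted.sort(Collections.reverseOrder());
+
+    // Min-heap of mappers keyed by current total assigned size: {load, mapperIndex}.
+    PriorityQueue<long[]> mappers = new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
+    List<List<Long>> assignments = new ArrayList<>();
+    for (int i = 0; i < n; i++) {
+      mappers.add(new long[] {0L, i});
+      assignments.add(new ArrayList<Long>());
+    }
+
+    // Each workunit goes to the mapper with the lightest load so far.
+    for (long size : sorted) {
+      long[] lightest = mappers.poll();
+      assignments.get((int) lightest[1]).add(size);
+      lightest[0] += size;
+      mappers.add(lightest);
+    }
+    return assignments;
+  }
+
+  public static void main(String[] args) {
+    // Example: 6 workunits packed onto 3 mappers.
+    System.out.println(pack(Arrays.asList(90L, 70L, 50L, 30L, 20L, 10L), 3));
+  }
+}
+```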
+
+### Bi-Level Packing
+
+The bi-level packer packs workunits in two steps.
+
+In the first step, all workunits are grouped into approximately `3n` groups, each of which contains partitions of the same topic. The max group size is set as
+
+`maxGroupSize = totalWorkunitSize/3n`
+
+The best-fit-decreasing algorithm is run on all partitions of each topic. If an individual workunit’s size exceeds `maxGroupSize`, it is put in a separate group. For each group, a new workunit is created which will be responsible for extracting all partitions in the group.
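+
+As a purely illustrative example, with `n = 10` desired mappers and a total estimated workunit size of 3,000, the workunits are first packed into roughly 30 topic-level groups, and no single group may exceed `3000 / 30 = 100`.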
+
+The reason behind `3n` is that if this number is too small (i.e., too close to `n`), it is difficult for the second level to pack these groups into `n` balanced multiworkunits; if this number is too big, the average group size will be small, which doesn't help group partitions of the same topic together. `3n` was selected empirically.
+
+The second step uses the same worst-fit-decreasing method as the first-level packer.
+
+This approach reduces the number of small files and the number of tasks, but it may have more mapper skew for two reasons: (1) in the worst-fit-decreasing approach, the fewer items there are to pack, the more skew there will be; (2) when multiple partitions of a topic are assigned to the same mapper, if we underestimate the size of this topic, this mapper may take a much longer time than other mappers and the entire MR job has to wait for it. This, however, can be mitigated by setting a time limit for each task, as explained above.
+
+### Average Record Size-Based Workunit Size Estimator
+
+This size estimator uses the average record size of each partition to estimate the sizes of workunits. When using this size estimator, each job run will record the average record size of each partition it pulled. In the next run, for each partition the average record size pulled in the previous run is considered the average record size
+to be pulled in this run.
+
+If a partition was not pulled in a run, a default value of 1024 will be used in the next run.
+
+### Average Record Time-Based Workunit Size Estimator
+
+This size estimator uses the average time to pull a record in each run to estimate the sizes of the workunits in the next run.
+
+When using this size estimator, each job run will record the average time per record of each partition. In the next run, the estimated average time per record for each topic is the geometric mean of the average times per record of all its partitions. For example, if a topic has two partitions whose average times per record in the previous run are 2 and 8, the next run will use 4 as the estimated average time per record.
+
+If a topic is not pulled in a run, its estimated average time per record is the geometric mean of the estimated average time per record of all topics that are pulled in this run. If no topic was pulled in this run, a default value of 1.0 is used.
+
+The time-based estimator is more accurate than the size-based estimator when the time to pull a record is not proportional to the size of the record. However, the time-based estimator may lose accuracy when there are fluctuations in the Hadoop cluster that cause the average time for a partition to vary between runs.
\ No newline at end of file
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index d645695..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,202 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "[]"
-      replaced with your own identifying information. (Don't include
-      the brackets!)  The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-   Copyright [yyyy] [name of copyright owner]
-
-   Licensed under the Apache License, Version 2.0 (the "License");
-   you may not use this file except in compliance with the License.
-   You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
diff --git a/Metrics-for-Gobblin-ETL.md b/Metrics-for-Gobblin-ETL.md
new file mode 100644
index 0000000..86b4dbe
--- /dev/null
+++ b/Metrics-for-Gobblin-ETL.md
@@ -0,0 +1,144 @@
+Gobblin ETL comes equipped with instrumentation using [Gobblin Metrics](Gobblin Metrics), as well as end points to easily extend this instrumentation.
+
+Configuring Metrics and Event emission
+======================================
+
+The following configurations are used for metrics and event emission:
+
+|Configuration Key                | Definition           | Default        |
+|---------------------------------|----------------------|----------------|
+|metrics.enabled                  | Whether metrics are enabled. If false, will not report metrics. | true |
+|metrics.report.interval          | Metrics report interval in milliseconds.    | 30000 |
+|metrics.reporting.file.enabled   | Whether metrics will be reported to a file. | false |
+|metrics.log.dir                  | If file enabled, the directory where metrics will be written. If missing, will not report to file. | N/A |
+|metrics.reporting.kafka.enabled  | Whether metrics will be reported to Kafka. | false |
+|metrics.reporting.kafka.brokers  | Kafka brokers for Kafka metrics emission.  | N/A   |
+|metrics.reporting.kafka.topic.metrics | Kafka topic where metrics (but not events) will be reported. | N/A   |
+|metrics.reporting.kafka.topic.events  | Kafka topic where events (but not metrics) will be reported. | N/A   |
+|metrics.reporting.kafka.format   | Format of metrics / events emitted to Kafka. (Options: json, avro) | json |
+|metrics.reporting.kafka.avro.use.schema.registry | Whether to use a schema registry for Kafka emitting. | false |
+|kafka.schema.registry.url        | If using schema registry, the url of the schema registry. | N/A   |
+|metrics.reporting.jmx.enabled    | Whether to report metrics to JMX.      | false  |
+|metrics.reporting.custom.builders | Comma-separated list of classes for custom metrics reporters. (See [Custom Reporters](https://github.com/linkedin/gobblin/wiki/Metrics-for-Gobblin-ETL#custom-reporters)) |    |
+
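+As an example, a job that reports its metrics and events to Kafka every minute might add the following properties to its configuration (the broker and topic names below are placeholders):
+
+```
+metrics.enabled=true
+metrics.report.interval=60000
+metrics.reporting.kafka.enabled=true
+metrics.reporting.kafka.brokers=kafka-broker.example.com:9092
+metrics.reporting.kafka.topic.metrics=GobblinMetrics
+metrics.reporting.kafka.topic.events=GobblinEvents
+metrics.reporting.kafka.format=json
+```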
+ 
+Operational Metrics
+===================
+
+Each construct in a Gobblin ETL run computes metrics regarding its performance / progress. Each metric is tagged by default with the following tags:
+* jobName: Gobblin generated name for the job.
+* jobId: Gobblin generated id for the job.
+* clusterIdentifier: string identifying the cluster / host where the job was run. Obtained from the resource manager, job tracker, or the name of the host.
+* taskId: Gobblin generated id for the task that generated the metric.
+* construct: construct type that generated the metric (e.g. extractor, converter, etc.)
+* class: specific class of the construct that generated the metric.
+* finalMetricReport: metrics are emitted regularly. Sometimes it is useful to select only the last report from each context. To aid with this, some reporters will add this tag with value "true" only to the final report from a metric context.
+
+This is the list of operational metrics implemented by default, grouped by construct.
+
+Extractor Metrics
+-----------------
+* gobblin.extractor.records.read: meter for records read.
+* gobblin.extractor.records.failed: meter for records failed to read.
+* gobblin.extractor.extract.time: timer for reading of records.
+
+Converter Metrics
+-----------------
+* gobblin.converter.records.in: meter for records going into the converter.
+* gobblin.converter.records.out: meter for records outputted by the converter.
+* gobblin.converter.records.failed: meter for records that failed to be converted.
+* gobblin.converter.convert.time: timer for conversion time of each record.
+
+Fork Operator Metrics
+---------------------
+* gobblin.fork.operator.records.in: meter for records going into the fork operator.
+* gobblin.fork.operator.forks.out: meter for records going out of the fork operator (each record is counted once for each fork it is emitted to).
+* gobblin.fork.operator.fork.time: timer for forking of each record.
+
+Row Level Policy Metrics
+------------------------
+* gobblin.qualitychecker.records.in: meter for records going into the row level policy.
+* gobblin.qualitychecker.records.passed: meter for records passing the row level policy check.
+* gobblin.qualitychecker.records.failed: meter for records failing the row level policy check.
+* gobblin.qualitychecker.check.time: timer for row level policy checking of each record.
+
+Data Writer Metrics
+-------------------
+* gobblin.writer.records.in: meter for records requested to be written.
+* gobblin.writer.records.written: meter for records actually written.
+* gobblin.writer.records.failed: meter for records failed to be written.
+* gobblin.writer.write.time: timer for writing each record.
+
+Runtime Events
+==============
+
+The Gobblin ETL runtime emits events marking its progress. All events have the following metadata:
+* jobName: Gobblin generated name for the job.
+* jobId: Gobblin generated id for the job.
+* clusterIdentifier: string identifying the cluster / host where the job was run. Obtained from the resource manager, job tracker, or the name of the host.
+* taskId: Gobblin generated id for the task that generated the metric (if applicable).
+
+This is the list of events that are emitted by the Gobblin runtime:
+
+Job Progression Events
+----------------------
+
+* LockInUse: emitted if a job fails because it fails to get a lock.
+* WorkUnitsMissing: emitted if a job exits because source failed to get work units.
+* WorkUnitsEmpty: emitted if a job exits because there were no work units to process.
+* TasksSubmitted: emitted when tasks are submitted for execution. Metadata: tasksCount(number of tasks submitted).
+* TaskFailed: emitted when a task fails. Metadata: taskId(id of the failed task).
+* Job_Successful: emitted at the end of a successful job.
+* Job_Failed: emitted at the end of a failed job.
+
+Job Timing Events
+-----------------
+These events give information on timing on certain parts of the execution. Each timing event contains the following metadata:
+* startTime: timestamp when the timed processing started.
+* endTime: timestamp when the timed processing finished.
+* durationMillis: duration in milliseconds of the timed processing.
+* eventType: always "timingEvent" for timing events.
+
+The following timing events are emitted:
+* FullJobExecutionTimer: times the entire job execution.
+* WorkUnitsCreationTimer: times the creation of work units.
+* WorkUnitsPreparationTime: times the preparation of work units.
+* JobRunTimer: times the actual running of job (i.e. processing of all work units).
+* JobCommitTimer: times the committing of work units.
+* JobCleanupTimer: times the job cleanup.
+* JobLocalSetupTimer: times the setup of a local job.
+* JobMrStagingDataCleanTimer: times the deletion of staging directories from previous work units (MR mode).
+* JobMrDistributedCacheSetupTimer: times the setting up of distributed cache (MR mode).
+* JobMrSetupTimer: times the setup of the MR job (MR mode).
+* JobMrRunTimer: times the execution of the MR job (MR mode).
+
+Customizing Instrumentation
+===========================
+
+Custom constructs
+-----------------
+When using a custom construct (for example a custom extractor for your data source), you will get the above mentioned instrumentation for free. However, you may want to implement additional metrics. To aid with this, instead of extending the usual class Extractor, you can extend the class `gobblin.instrumented.extractor.InstrumentedExtractor`. Similarly, for each construct there is an instrumented version that allows extension of the default metrics ([InstrumentedExtractor](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/extractor/InstrumentedExtractor.java), [InstrumentedConverter](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/converter/InstrumentedConverter.java), [InstrumentedForkOperator](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/fork/InstrumentedForkOperator.java), [InstrumentedRowLevelPolicy](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/qualitychecker/InstrumentedRowLevelPolicy.java), and [InstrumentedDataWriter](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/writer/InstrumentedDataWriter.java)).
+
+All of the instrumented constructs have Javadoc providing additional information. In general, when extending an instrumented construct, you will have to implement a different method. For example, when extending an InstrumentedExtractor, instead of implementing `readRecord`, you will implement `readRecordImpl`. To make this clearer for the user, attempting to override `readRecord` will result in a compilation error, and the Javadoc of each method specifies the method that should be implemented.
+
+### Instrumentable Interface
+
+Instrumented constructs extend the interface [Instrumentable](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/instrumented/Instrumentable.java). It contains the following methods:
+* `getMetricContext()`: get the default metric context generated for that instance of the construct, with all the appropriate tags. Use this metric context to create any additional metrics.
+* `isInstrumentationEnabled()`: returns true if instrumentation is enabled.
+* `switchMetricContext(List<Tag<?>>)`: switches the default metric context returned by `getMetricContext()` to a metric context containing the supplied tags. All default metrics will be reported to the new metric context. This method is useful when the state of a construct changes during the execution, and the user desires to reflect that in the emitted tags (for example, a Kafka extractor can handle multiple topics in the same extractor, and we want to reflect this in the metrics).
+* `switchMetricContext(MetricContext)`: similar to the above method, but uses the supplied metric context instead of generating a new metric context. It is the responsibility of the caller to ensure the new metric context has the correct tags and parent.
+
+The following method can be re-implemented by the user:
+* `generateTags(State)`: this method should return a list of tags to use for metric contexts created for this construct. If overriding this method, it is always a good idea to call `super()` and only append tags to this list.
+
+### Callback Methods
+
+Instrumented constructs have a set of callback methods that are called at different points in the processing of each record, and which can be used to update metrics. For example, the `InstrumentedExtractor` has the callbacks `beforeRead()`, `afterRead(D, long)`, and `onException(Exception)`. The Javadoc for the instrumented constructs has further descriptions of each callback. Users should always call `super()` when overriding these callbacks, as the default metrics depend on that.
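+
+The sketch below ties these endpoints together: a hypothetical extractor that registers one extra meter through `getMetricContext()` and updates it from the `afterRead` callback. The class, record source, and metric name are made up for illustration; the base-class constructor, the Dropwizard-style `meter(String)` method on the metric context, and the remaining `Extractor` methods are assumptions based on the description above.
+
+```java
+import java.util.Iterator;
+import java.util.List;
+
+import com.codahale.metrics.Meter;
+
+import gobblin.configuration.WorkUnitState;
+import gobblin.instrumented.extractor.InstrumentedExtractor;
+
+// Hypothetical extractor that reads pre-loaded lines and counts empty ones.
+public class LineCountingExtractor extends InstrumentedExtractor<String, String> {
+
+  private final Iterator<String> lines;  // stand-in for a real data source
+  private Meter emptyLines;
+
+  public LineCountingExtractor(WorkUnitState workUnitState, List<String> input) {
+    super(workUnitState);  // assumed constructor of the instrumented base class
+    this.lines = input.iterator();
+    if (isInstrumentationEnabled()) {
+      // Register an additional metric in this construct's default metric context.
+      this.emptyLines = getMetricContext().meter("my.extractor.lines.empty");
+    }
+  }
+
+  @Override
+  public String readRecordImpl(String reuse) {
+    // Implemented instead of readRecord, as described above.
+    return this.lines.hasNext() ? this.lines.next() : null;
+  }
+
+  @Override
+  public void afterRead(String record, long startTimeNanos) {
+    super.afterRead(record, startTimeNanos);  // keep the default metrics working
+    if (record != null && record.isEmpty() && this.emptyLines != null) {
+      this.emptyLines.mark();
+    }
+  }
+
+  @Override
+  public String getSchema() {
+    return "string";  // plain-text "schema" for this toy example
+  }
+
+  @Override
+  public long getExpectedRecordCount() {
+    return -1;  // unknown
+  }
+
+  @Override
+  public long getHighWatermark() {
+    return -1;  // watermarks are not used in this sketch
+  }
+}
+```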
+
+Custom Reporters
+----------------
+
+Besides the reporters implemented by default (file, Kafka, and JMX), users can add custom reporters to the classpath and instruct Gobblin to use these reporters. To do this, users should implement the interface [CustomReporterFactory](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/metrics/CustomReporterFactory.java), and specify a comma-separated list of CustomReporterFactory classes in the configuration key `metrics.reporting.custom.builders`.
+
+Gobblin will automatically search for these CustomReporterFactory implementations, instantiate each one with a parameter-less constructor, and then call the method `newScheduledReporter(MetricContext, Properties)`, where the properties contain all of the input configurations supplied to Gobblin. Gobblin will then manage this `ScheduledReporter`.
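+
+As a sketch of what such a factory might look like, assuming only the contract described above and that a `MetricContext` can be handed to a Dropwizard reporter builder as a metric registry (the class and logger names are hypothetical):
+
+```java
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+import org.slf4j.LoggerFactory;
+
+import com.codahale.metrics.ScheduledReporter;
+import com.codahale.metrics.Slf4jReporter;
+
+import gobblin.metrics.CustomReporterFactory;
+import gobblin.metrics.MetricContext;
+
+// Hypothetical factory that reports all metrics of the supplied context to an SLF4J logger.
+public class Slf4jReporterFactory implements CustomReporterFactory {
+
+  // Gobblin instantiates factories through a parameter-less constructor.
+  public Slf4jReporterFactory() {}
+
+  @Override
+  public ScheduledReporter newScheduledReporter(MetricContext context, Properties properties) {
+    // The logger name could also be read from the supplied job properties.
+    return Slf4jReporter.forRegistry(context)
+        .outputTo(LoggerFactory.getLogger("gobblin.metrics"))
+        .convertRatesTo(TimeUnit.SECONDS)
+        .convertDurationsTo(TimeUnit.MILLISECONDS)
+        .build();
+  }
+}
+```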
\ No newline at end of file
diff --git a/Monitoring-Design.md b/Monitoring-Design.md
new file mode 100644
index 0000000..2a89b08
--- /dev/null
+++ b/Monitoring-Design.md
@@ -0,0 +1,4 @@
+Metrics Collection Basics
+-----------------
+
+Please refer to [Gobblin Metrics Architecture](https://github.com/linkedin/gobblin/wiki/Gobblin%20Metrics%20Architecture) section.
\ No newline at end of file
diff --git a/Monitoring.md b/Monitoring.md
new file mode 100644
index 0000000..5630d06
--- /dev/null
+++ b/Monitoring.md
@@ -0,0 +1,77 @@
+Overview
+--------------------
+Because Gobblin is a framework for ingesting potentially huge volumes of data from many different sources, it is critical to monitor the health and status of the system and of job executions. Gobblin employs a variety of approaches, introduced below, for this purpose. All of the approaches are optional and can be turned on and off in different combinations through the framework and job configurations.
+
+Metrics Collecting and Reporting
+--------------------
+
+## Metrics Reporting
+
+Out of the box, Gobblin reports metrics through:
+
+* _JMX_ : used in the standalone deployment. Metrics reported to JMX can be checked using tools such as [VisualVM](http://visualvm.java.net/) or JConsole. 
+* _Metric log files_: Files are stored in a root directory defined by the property `metrics.log.dir`. Each Gobblin job has its own subdirectory under the root directory and each run of the job has its own metric log file named after the job ID as `${job_id}.metrics.log`.
+* _Hadoop counters_ : used for M/R deployments. Gobblin-specific metrics are reported in the "JOB" or "TASK" groups for job-level and task-level metrics, respectively. By default, task-level metrics are not reported through Hadoop counters as doing so may cause the number of Hadoop counters to go beyond the system-wide limit. However, users can choose to turn on reporting task-level metrics as Hadoop counters by setting `mr.include.task.counters=true`. 
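+
+For example, to enable file-based reporting and write metric log files under a chosen root directory (and, in MR mode, also report task-level metrics as Hadoop counters), the configuration might include the following; the directory path is a placeholder:
+
+```
+metrics.enabled=true
+metrics.reporting.file.enabled=true
+metrics.log.dir=/path/to/gobblin-metrics-logs
+mr.include.task.counters=true
+```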
+
+
+## Metrics collection
+### JVM Metrics
+The standalone deployment of Gobblin runs in a single JVM so it's important to monitor the health of the JVM, through a set of pre-defined JVM metrics in the following four categories. 
+
+* `jvm.gc`: this covers metrics related to garbage collection, e.g., counts and time spent on garbage collection.
+* `jvm.memory`: this covers metrics related to memory usage, e.g., detailed heap usage.  
+* `jvm.threads`: this covers metrics related to thread states, e.g., thread count and thread deadlocks.
+* `jvm.fileDescriptorRatio`: this measures the ratio of open file descriptors.
+
+All JVM metrics are reported via JMX and can be checked using tools such as [VisualVM](http://visualvm.java.net/) or JConsole. 
+
+### Pre-defined Job Execution Metrics
+Internally, Gobblin pre-defines a minimum set of metrics listed below in two metric groups: `JOB` and `TASK` for job-level metrics and task-level metrics, respectively. Those metrics are useful in keeping track of the progress and performance of job executions.
+
+* `${metric_group}.${id}.records`: this metric keeps track of the total number of data records extracted by the job or task depending on the `${metric_group}`. The `${id}` is either a job ID or a task ID depending on the `${metric_group}`. 
+* `${metric_group}.${id}.recordsPerSecond`: this metric keeps track of the rate of data extraction as data records extracted per second by the job or task depending on the `${metric_group}`.
+* `${metric_group}.${id}.bytes`: this metric keeps track of the total number of bytes extracted by the job or task depending on the `${metric_group}`.
+* `${metric_group}.${id}.bytesPerSecond`: this metric keeps track of the rate of data extraction as bytes extracted per second by the job or task depending on the `${metric_group}`.
+
+Among the above metrics, `${metric_group}.${id}.records` and `${metric_group}.${id}.bytes` are reported as Hadoop MapReduce counters for Gobblin jobs running on Hadoop.
+
+Job Execution History Store
+--------------------
+Gobblin also supports writing job execution information to a job execution history store backed by a database of choice. Gobblin uses MySQL by default and it ships with the SQL [DDLs](https://github.com/linkedin/gobblin/wiki/files/gobblin_job_history_store_ddlwq.sql) of the relevant MySQL tables, although  it still allows users to choose which database to use as long as the schema of the tables is compatible. Users can use the properties `job.history.store.url` and `job.history.store.jdbc.driver` to specify the database URL and the JDBC driver to work with the database of choice. The user name and password used to access the database can be specified using the properties `job.history.store.user` and `job.history.store.password`. An example configuration is shown below:
+
+```
+job.history.store.url=jdbc:mysql://localhost/gobblin
+job.history.store.jdbc.driver=com.mysql.jdbc.Driver
+job.history.store.user=gobblin
+job.history.store.password=gobblin
+``` 
+
+Email Notifications 
+--------------------
+In addition to writing job execution information to the job execution history store, Gobblin also supports sending email notifications about job status. Job status notifications fall into two categories: alerts in case of job failures and normal notifications in case of successful job completions. Users can choose to enable or disable both categories using the properties `email.alert.enabled` and `email.notification.enabled`. 
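+
+For example, to receive both failure alerts and success notifications, the configuration might include the two properties below (depending on the mail setup, additional email-related properties may also be required; they are not shown here):
+
+```
+email.alert.enabled=true
+email.notification.enabled=true
+```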
+
+The main content of an email alert or notification is a job status report in Json format. Below is an example job status report:
+
+```
+{
+	"job name": "Gobblin_Demo_Job",
+	"job id": "job_Gobblin_Demo_Job_1417487480842",
+	"job state": "COMMITTED",
+	"start time": 1417487480874,
+	"end time": 1417490858913,
+	"duration": 3378039,
+	"tasks": 1,
+	"completed tasks": 1,
+	"task states": [
+		{
+			"task id": "task_Gobblin_Demo_Job_1417487480842_0",
+			"task state": "COMMITTED",
+			"start time": 1417490795903,
+			"end time": 1417490858908,
+			"duration": 63005,
+			"high watermark": -1,
+			"exception": ""
+		}
+	]
+}
+``` 
\ No newline at end of file
diff --git a/NOTICE b/NOTICE
deleted file mode 100644
index ef43b02..0000000
--- a/NOTICE
+++ /dev/null
@@ -1,69 +0,0 @@
-(c) 2014 LinkedIn Corp. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-this file except in compliance with the License. You may obtain a copy of the
-License at  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed
-under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-CONDITIONS OF ANY KIND, either express or implied.
-
-
-This product includes software developed by The Apache Software
-Foundation (http://www.apache.org/).
-
-This product includes/uses Codehale Metrics (https://github.com/dropwizard/metrics)
-Copyright (C) 2010 Code Hale, Yammer.com
-License: Apache 2.0
-
-This product includes/uses Gson (https://code.google.com/p/google-gson/)
-Copyright (C) 2006 Google Inc.
-License: Apache 2.0
-
-This product includes/uses Guava (http://code.google.com/p/guava-libraries/)
-Copyright (C) 2006 Google Inc.
-License: Apache 2.0
-
-This product includes/uses InfluxDB (https://github.com/influxdb/)
-Copyright (c) 2013 Errplane Inc.
-License: MIT
-
-This product includes/uses Jackson (http://jackson.codehaus.org/)
-Copyright (c) 2007- Tatu Saloranta, tatu.saloranta@iki.fi
-License: Apache 2.0
-
-This product includes/uses Jcraft (www.jcraft.com)
-Copyright (c) 2002 Atsuhiko Yamanaka, JCraft, Inc.
-License: BSD
-
-This product includes/uses Joda (http://joda-time.sourceforge.net/)
-Copyright 2001-2006 Stephen Colebourne
-License: Apache 2.0
-
-This product includes/uses Quartz Scheduler (http://quartz-scheduler.org/)
-Copyright (c) 2008 Terracotta, Inc.
-License: Apache 2.0
-
-This product includes/uses SLF4J (http://slf4j.org)
-Copyright (c) 2004 QOS.ch
-License: MIT
-
-This product includes/uses TestNG (http://testng.org/)
-Copyright (c) 2004 Cedric Beust
-License: Apache 2.0
-
-This product includes/uses Mockito (http://mockito.org/)
-Copyright (c) 2007 Mockito contributors
-License: MIT
-
-This product includes/uses DataNucleus (http://datanucleus.org/)
-Copyright (c) 2004 DataNucleus
-License: Apache 2.0
-
-This product includes/uses Force.com Web Service Connector (https://github.com/forcedotcom/wsc/blob/master/LICENSE.md)
-Force.com Web Service Connector (WSC) is Copyright (c) 2005-2013, salesforce.com, inc. All rights reserved.
-
-This product includes/uses Xml Pull Parser (http://www.extreme.indiana.edu/license.txt)
-Indiana University Extreme! Lab Software License, Version 1.2
-Copyright (C) 2004 The Trustees of Indiana University.
-All rights reserved.
diff --git a/gobblin-docs/project/News.md b/News.md
similarity index 100%
rename from gobblin-docs/project/News.md
rename to News.md
diff --git a/Partitioned-Writers.md b/Partitioned-Writers.md
new file mode 100644
index 0000000..03c7642
--- /dev/null
+++ b/Partitioned-Writers.md
@@ -0,0 +1,69 @@
+Gobblin allows partitioning output data using a writer partitioner. This makes it possible, for example, to write timestamped records to different files depending on the timestamp of each record.
+
+To partition output records, two things are needed:
+* Set `writer.builder.class` to a class that implements `PartitionAwareDataWriterBuilder`.
+* Set `writer.partitioner.class` to the class of the desired partitioner, which must be a subclass of `WriterPartitioner`. The partitioner will get all Gobblin configuration options, so some partitioners may require additional configurations.
+
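+As an illustration, a job that uses the partition-aware Avro writer builder and the Wikipedia example partitioner (both listed later on this page) would set:
+
+```
+writer.builder.class=gobblin.writer.AvroDataWriterBuilder
+writer.partitioner.class=gobblin.example.wikipedia.WikipediaPartitioner
+```
+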
+If `writer.partitioner.class` is set but `writer.builder.class` is not a `PartitionAwareDataWriterBuilder`, Gobblin will throw an error. If `writer.builder.class` is a `PartitionAwareDataWriterBuilder` but no partitioner is set, Gobblin will still attempt to create the writer with no partition; however, the writer may not support unpartitioned data, in which case it will throw an error.
+
+`WriterPartitioner`s compute a partition key for each record. Some `PartitionAwareDataWriterBuilder`s are unable to handle certain partition keys (for example, a writer that can only partition by date would expect a partition schema that only contains date information). If the writer cannot handle the partition key, Gobblin will throw an error. For ease of use, the Javadoc of each partitioner should always document the schema it emits, and the Javadoc of each writer should document which schemas it accepts.
+
+Existing Partition Aware Writers
+--------------------------------
+* `gobblin.writer.AvroDataWriterBuilder`: If a partition is present, creates a directory structure based on the partition. For example, if the partition is `{name="foo", type="bar"}`, the record will be written to a file in the directory `/path/to/data/name=foo/type=bar/file.avro`.  
+
+Existing Partitioners
+---------------------
+* `gobblin.example.wikipedia.WikipediaPartitioner`: Sample partitioner for the Wikipedia example. Partitions records by article title.
+
+Design
+------
+![Partitioned Writer Logic](https://raw.githubusercontent.com/wiki/linkedin/gobblin/images/Gobblin-Partitioned-Writer.png)
+
+Gobblin always instantiates a `PartitionedDataWriter` for each fork. On construction, the partitioned writer:
+ 1. checks whether a partitioner is present in the configuration. If no partitioner is present, the `PartitionedDataWriter` is simply a thin wrapper around a normal writer.
+ 2. If a partitioner is present, checks whether the class configured at `writer.builder.class` is an instance of `PartitionAwareDataWriterBuilder`, throwing an error if it is not.
+ 3. Instantiates the partitioner, runs `partitionSchema()`, and then checks whether the partition-aware writer builder accepts that schema using `validatePartitionSchema`. If this returns false, Gobblin will throw an error.
+
+Every time the partitioned writer gets a record, it uses the partitioner to get a partition key for that record. The partitioned writer keeps an internal map from partition key to instances of writers for each partition. If a writer is already created for this key, it will call write on that writer for the new record. If the writer is not present, the partitioned writer will instantiate a new writer with the computed partition, and then pass in the record.
+
+`WriterPartitioner` partitions records by returning a partition key for each record, which is of type `GenericRecord`. Each `WriterPartitioner` emits keys with a particular `Schema`, which is available through the method `WriterPartitioner#partitionSchema()`. Implementations of `PartitionAwareDataWriterBuilder` must check the partition schema to decide whether they can understand and correctly handle that schema when the method `PartitionAwareDataWriterBuilder#validatePartitionSchema` is called (for example, a writer that can only partition by date would expect a partition schema that only contains date information). If the writer rejects the partition schema, Gobblin will throw an error before writing anything.
+
+Implementing a partitioner
+--------------------------
+
+The interface for a partitioner is
+
+```java
+/**
+ * Partitions records in the writer phase.
+ */
+public interface WriterPartitioner<D> {
+  /**
+   * @return The schema that {@link GenericRecord} returned by {@link #partitionForRecord} will have.
+   */
+  public Schema partitionSchema();
+
+  /**
+   * Returns the partition that the input record belongs to. If
+   * partitionForRecord(record1).equals(partitionForRecord(record2)), then record1 and record2
+   * belong to the same partition.
+   * @param record input to compute partition for.
+   * @return {@link GenericRecord} representing partition record belongs to.
+   */
+  public GenericRecord partitionForRecord(D record);
+}
+```
+
+For an example of a partitioner implementation see `gobblin.example.wikipedia.WikipediaPartitioner`.
+
+Each class that implements `WriterPartitioner` is required to have a public constructor with signature `(State state, int numBranches, int branchId)`.
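+
+For illustration, below is a sketch of a partitioner that treats each record as an epoch-millisecond timestamp and partitions by calendar day. The class and record type are hypothetical, and the package of `WriterPartitioner` in the import is an assumption.
+
+```java
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+import org.apache.avro.Schema;
+import org.apache.avro.SchemaBuilder;
+import org.apache.avro.generic.GenericData;
+import org.apache.avro.generic.GenericRecord;
+
+import gobblin.configuration.State;
+import gobblin.writer.partitioner.WriterPartitioner;  // package name is an assumption
+
+// Partitions records (epoch-millisecond timestamps) into one partition per calendar day.
+public class DailyTimestampPartitioner implements WriterPartitioner<Long> {
+
+  private static final Schema SCHEMA = SchemaBuilder.record("DailyPartition")
+      .namespace("example").fields().requiredString("date").endRecord();
+
+  private final SimpleDateFormat dayFormat = new SimpleDateFormat("yyyy-MM-dd");
+
+  // Required constructor signature for partitioners, as noted above.
+  public DailyTimestampPartitioner(State state, int numBranches, int branchId) {}
+
+  @Override
+  public Schema partitionSchema() {
+    return SCHEMA;
+  }
+
+  @Override
+  public GenericRecord partitionForRecord(Long record) {
+    GenericRecord partition = new GenericData.Record(SCHEMA);
+    partition.put("date", this.dayFormat.format(new Date(record)));
+    return partition;
+  }
+}
+```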
+
+Implementing a Partition Aware Writer Builder
+---------------------------------------------
+
+This is very similar to a regular `DataWriterBuilder`, with two differences:
+* You must implement the method `validatePartitionSchema(Schema)`, which should return false unless the builder can handle that schema.
+* The field `partition` is available, which is a `GenericRecord` that contains the partition key for the built writer. For any two different keys, Gobblin may create a writer for each key, so it is important that writers for different keys do not collide (e.g. do not try to use the same path).
+
+For an example of a simple `PartitionAwareWriterBuilder` see `gobblin.writer.AvroDataWriterBuilder`.
\ No newline at end of file
diff --git a/Posts.md b/Posts.md
new file mode 100644
index 0000000..b739c26
--- /dev/null
+++ b/Posts.md
@@ -0,0 +1 @@
+* [Gobblin Metrics: next generation instrumentation for applications](https://github.com/linkedin/gobblin/wiki/Gobblin-Metrics:-next-generation-instrumentation-for-applications)
\ No newline at end of file
diff --git a/Publishing-Data-to-S3.md b/Publishing-Data-to-S3.md
new file mode 100644
index 0000000..8e0bfeb
--- /dev/null
+++ b/Publishing-Data-to-S3.md
@@ -0,0 +1,160 @@
+Table of Contents
+--------------------
+
+- [Introduction](#introduction)
+- [Hadoop and S3](#hadoop-and-s3)
+  - [The `s3a` File System](#the-s3a-file-system)
+  - [The `s3` File System](#the-s3-file-system)
+- [Getting Gobblin to Publish to S3](#getting-gobblin-to-publish-to-s3)
+  - [Signing Up For AWS](#signing-up-for-aws)
+  - [Setting Up EC2](#setting-up-ec2)
+    - [Launching an EC2 Instance](#launching-an-ec2-instance)
+    - [EC2 Package Installations](#ec2-package-installations)
+      - [Installing Java](#installing-java)
+  - [Setting Up S3](#setting-up-s3)
+  - [Setting Up Gobblin on EC2](#setting-up-gobblin-on-ec2)
+  - [Configuring Gobblin on EC2](#configuring-gobblin-on-ec2)
+  - [Launching Gobblin on EC2](#launching-gobblin-on-ec2)
+- [Writing to S3 Outside EC2](#writing-to-s3-outside-ec2)
+
+# Introduction
+
+While Gobblin is not tied to any specific cloud provider, [Amazon Web Services](https://aws.amazon.com/) is a popular choice. This document will outline how Gobblin can publish data to [S3](https://aws.amazon.com/s3/). Specifically, it will provide a step by step guide to help setup Gobblin on Amazon [EC2](https://aws.amazon.com/ec2/), run Gobblin on EC2, and publish data from EC2 to S3.
+
+It is recommended to configure Gobblin to first write data to [EBS](https://aws.amazon.com/ebs/), and then publish the data to S3. This is the recommended approach because there are a few caveats when working with S3. See the [Hadoop and S3](https://github.com/linkedin/gobblin/wiki/Publishing-Data-to-S3#hadoop-and-s3) section for more details.
+
+This document will also provide a step by step guide for launching and configuring an EC2 instance and creating an S3 bucket. However, it is by no means an authoritative guide to working with AWS; it only provides high-level steps. The best place to learn about how to use AWS is the [Amazon documentation](https://aws.amazon.com/documentation/).
+
+# Hadoop and S3
+
+A majority of Gobblin's code base uses Hadoop's [FileSystem](https://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/fs/FileSystem.html) object to read and write data. The `FileSystem` object is an abstract class, and typical implementations either write to the local file system, or write to HDFS. There has been significant work to create an implementation of the `FileSystem` object that reads and writes to S3. The best guide to read about the different S3 `FileSystem` implementations is [here](https://wiki.apache.org/hadoop/AmazonS3).
+
+There are a few different S3 `FileSystem` implementations, the two of note are the `s3a` and the `s3` file systems. The `s3a` file system is relatively new and is only available in Hadoop 2.6.0 (see the original [JIRA](https://issues.apache.org/jira/browse/HADOOP-10400) for more information). The `s3` filesystem has been around for a while.
+
+## The `s3a` File System
+
+The `s3a` file system uploads files to a specified bucket. The data uploaded to S3 via this file system is interoperable with other S3 tools. However, there are a few caveats when working with this file system:
+
+* Since S3 does not support renaming of files in a bucket, the `S3AFileSystem.rename(Path, Path)` operation will actually copy data from the source `Path` to the destination `Path`, and then delete the source `Path` (see the [source code](http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-aws/2.6.0/org/apache/hadoop/fs/s3a/S3AFileSystem.java) for more information)
+* When creating a file using `S3AFileSystem.create(...)` data will be first written to a staging file on the local file system, and when the file is closed, the staging file will be uploaded to S3 (see the [source code](http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-aws/2.6.0/org/apache/hadoop/fs/s3a/S3AOutputStream.java) for more information)
+
+Thus, when using the `s3a` file system with Gobblin it is recommended that one configures Gobblin to first write its staging data to the local filesystem, and then to publish the data to S3. The reason this is the recommended approach is that each Gobblin `Task` writes data to a staging file, and once the file has been completely written it publishes the file to an output directory (it does this by using a rename function). Finally, the `DataPublisher` moves the files from the staging directory to their final directory (again done using a rename function). This requires two rename operations and would be very inefficient if a `Task` wrote directly to S3.
+
+Furthermore, writing directly to S3 requires creating a staging file on the local file system, and then creating a `PutObjectRequest` to upload the data to S3. This is logically equivalent to just configuring Gobblin to write to a local file and then publishing it to S3.
+
+## The `s3` File System
+
+The `s3` file system stores files as blocks, similar to how HDFS stores blocks. This makes renaming of files more efficient, but data written using this file system is not interoperable with other S3 tools. This limitation may make this file system less desirable, so the majority of this document focuses on the `s3a` file system, although most of the walkthrough should also apply to the `s3` file system.
+
+# Getting Gobblin to Publish to S3
+
+This section will provide a step by step guide to setting up an EC2 instance and an S3 bucket, installing Gobblin on EC2, and configuring Gobblin to publish data to S3.
+
+This guide will use the free-tier provided by AWS to setup EC2 and S3.
+
+## Signing Up For AWS
+
+In order to use EC2 and S3, one first needs to sign up for an AWS account. The easiest way to get started with AWS is to use their [free tier](https://aws.amazon.com/free/).
+
+## Setting Up EC2
+
+### Launching an EC2 Instance
+
+Once you have an AWS account, login to the AWS [console](https://console.aws.amazon.com/console/home). Select the EC2 link, which will bring you to the [EC2 dashboard](https://console.aws.amazon.com/ec2/).
+
+Click on `Launch Instance` to create a new EC2 instance. Before the instance actually starts to run, there are a few more configuration steps necessary:
+
+* Choose an Amazon Machine Image ([AMI](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html))
+    * For this walkthrough we will pick Red Hat Enterprise Linux ([RHEL](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux)) AMI
+* Choose an Instance Type
+    * Since this walkthrough uses the Amazon Free Tier, we will pick the General Purpose `t2.micro` instance
+        * This instance provides us with 1 vCPU and 1 GiB of RAM
+    * For more information on other instance types, check out the AWS [docs](https://aws.amazon.com/ec2/instance-types/)
+* Click Review and Launch
+    * We will use the defaults for all other setting options
+    * When reviewing your instance, you will most likely get a warning saying access to your EC2 instance is open to the world
+    * If you want to fix this you have to edit the [Security Groups](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html); how to do that is out of the scope of this document
+* Set Up SSH Keys
+    * After reviewing your instance, click `Launch`
+    * You should be prompted to setup [SSH](https://en.wikipedia.org/wiki/Secure_Shell) keys
+    * Use an existing key pair if you have one, otherwise create a new one and download it
+* SSH to Launched Instance
+    * SSH using the following command: `ssh -i my-private-key-file.pem ec2-user@instance-name`
+        * The `instance-name` can be taken from the `Public DNS` field from the instance information
+        * SSH may complain that the private key file has insufficient permissions
+            * Execute `chmod 600 my-private-key-file.pem` to fix this
+        * Alternatively, one can modify the `~/.ssh/config` file instead of specifying the `-i` option
+
+After following the above steps, you should be able to freely SSH into the launched EC2 instance, and monitor / control the instance from the [EC2 dashboard](https://console.aws.amazon.com/ec2/).
+
+### EC2 Package Installations
+
+Before setting up Gobblin, you need to install [Java](https://en.wikipedia.org/wiki/Java_(programming_language)). Depending on the AMI you are running, Java may or may not already be installed (you can check whether Java is already installed by executing `java -version`).
+
+#### Installing Java
+
+* Execute `sudo yum install java-1.8.0-openjdk*` to install Open JDK 8
+* Confirm the installation was successful by executing `java -version`
+* Set the `JAVA_HOME` environment variable in the `~/.bashrc` file
+    * The value for `JAVA_HOME` can be found by executing `` readlink `which java` ``
+
+## Setting Up S3
+
+Go to the [S3 dashboard](https://console.aws.amazon.com/s3)
+
+* Click on `Create Bucket`
+    * Enter a name for the bucket (e.g. `gobblin-demo-bucket`)
+    * Enter a [Region](http://docs.aws.amazon.com/general/latest/gr/rande.html) for the bucket (e.g. `US Standard`)
+
+## Setting Up Gobblin on EC2
+
+* Download and Build Gobblin Locally
+    * On your local machine, clone the [Gobblin repository](https://github.com/linkedin/gobblin): `git clone git@github.com:linkedin/gobblin.git` (this assumes you have [Git](https://en.wikipedia.org/wiki/Git_(software)) installed locally)
+    * Build Gobblin using the following commands (it is important to use Hadoop version 2.6.0 as it includes the `s3a` file system implementation):
+```
+cd gobblin
+./gradlew clean build -PuseHadoop2 -PhadoopVersion=2.6.0 -x test
+```
+* Upload the Gobblin Tar to EC2
+    * Execute the command: 
+```
+scp -i my-private-key-file.pem gobblin-dist-[project-version].tar.gz ec2-user@instance-name:
+```
+* Un-tar the Gobblin Distribution
+    * SSH to the EC2 Instance
+    * Un-tar the Gobblin distribution: `tar -xvf gobblin-dist-[project-version].tar.gz`
+* Download AWS Libraries
+    * A few JARs need to be downloaded using some cURL commands:
+```
+curl http://central.maven.org/maven2/com/amazonaws/aws-java-sdk/1.7.4/aws-java-sdk-1.7.4.jar > gobblin-dist/lib/aws-java-sdk-1.7.4.jar
+curl http://central.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.6.0/hadoop-aws-2.6.0.jar > gobblin-dist/lib/hadoop-aws-2.6.0.jar
+```
+
+## Configuring Gobblin on EC2
+
+Assuming we are running Gobblin in [standalone mode](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment#Standalone-Deployment), the following configuration options need to be modified in the file `gobblin-dist/conf/gobblin-standalone.properties`.
+
+* Add the key `data.publisher.fs.uri` and set it to `s3a://gobblin-demo-bucket/`
+    * This configures the job to publish data to the S3 bucket named `gobblin-demo-bucket`
+* Add the AWS Access Key Id and Secret Access Key
+    * Set the keys `fs.s3a.access.key` and `fs.s3a.secret.key` to the appropriate values
+    * These keys correspond to [AWS security credentials](http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)
+    * For information on how to get these credentials, check out the AWS documentation [here](http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)
+    * The AWS documentation recommends using [IAM roles](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html); how to set this up is out of the scope of this document; for this walkthrough we will use root access credentials
+
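+Putting these options together, the additions to `gobblin-dist/conf/gobblin-standalone.properties` might look like the following (the bucket name and credential values are placeholders):
+
+```
+data.publisher.fs.uri=s3a://gobblin-demo-bucket/
+fs.s3a.access.key=YOUR_AWS_ACCESS_KEY_ID
+fs.s3a.secret.key=YOUR_AWS_SECRET_ACCESS_KEY
+```
+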
+## Launching Gobblin on EC2
+
+Assuming we want Gobblin to run in standalone mode, follow the usual steps for [standalone deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment#Standalone-Deployment).
+
+For the sake of this walkthrough, we will launch the Gobblin [wikipedia example](https://github.com/linkedin/gobblin/blob/master/gobblin-example/src/main/resources/wikipedia.pull). Directions on how to run this example can be found [here](https://github.com/linkedin/gobblin/wiki/Getting%20Started). The command to launch Gobblin should look similar to:
+```
+sh bin/gobblin-standalone.sh start --workdir /home/ec2-user/gobblin-dist/work --logdir /home/ec2-user/gobblin-dist/logs --conf /home/ec2-user/gobblin-dist/config
+```
+
+If you are running on the Amazon free tier, you will probably get an error in the `nohup.out` file saying there is insufficient memory for the JVM. To fix this add `--jvmflags "-Xms256m -Xmx512m"` to the `start` command.
+
+Data should be written to S3 during the publishing phase of Gobblin. One can confirm data was successfully written to S3 by looking at the [S3 dashboard](https://console.aws.amazon.com/s3).
+
+# Writing to S3 Outside EC2
+
+It is possible to write to an S3 bucket outside of an EC2 instance. The setup steps are similar to the walkthrough outlined above. For more information on writing to S3 outside of AWS, check out [this article](https://aws.amazon.com/articles/5050).
\ No newline at end of file
diff --git a/README.md b/README.md
deleted file mode 100644
index 5c98125..0000000
--- a/README.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Gobblin [![Build Status](https://secure.travis-ci.org/linkedin/gobblin.png)](https://travis-ci.org/linkedin/gobblin) [![IRC](https://img.shields.io/badge/irc-%23gobblin-blue.svg)](https://webchat.freenode.net/?channels=gobblin)
-
-Gobblin is a universal data ingestion framework for extracting, transforming, and loading large volume of data from a variety of data sources, e.g., databases, rest APIs, FTP/SFTP servers, filers, etc., onto Hadoop. Gobblin handles the common routine tasks required for all data ingestion ETLs, including job/task scheduling, task partitioning, error handling, state management, data quality checking, data publishing, etc. Gobblin ingests data from different data sources in the same execution framework, and manages metadata of different sources all in one place. This, combined with other features such as auto scalability, fault tolerance, data quality assurance, extensibility, and the ability of handling data model evolution, makes Gobblin an easy-to-use, self-serving, and efficient data ingestion framework.
-
-## Documentation
-
-Check out the Gobblin documentation at [https://github.com/linkedin/gobblin/wiki](https://github.com/linkedin/gobblin/wiki).
-
-## Getting Started
-
-### Building Gobblin
-
-Download or clone the Gobblin repository (say, into `/path/to/gobblin`) and run the following command:
-
-	$ cd /path/to/gobblin
-	$ ./gradlew clean build
-
-After Gobblin is successfully built, you will find a tarball named `gobblin-dist.tar.gz` under the project root directory. Copy the tarball out to somewhere and untar it, and you should see a directory named `gobblin-dist`, which initially contains three directories: `bin`, `conf`, and `lib`. Once Gobblin starts running, a new subdirectory `logs` will be created to store logs.
-
-### Building against a Specific Hadoop Version
-
-Gobblin uses the Hadoop core libraries to talk to HDFS as well as to run on Hadoop MapReduce. Because the protocols have changed in different versions of Hadoop, you must build Gobblin against the same version that your cluster runs. By default, Gobblin is built against version 1.2.1 of Hadoop 1, and against version 2.3.0 of Hadoop 2, but you can choose to build Gobblin against a different version of Hadoop.
-
-The build command above will build Gobblin against the default version 1.2.1 of Hadoop 1. To build Gobblin against a different version of Hadoop 1, e.g., 1.2.0, run the following command:
-
-	$ ./gradlew clean build -PhadoopVersion=1.2.0
-
-To build Gobblin against the default version (2.3.0) of Hadoop 2, run the following command:
-
-	$ ./gradlew clean build -PuseHadoop2
-
-To build Gobblin against a different version of Hadoop 2, e.g., 2.2.0, run the following command:
-
-	$ ./gradlew clean build -PuseHadoop2 -PhadoopVersion=2.2.0
-
-For more information on the different build options for Gobblin, check out the [Gobblin Build Options](https://github.com/linkedin/gobblin/wiki/Gobblin-Build-Options) wiki.
-
-### Running Gobblin
-
-Out of the box, Gobblin can run either in standalone mode on a single box or on Hadoop MapReduce. Please refer to the page [Gobblin Deployment](https://github.com/linkedin/gobblin/wiki/Gobblin%20Deployment) in the documentation for an overview of the deployment modes and how to run Gobblin in different modes.
-
-### Running the Examples
-
-Please refer to the page [Getting Started](https://github.com/linkedin/gobblin/wiki/Getting%20Started)
-in the documentation on how to run the examples.
-
-## Configuration
-
-Please refer to the page [Configuration Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary)
-in the documentation for an overview on the configuration properties of Gobblin.
diff --git a/State-Management-and-Watermarks.md b/State-Management-and-Watermarks.md
new file mode 100644
index 0000000..baedcc1
--- /dev/null
+++ b/State-Management-and-Watermarks.md
@@ -0,0 +1,93 @@
+Table of Contents
+--------------------
+* [1 Managing Watermarks in a Job](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#1-managing-watermarks-in-a-job)
+ * [1.1 Basics](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#11-basics)
+ * [1.2 Task Failures](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#12-task-failures)
+ * [1.3 Multi-Dataset Jobs](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#13-multi-dataset-jobs)
+* [2 Gobblin State Deep Dive](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#2-gobblin-state-deep-dive)
+ * [2.1 `State` class hierarchy](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#21-state-class-hierarchy)
+ * [2.2 How States are Used in a Gobblin Job](https://github.com/linkedin/gobblin/wiki/State-Management-and-Watermarks#22-how-states-are-used-in-a-gobblin-job)
+
+This page has two parts. Section 1 explains how to carry over checkpoints between runs of a scheduled batch ingestion job, so that each run can start where the previous run left off. Section 2 is a deep dive into the different types of states in Gobblin and how they are used in a typical job run.
+
+## 1 Managing Watermarks in a Job
+
+When scheduling a Gobblin job to run in batches and pull data incrementally, each run, upon finishing its tasks, should check the state of its work into the state store, so that the next run can continue the work where the previous run left off. This is done through a concept called a watermark.
+
+### 1.1 Basics
+
+**low watermark and expected high watermark**
+
+When the `Source` creates `WorkUnit`s, each `WorkUnit` should generally contain a low watermark and an expected high watermark. They are the start and finish points for the corresponding task, and the task is expected to pull the data from the low watermark to the expected high watermark. 
+
+**actual high watermark**
+
+When a task finishes extracting data, it should write the actual high watermark into its `WorkUnitState`. To do so, the `Extractor` may maintain a `nextWatermark` field, and in `Extractor.close()`, call `this.workUnitState.setActualHighWatermark(this.nextWatermark)`. The actual high watermark is normally the same as the expected high watermark if the task completes successfully, and may be smaller than the expected high watermark if the task failed or timed out. In some cases, the expected high watermark may not be available, so the actual high watermark is the only information available that tells where the previous run left off. 
+
+In the next run, the `Source` will call `SourceState.getPreviousWorkUnitStates()` which should contain the actual high watermarks the last run checked in, to be used as the low watermarks of the new run.
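+
+For illustration, the sketch below shows the bookkeeping an `Extractor` might do around its watermark. The surrounding extractor plumbing is omitted; `setActualHighWatermark` comes from the description above, while the `LongWatermark` class and its package are assumptions.
+
+```java
+import java.io.Closeable;
+
+import gobblin.configuration.WorkUnitState;
+import gobblin.source.extractor.extract.LongWatermark;  // package name is an assumption
+
+// Tracks the last offset actually pulled and checks it in when the extractor closes.
+public class WatermarkBookkeeping implements Closeable {
+
+  private final WorkUnitState workUnitState;
+  private long nextWatermark;
+
+  public WatermarkBookkeeping(WorkUnitState workUnitState, long lowWatermark) {
+    this.workUnitState = workUnitState;
+    this.nextWatermark = lowWatermark;
+  }
+
+  // Call this after every pulled record so nextWatermark is always up to date.
+  public void recordPulled(long offset) {
+    this.nextWatermark = offset;
+  }
+
+  @Override
+  public void close() {
+    // Check in how far the extractor actually got, even if it stopped early.
+    this.workUnitState.setActualHighWatermark(new LongWatermark(this.nextWatermark));
+  }
+}
+```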
+
+**watermark type**
+
+A watermark can be of any custom type by implementing the [`Watermark`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/source/extractor/Watermark.java) interface. For example, for Kafka-HDFS ingestion, if each `WorkUnit` is responsible for pulling a single Kafka topic partition, a watermark is a single `long` value representing a Kafka offset. If each `WorkUnit` is responsible for pulling multiple Kafka topic partitions, a watermark can be a list of `long` values, such as [`MultiLongWatermark`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/extract/kafka/MultiLongWatermark.java).
+
+### 1.2 Task Failures
+
+A task may pull some data and then fail. If a task fails and the job commit policy, specified by the configuration property `job.commit.policy`, is set to `full`, the data it pulled won't be published. In this case, it doesn't matter what value `Extractor.nextWatermark` has: the actual high watermark will be automatically rolled back to the low watermark by Gobblin internally. On the other hand, if the commit policy is set to `partial`, the failed task may get committed and the data may get published. In this case the `Extractor` is responsible for setting the correct actual high watermark in `Extractor.close()`. Therefore, it is recommended that the `Extractor` update `nextWatermark` every time it pulls a record, so that `nextWatermark` is always up to date (unless you are OK with the next run re-doing the work, which may cause some data to be published twice).
+
+### 1.3 Multi-Dataset Jobs
+
+Currently the only state store implementation Gobblin provides is [`FsStateStore`](https://github.com/linkedin/gobblin/blob/master/gobblin-metastore/src/main/java/gobblin/metastore/FsStateStore.java), which uses Hadoop SequenceFiles to store the states. By default, each job run reads the SequenceFile created by the previous run, and generates a new SequenceFile. This creates a pitfall when a job pulls data from multiple datasets: if a dataset is skipped in a job run for whatever reason (e.g., it is blacklisted), its watermark will be unavailable for the next run.
+
+**Example**: suppose we schedule a Gobblin job to pull a Kafka topic from a Kafka broker, which has 10 partitions. In this case each partition is a dataset. In one of the job runs, a partition is skipped due to either being blacklisted or some failure. If no `WorkUnit` is created for this partition, this partition's watermark will not be checked in to the state store, and will not be available for the next run.
+
+There are two solutions to the above problem (three if you count implementing a different state store that behaves differently and doesn't have this problem).
+
+**Solution 1**: make sure to create a `WorkUnit` for every dataset. Even if a dataset should be skipped, an empty `WorkUnit` should still be created for the dataset ('empty' means low watermark = expected high watermark).
+
+**Solution 2**: use Dataset URNs. When a job pulls multiple datasets, the `Source` class may define a URN for each dataset, e.g., we may use `PageViewEvent.5` as the URN of the 5th partition of topic `PageViewEvent`. When the `Source` creates the `WorkUnit` for this partition, it should set the property `dataset.urn` in this `WorkUnit` with value `PageViewEvent.5`. This is the solution Gobblin currently uses to support jobs that pull data from multiple datasets.
+
+If different `WorkUnit`s have different values of `dataset.urn`, the job will create one state store SequenceFile for each `dataset.urn`. In the next run, instead of calling `SourceState.getPreviousWorkUnitStates()`, one should use `SourceState.getPreviousWorkUnitStatesByDatasetUrns()`. In this way, each run will look for the most recent state store SequenceFile for each dataset, and therefore, even if a dataset is not processed by a job run, its watermark won't be lost.
+
+Note that when using Dataset URNs, **each `WorkUnit` can only have one `dataset.urn`**, which means, for example, in the Kafka ingestion case, each `WorkUnit` can only process one partition. This is usually not a big problem except that it may output too many small files (as explained in [Kafka HDFS ingestion](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion), by having a `WorkUnit` pull multiple partitions of the same topic, these partitions can share output files). On the other hand, different `WorkUnit`s may have the same `dataset.urn`.
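+
+A minimal sketch of how a `Source` might tag its `WorkUnit`s follows; the property name and URN format are taken from the example above, and the `WorkUnit` import package is an assumption.
+
+```java
+import gobblin.source.workunit.WorkUnit;  // package name is an assumption
+
+public class DatasetUrnTagging {
+
+  // Tags a WorkUnit with the dataset it processes, e.g. tagWithDatasetUrn(workUnit, "PageViewEvent", 5).
+  public static void tagWithDatasetUrn(WorkUnit workUnit, String topic, int partitionId) {
+    workUnit.setProp("dataset.urn", topic + "." + partitionId);
+  }
+}
+```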
+
+## 2 Gobblin State Deep Dive
+
+Gobblin involves several types of states during a job run, such as `JobState`, `TaskState`, `WorkUnit`, etc. They all extend the [`State`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/configuration/State.java) class, which is a wrapper around [`Properties`](https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html) and provides some useful utility functions. 
+
+### 2.1 `State` class hierarchy
+
+<p align="left">
+  <figure>    
+    <img src=https://github.com/linkedin/gobblin/wiki/images/Gobblin-State-Hierarchy.png alt="Gobblin State Hierarchy" width="400">
+  </figure>
+</p> 
+
+* **`SourceState`, `JobState` and `DatasetState`**: `SourceState` contains properties that define the current job run. It contains properties in the job config file, and the states the previous run persisted in the state store. It is passed to Source to create `WorkUnit`s.
+
+Compared to `SourceState`, a `JobState` also contains properties of a job run such as job ID, starting time, end time, etc., as well as the status of a job run, e.g., `PENDING`, `RUNNING`, `COMMITTED`, `FAILED`, etc.
+
+When the data pulled by a job is separated into different datasets (by using `dataset.urn` explained above), each dataset will have a `DatasetState` object in the JobState, and each dataset will persist its states separately.
+
+* **`WorkUnit` and `MultiWorkUnit`**: A `WorkUnit` defines a unit of work. It may contain properties such as which data set to be pulled, where to start (low watermark), where to finish (expected high watermark), among others. A `MultiWorkUnit` contains one or more `WorkUnit`s. All `WorkUnit`s in a `MultiWorkUnit` will be run by a single Task.
+
+The `MultiWorkUnit` is useful for finer-grained control and load balancing. Without `MultiWorkUnit`s, if the number of `WorkUnit`s exceeds the number of mappers in the MR mode, the job launcher can only balance the number of `WorkUnit`s in the mappers. If different `WorkUnit`s have very different workloads (e.g., some pull from very large partitions and others pull from small partitions), this may lead to mapper skew. With `MultiWorkUnit`, if the `Source` class knows or can estimate the workload of the `WorkUnit`s, it can pack a large number of `WorkUnit`s into a smaller number of `MultiWorkUnit`s using its own logic, achieving better load balancing.
+
+* **`WorkUnitState` and `TaskState`**: A `WorkUnitState` contains the runtime properties of a `WorkUnit`, e.g., actual high watermark, as well as the status of a WorkUnit, e.g., `PENDING`, `RUNNING`, `COMMITTED`, `FAILED`, etc. A `TaskState` additionally contains properties of a Task that runs a `WorkUnit`, e.g., task ID, start time, end time, etc.
+
+* **`Extract`**: `Extract` is mainly used for ingesting from databases. It contains properties such as job type (snapshot-only, append-only, snapshot-append), primary keys, delta fields, etc.
+
+### 2.2 How States are Used in a Gobblin Job
+
+* When a job run starts, the job launcher first creates a `JobState`, which contains (1) all properties specified in the job config file, and (2) the `JobState` / `DatasetState` of the previous run, which contains, among other properties, the actual high watermark the previous run checked in for each of its tasks / datasets.
+
+* The job launcher then passes the `JobState` (as a `SourceState` object) to the `Source`, based on which the `Source` will create a set of `WorkUnit`s. Note that when creating `WorkUnit`s, the `Source` should not add properties in `SourceState` into the `WorkUnit`s, which will be done when each `WorkUnit` is executed in a `Task`. The reason is that since the job launcher runs in a single JVM, creating a large number of `WorkUnit`s, each containing a copy of the `SourceState`, may cause OOM.
+
+* The job launcher prepares to run the `WorkUnit`s.
+ * In standalone mode, the job launcher will add the properties in the `JobState` into each `WorkUnit` (if a property in the `JobState` already exists in the `WorkUnit`, it will NOT be overwritten, i.e., the value in the `WorkUnit` takes precedence). Then for each `WorkUnit` it creates a `Task` to run the `WorkUnit`, and submits all these `Task`s to a [`TaskExecutor`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/TaskExecutor.java) which runs them in a thread pool.
+ * In MR mode, the job launcher will serialize the `JobState` and each `WorkUnit` into a file, which will be picked up by the mappers. It then creates, configures and submits a Hadoop job.
+
+After this step, the job launcher waits until all tasks finish.
+
+* Each `Task` corresponding to a `WorkUnit` contains a `TaskState`. The `TaskState` initially contains all properties in the `JobState` and the corresponding `WorkUnit`; during the `Task` run, more runtime properties can be added to the `TaskState` by the `Extractor`, `Converter` and `Writer`, such as the actual high watermark explained in Section 1.
+
+* After all `Task`s finish, `DatasetState`s will be created from all `TaskState`s based on the `dataset.urn` specified in the `WorkUnit`s. For each dataset whose data is committed, the job launcher will persist its `DatasetState`. If no `dataset.urn` is specified, there will be a single `DatasetState`, which will be persisted if either all `Task`s committed successfully, or some tasks failed but the commit policy is set to `partial`, in which case the watermarks of the failed tasks will be rolled back, as explained in Section 1.
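+
+As referenced above, below is a minimal, hedged sketch of a `Source` creating `WorkUnit`s without copying the `SourceState` properties into them. The class name, the watermark property names, and the extract namespace/table are illustrative only, and the exact `WorkUnit`/`Extract` creation calls may differ between Gobblin versions.
+
+```
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.collect.Lists;
+
+import gobblin.configuration.SourceState;
+import gobblin.configuration.WorkUnitState;
+import gobblin.source.Source;
+import gobblin.source.extractor.Extractor;
+import gobblin.source.workunit.Extract;
+import gobblin.source.workunit.WorkUnit;
+
+public class ExampleSource implements Source<String, String> {
+
+  @Override
+  public List<WorkUnit> getWorkunits(SourceState state) {
+    List<WorkUnit> workUnits = Lists.newArrayList();
+    // One WorkUnit per previously persisted WorkUnitState, resuming from where it left off.
+    for (WorkUnitState previous : state.getPreviousWorkUnitStates()) {
+      Extract extract = state.createExtract(Extract.TableType.SNAPSHOT_ONLY, "example_namespace", "example_table");
+      WorkUnit workUnit = WorkUnit.create(extract);
+      // Only per-WorkUnit properties are set here; the JobState properties are merged in
+      // later, when the WorkUnit is executed in a Task.
+      workUnit.setProp("example.low.watermark", previous.getProp("example.actual.high.watermark"));
+      workUnits.add(workUnit);
+    }
+    return workUnits;
+  }
+
+  @Override
+  public Extractor<String, String> getExtractor(WorkUnitState state) throws IOException {
+    throw new UnsupportedOperationException("Extractor creation omitted in this sketch");
+  }
+
+  @Override
+  public void shutdown(SourceState state) {
+    // Nothing to clean up in this sketch.
+  }
+}
+```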
\ No newline at end of file
diff --git a/gobblin-docs/project/Talks-and-Tech-Blogs.md b/Talks-and-Tech-Blogs.md
similarity index 100%
rename from gobblin-docs/project/Talks-and-Tech-Blogs.md
rename to Talks-and-Tech-Blogs.md
diff --git a/Team.md b/Team.md
new file mode 100644
index 0000000..448f7f5
--- /dev/null
+++ b/Team.md
@@ -0,0 +1,16 @@
+_Current team members:_
+* [Abhishek Tiwari](https://www.linkedin.com/in/abhishektiwari23/en)
+* [Chavdar Botev](https://www.linkedin.com/in/chavdarbotev/en)
+* [Henry Cai](https://www.linkedin.com/pub/henry-haiying-cai/0/246/792/en)
+* [Issac Buenrostro](https://www.linkedin.com/in/ibuenros)
+* [Kapil Surlaker](https://www.linkedin.com/in/kapilsurlaker/en)
+* [Lin Qiao](https://www.linkedin.com/pub/lin-qiao/4/48b/222/en)
+* [Min Tu](https://www.linkedin.com/pub/min-tu/15/643/787/en)
+* [Narasimha Reddy](https://www.linkedin.com/in/narasimhareddyv/en)
+* [Pradhan Cadabam](https://www.linkedin.com/in/pradhancadabam)
+* [Sahil Takiar](https://www.linkedin.com/pub/sahil-takiar/16/862/941/en)
+* [Shirshanka Das](https://www.linkedin.com/in/shirshankadas/en)
+* [Vasanth Rajamani](https://www.linkedin.com/in/vasanth-rajamani-471b84/en)
+* [Yinan Li](https://www.linkedin.com/pub/yinan-li/14/3b2/91a/en)
+* [Ying Dai](https://www.linkedin.com/in/daiying/en)
+* [Ziyang Liu](https://www.linkedin.com/pub/ziyang-liu/37/296/833/en)
\ No newline at end of file
diff --git a/Troubleshooting.md b/Troubleshooting.md
new file mode 100644
index 0000000..4fa6df9
--- /dev/null
+++ b/Troubleshooting.md
@@ -0,0 +1,87 @@
+## Checking Job State
+When there is an issue with a Gobblin job to troubleshoot, it is often helpful to check the state of the job persisted in the state store. Gobblin provides a tool, `gobblin-dist/bin/statestore-checker.sh`, for checking job states. The tool prints job state(s) as an easily readable JSON document. The usage of the tool is as follows:
+
+```
+usage: statestore-checker.sh
+ -a,--all                                  Whether to convert all past job
+                                           states of the given job
+ -i,--id <gobblin job id>                  Gobblin job id
+ -kc,--keepConfig                          Whether to keep all
+                                           configuration properties
+ -n,--name <gobblin job name>              Gobblin job name
+ -u,--storeurl <gobblin state store URL>   Gobblin state store root path
+                                           URL
+``` 
+
+For example, assuming that the state store is located at `file://gobblin/state-store/`, to check the job state of the most recent run of a job named "Foo", run the following command:
+
+```
+statestore-checker.sh -u file://gobblin/state-store/ -n Foo
+``` 
+
+To check the job state of a particular run (say, with job ID job_Foo_123456) of job "Foo", run the following command:
+
+```
+statestore-checker.sh -u file://gobblin/state-store/ -n Foo -i job_Foo_123456
+```
+
+To check the job states of all past runs of job "Foo", run the following command:
+
+```
+statestore-checker.sh -u file://gobblin/state-store/ -n Foo -a
+```
+
+To include the job configuration in the output JSON document, add the option `-kc` or `--keepConfig` to the command.
+
+A sample output JSON document is as follows:
+
+```
+{
+	"job name": "GobblinMRTest",
+	"job id": "job_GobblinMRTest_1425622600239",
+	"job state": "COMMITTED",
+	"start time": 1425622600240,
+	"end time": 1425622601326,
+	"duration": 1086,
+	"tasks": 4,
+	"completed tasks": 4,
+	"task states": [
+		{
+			"task id": "task_GobblinMRTest_1425622600239_3",
+			"task state": "COMMITTED",
+			"start time": 1425622600383,
+			"end time": 1425622600395,
+			"duration": 12,
+			"high watermark": -1,
+			"retry count": 0
+		},
+		{
+			"task id": "task_GobblinMRTest_1425622600239_2",
+			"task state": "COMMITTED",
+			"start time": 1425622600354,
+			"end time": 1425622600374,
+			"duration": 20,
+			"high watermark": -1,
+			"retry count": 0
+		},
+		{
+			"task id": "task_GobblinMRTest_1425622600239_1",
+			"task state": "COMMITTED",
+			"start time": 1425622600325,
+			"end time": 1425622600344,
+			"duration": 19,
+			"high watermark": -1,
+			"retry count": 0
+		},
+		{
+			"task id": "task_GobblinMRTest_1425622600239_0",
+			"task state": "COMMITTED",
+			"start time": 1425622600405,
+			"end time": 1425622600421,
+			"duration": 16,
+			"high watermark": -1,
+			"retry count": 0
+		}
+	]
+}
+```
\ No newline at end of file
diff --git a/Working-with-Job-Configuration-Files.md b/Working-with-Job-Configuration-Files.md
new file mode 100644
index 0000000..6cea83e
--- /dev/null
+++ b/Working-with-Job-Configuration-Files.md
@@ -0,0 +1,100 @@
+Table of Contents
+--------------------
+* [Job Configuration Basics](#job-configuration-basics)
+* [Hierarchical Structure of Job Configuration Files](#hierarchical-structure-of-job-configuration-files)
+* [Password Encryption](#password-encryption)
+* [Adding or Changing Job Configuration Files](#adding-or-changing-job-configuration-files)
+* [Scheduled Jobs](#scheduled-jobs)
+* [One Time Jobs](#one-time-jobs)
+* [Disabled Jobs](#disabled-jobs)
+
+Job Configuration Basics
+--------------------
+A job configuration file is a text file with extension `.pull` or `.job` that defines the properties of a job and can be loaded into a Java [Properties](http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html) object. Gobblin uses [commons-configuration](http://commons.apache.org/proper/commons-configuration/) to allow variable substitutions in job configuration files. You can find some example Gobblin job configuration files [here](https://github.com/linkedin/gobblin/tree/master/gobblin-core/src/main/resources).
+
+A job configuration file typically includes the following properties, in addition to any mandatory configuration properties required by the custom [Gobblin Constructs](https://github.com/linkedin/gobblin/wiki/Gobblin-Architecture#gobblin-constructs) classes; a minimal example follows the list below. For a complete reference of all configuration properties supported by Gobblin, please refer to the [Configuration Properties Glossary](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary).
+
+* `job.name`: job name.
+* `job.group`: the group the job belongs to.
+* `source.class`: the `Source` class the job uses.
+* `converter.classes`: a comma-separated list of `Converter` classes to use in the job. This property is optional.
+* Quality checker related configuration properties: a Gobblin job typically has both row-level and task-level quality checkers specified. Please refer to [Quality Checker Properties](https://github.com/linkedin/gobblin/wiki/Configuration%20Properties%20Glossary#Quality-Checker-Properties) for configuration properties related to quality checkers. 
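+
+For illustration, a minimal `.pull` file might look like the following. The `Source` and `Converter` class names are placeholders borrowed from the Gobblin example/built-in packages and should be replaced with the classes your job actually uses.
+
+```
+job.name=ExamplePull
+job.group=Example
+
+source.class=gobblin.example.wikipedia.WikipediaSource
+converter.classes=gobblin.example.wikipedia.WikipediaConverter
+
+qualitychecker.task.policies=gobblin.policies.count.RowCountPolicy,gobblin.policies.schema.SchemaCompatibilityPolicy
+qualitychecker.task.policy.types=OPTIONAL,OPTIONAL
+
+writer.destination.type=HDFS
+writer.output.format=AVRO
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+```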
+
+Hierarchical Structure of Job Configuration Files
+--------------------
+It is often the case that a Gobblin instance runs many jobs and manages the job configuration files corresponding to those jobs. The jobs may belong to different job groups and pull from different data sources. It is also highly likely that jobs for the same data source share a lot of common properties. So it is very useful to support the following features:
+* Job configuration files can be grouped by the job groups they belong to and put into different subdirectories under the root job configuration file directory.
+* Common job properties shared among multiple jobs can be extracted into a common properties file that will be applied to the job configurations of all these jobs.
+
+Gobblin supports the above features using a hierarchical structure to organize job configuration files under the root job configuration file directory. The basic idea is that there can be arbitrarily deep nesting of subdirectories under the root job configuration file directory. Each directory, regardless of how deep it is, can have a single `.properties` file storing common properties that will be included when loading the job configuration files in the same directory or in any of its subdirectories. Below is an example directory structure.
+
+```
+root_job_config_dir/
+  common.properties
+  foo/
+    foo1.job
+    foo2.job
+    foo.properties
+  bar/
+    bar1.job
+    bar2.job
+    bar.properties
+    baz/
+      baz1.pull
+      baz2.pull
+      baz.properties
+```
+
+In this example, `common.properties` will be included when loading `foo1.job`, `foo2.job`, `bar1.job`, `bar2.job`, `baz1.pull`, and `baz2.pull`. `foo.properties` will be included when loading `foo1.job` and `foo2.job`; properties set there are considered more specific and will overwrite the same properties defined in `common.properties`. Similarly, `bar.properties` will be included when loading `bar1.job` and `bar2.job`, as well as `baz1.pull` and `baz2.pull`. `baz.properties` will be included when loading `baz1.pull` and `baz2.pull` and will overwrite the same properties defined in `bar.properties` and `common.properties`.
+
+Password Encryption
+--------------------
+To avoid storing passwords in configuration files in plain text, Gobblin supports encryption of the password configuration properties. All such properties can be encrypted (and decrypted) using a master password. The master password is stored in a file available at runtime. The file can be on a local file system or HDFS and has restricted access.
+
+The URI of the master password file is controlled by the configuration option `encrypt.key.loc`. By default, Gobblin will use [org.jasypt.util.password.BasicPasswordEncryptor](http://www.jasypt.org/api/jasypt/1.8/org/jasypt/util/password/BasicPasswordEncryptor.html). If you have installed the [JCE Unlimited Strength Policy](http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html), you can set
+`encrypt.use.strong.encryptor=true` which will configure Gobblin to use [org.jasypt.util.password.StrongPasswordEncryptor](http://www.jasypt.org/api/jasypt/1.8/org/jasypt/util/password/StrongPasswordEncryptor.html).
+
+Encrypted passwords can be generated using the `CLIPasswordEncryptor` tool.
+
+    $ gradle :gobblin-utility:assemble
+    $ cd build/gobblin-utility/distributions/
+    $ tar -zxf gobblin-utility.tar.gz
+    $ bin/gobblin_password_encryptor.sh 
+      usage:
+       -f <master password file>   file that contains the master password used
+                                   to encrypt the plain password
+       -h                          print this message
+       -m <master password>        master password used to encrypt the plain
+                                   password
+       -p <plain password>         plain password to be encrypted
+       -s                          use strong encryptor
+    $ bin/gobblin_password_encryptor.sh -m Hello -p Bye
+    ENC(AQWoQ2Ybe8KXDXwPOA1Ziw==)
+
+If you are extending Gobblin and you want some of your configurations (e.g., the ones containing credentials) to support encryption, you can use the `gobblin.password.PasswordManager.getInstance()` methods to get an instance of `PasswordManager`. You can then use `PasswordManager.readPassword(String)`, which will transparently decrypt the value if needed, i.e., if it is in the form `ENC(...)` and a master password is provided.
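+
+A minimal sketch of this pattern is shown below; the class name is hypothetical, and `source.conn.password` is just an example of a property key that may hold an `ENC(...)`-wrapped value.
+
+```
+import gobblin.configuration.State;
+import gobblin.password.PasswordManager;
+
+public class CredentialHelper {
+
+  // Returns the decrypted password if the configured value is ENC(...)-wrapped,
+  // or the configured value as-is otherwise.
+  public static String resolvePassword(State state, String propertyKey) {
+    String configuredValue = state.getProp(propertyKey);
+    return PasswordManager.getInstance(state).readPassword(configuredValue);
+  }
+}
+```
+
+For example, `resolvePassword(workUnitState, "source.conn.password")` would return the plain-text password as long as the master password file is available at runtime.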
+
+Adding or Changing Job Configuration Files
+--------------------
+The Gobblin job scheduler in the standalone deployment monitors any changes to the job configuration file directory and reloads any new or updated job configuration files when detected. This allows adding new job configuration files or making changes to existing ones without bringing down the standalone instance. Currently, the following types of changes are monitored and supported:
+
+* Adding a new job configuration file with a `.job` or `.pull` extension. The new job configuration file is loaded once it is detected. In the example hierarchical structure above, if a new job configuration file `baz3.pull` is added under `bar/baz`, it is loaded with properties included from `common.properties`, `bar.properties`, and `baz.properties` in that order.
+* Changing an existing job configuration file with a `.job` or `.pull` extension. The job configuration file is reloaded once the change is detected. In the example above, if a change is made to `foo2.job`, it is reloaded with properties included from `common.properties` and `foo.properties` in that order.
+* Changing an existing common properties file with a `.properties` extension. All job configuration files that include properties in the common properties file will be reloaded once the change is detected. In the example above, if `bar.properties` is updated, job configuration files `bar1.job`, `bar2.job`, `baz1.pull`, and `baz2.pull` will be reloaded. Properties from `bar.properties` will be included when loading `bar1.job` and `bar2.job`. Properties from `bar.properties` and `baz.properties` will be included when loading `baz1.pull` and `baz2.pull` in that order.
+
+Note that this job configuration file change monitoring mechanism uses the `FileAlterationMonitor` of Apache's [commons-io](http://commons.apache.org/proper/commons-io/) with a custom `FileAlterationListener`. Regardless of how close two adjacent file system checks are, there is still a chance that more than one file is changed between two checks. If more than one file, including at least one common properties file, is changed between two adjacent checks, the reloading of the affected job configuration files may be intermixed and applied in an undesirable order. This is because the order in which the listener is called on the changes is controlled not by Gobblin but by the monitor itself. So the best practice when using this feature is to avoid making multiple changes together within a short period of time.
+
+Scheduled Jobs
+--------------------
+Gobblin ships with a job scheduler backed by a [Quartz](http://quartz-scheduler.org/) scheduler and supports Quartz's [cron triggers](http://quartz-scheduler.org/generated/2.2.1/html/qs-all/#page/Quartz_Scheduler_Documentation_Set%2Fco-trg_crontriggers.html%23). A job that is to be scheduled should have a cron schedule defined using the property `job.schedule`. Here is an example cron schedule that triggers every two minutes:
+
+```
+job.schedule=0 0/2 * * * ?
+```
+
+One Time Jobs
+--------------------
+Some Gobblin jobs may only need to be run once. A job without a cron schedule in the job configuration is considered a run-once job and will not be scheduled but run immediately after being loaded. A job with a cron schedule but also the property `job.runonce=true` specified in the job configuration is also treated as a run-once job and will only be run the first time the cron schedule is triggered.
+
+Disabled Jobs
+--------------------
+A Gobblin job can be disabled by setting the property `job.disabled` to `true`. A disabled job will not be loaded or scheduled to run.
\ No newline at end of file
diff --git a/Working-with-the-ForkOperator.md b/Working-with-the-ForkOperator.md
new file mode 100644
index 0000000..69b429a
--- /dev/null
+++ b/Working-with-the-ForkOperator.md
@@ -0,0 +1,183 @@
+Table of Contents
+--------------------
+
+* [Overview of the ForkOperator](#overview-of-the-forkoperator)
+* [Using the ForkOperator](#using-the-forkoperator)
+ * [Basics of Usage](#basics-of-usage)
+ * [Per-Fork Configuration](#per-fork-configuration)
+ * [Failure Semantics](#failure-semantics)
+ * [Performance Tuning](#performance-tuning)
+ * [Comparison with PartitionedDataWriter](#comparison-with-partitioneddatawriter)
+* [Writing your Own ForkOperator](#writing-your-own-forkoperator)
+* [Best Practices](#best-practices)
+* [Example](#example)
+
+Overview of the ForkOperator
+--------------------
+
+The [`ForkOperator`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/fork/ForkOperator.java) is a type of control operator that allows a task flow to branch into multiple streams (or forked branches), each represented by a [`Fork`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/Fork.java) and each going to a separately configured sink with its own data writer. The `ForkOperator` gives users more flexibility in controlling where and how ingested data should be output. This is useful, for example, when data records need to be written into multiple different storage systems, or when they need to be written to the same storage (say, HDFS) but in different forms for different downstream consumers. The recommended best practices for using the `ForkOperator` are discussed below. The diagram below illustrates how the `ForkOperator` in a Gobblin task flow allows an input stream to be forked into multiple output streams, each of which can have its own converters, quality checkers, and writers.
+
+<p align="center">
+  <figure>
+    <img src="https://github.com/linkedin/gobblin/wiki/images/Gobblin-Task-Flow.png" alt="Gobblin Image" width="500">
+    <figcaption><br>Gobblin task flow.<br></figcaption>
+  </figure>
+</p>
+
+Using the ForkOperator
+--------------------
+
+### Basics of Usage
+
+The [`ForkOperator`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/fork/ForkOperator.java), like most other operators in a Gobblin task flow, is pluggable through the configuration, or more specifically, through the configuration property `fork.operator.class`, which points to a class that implements the `ForkOperator` interface. For instance:
+
+```
+fork.operator.class=gobblin.fork.IdentityForkOperator
+```
+
+By default, if no `ForkOperator` class is specified, Gobblin internally uses the default implementation [`IdentityForkOperator`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/fork/IdentityForkOperator.java) with a single forked branch (although it does support multiple forked branches). The `IdentityForkOperator` simply and unconditionally forwards the schema and ingested data records to all the forked branches, the number of which is specified through the configuration property `fork.branches`, with a default value of 1. When an `IdentityForkOperator` instance is initialized, it will read the value of `fork.branches` and use that as the return value of `getBranches`.
+
+The _expected_ number of forked branches is given by the method `getBranches` of the `ForkOperator`. This number must match the size of the list of `Boolean`s returned by `forkSchema` as well as the size of the list of `Boolean`s returned by `forkDataRecord`. Otherwise, a `ForkBranchMismatchException` will be thrown. Note that the `ForkOperator` itself _does not make and return a copy_ of the input schema and data records; rather, it just provides a `Boolean` for each forked branch telling whether the schema or data record should go into that particular branch. Each forked branch has a branch index starting at 0. So if there are three forked branches, the branches will have indices 0, 1, and 2, respectively. Branch indices are useful to tell which branch the Gobblin task flow is in. Each branch also has a name associated with it that can be specified using the configuration property `fork.branch.name.<branch index>`. Note that the branch index is added as a suffix to the property name in this case. More on this later. If the user does not specify names for the branches, default names of the form `fork_<branch index>` will be used.
+
+The use of the `ForkOperator`, with _the possibility that the schema and/or data records may be forwarded to more than one forked branch_, imposes a special requirement on the input schema and data records to the `ForkOperator`. Specifically, because the same schema or data records may be forwarded to more than one branch, and those branches may alter the schema or data records in place, it is necessary for the Gobblin task flow to make a copy of the input schema or data records for each forked branch so that any modification within one branch won't affect any other branch.
+
+To guarantee that it is always able to make a copy in such a case, Gobblin requires the input schema and data records to be of type `Copyable` when there is more than one forked branch. [`Copyable`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/fork/Copyable.java) is an interface that defines a method `copy` for making a copy of an instance of a given type. The Gobblin task flow will check whether the input schema and data records are instances of `Copyable` and throw a `CopyNotSupportedException` if not. This check is performed independently, first on the schema and then on the data records. Note that this requirement is enforced _if and only if the schema or data records are to be forwarded to more than one branch_, which is the case if `forkSchema` or `forkDataRecord` returns a list containing more than one `TRUE`. Having more than one branch does not necessarily mean the schema and/or data records need to be `Copyable`.
+
+Gobblin ships with some built-in `Copyable` implementations, e.g., [`CopyableSchema`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/fork/CopyableSchema.java) and [`CopyableGenericRecord`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/fork/CopyableGenericRecord.java) for Avro's `Schema` and `GenericRecord`.   
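+
+For a custom record type, a `Copyable` wrapper can be written along the following lines. This is only a hedged sketch: `MyRecord` and its copy constructor are hypothetical and stand in for whatever record type your job uses.
+
+```
+import gobblin.fork.Copyable;
+
+// Wraps a hypothetical MyRecord so it can be safely forwarded to more than one forked branch.
+public class CopyableMyRecord implements Copyable<MyRecord> {
+
+  private final MyRecord record;
+
+  public CopyableMyRecord(MyRecord record) {
+    this.record = record;
+  }
+
+  @Override
+  public MyRecord copy() {
+    // Deep-copy so that in-place modifications in one branch cannot leak into another
+    // (assumes MyRecord provides a copy constructor).
+    return new MyRecord(this.record);
+  }
+}
+```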
+
+### Per-Fork Configuration
+
+Since each forked branch may have its own converters, quality checkers, and writers, in addition to the ones in the pre-fork stream (which, of course, does not have a writer), there must be a way to tell the converter, quality checker, and writer classes of one branch from another and from the pre-fork stream. Gobblin uses a straightforward approach: if a configuration property is used to specify something for a branch in a multi-branch use case, _the branch index should be appended as a suffix_ to the property name. The original property name without the suffix is _generally reserved for the pre-fork stream_. For example, `converter.classes.0` and `converter.classes.1` are used to specify the lists of converter classes for branches 0 and 1, respectively, whereas `converter.classes` is reserved for the pre-fork stream. If there is only a single branch (the default case), the index suffix is not applicable. Without being a comprehensive list, the following groups of built-in configuration properties may be used with branch indices as suffixes to specify things for forked branches:
+
+* Converter configuration properties: configuration properties whose names start with `converter`.
+* Quality checker configuration properties: configuration properties whose names start with `qualitychecker`.
+* Writer configuration properties: configuration properties whose names start with `writer`.
+
+### Failure Semantics
+
+In a normal task flow where the default `IdentityForkOperator` with a single branch is used, the failure of the single branch also means the failure of the task flow. When there are more than one forked branch, however, the failure semantics are more involved. Gobblin uses the following failure semantics in this case: 
+
+* The failure of any forked branch means the failure of the whole task flow, i.e., the task succeeds if and only if all the forked branches succeed.
+* A forked branch stops processing any outstanding incoming data records in the queue if it fails in the middle of processing the data.   
+* The failure and subsequent stop/completion of any forked branch does not prevent other branches from processing their copies of the ingested data records. The task will wait for all the branches to finish, regardless of whether they succeed or fail.
+* The commit of the output data of forks is determined by the job commit policy (see [`JobCommitPolicy`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/source/extractor/JobCommitPolicy.java)) specified. If `JobCommitPolicy.COMMIT_ON_FULL_SUCCESS` (or `full` in short) is used, the output data of the entire job will be discarded if any forked branch fails, which will fail the task and consequently the job. If instead `JobCommitPolicy.COMMIT_SUCCESSFUL_TASKS` (or `successful` in short) is used, the output data of tasks whose forked branches all succeed will be committed. Output data of any task that has _at least one failed forked branch_ will not be committed, since the task is considered failed in this case. This also means the output data of the successful forked branches of such a task won't be committed either.
+  
+### Performance Tuning
+
+Internally, each forked branch, as represented by a [`Fork`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/Fork.java), maintains a bounded record queue (implemented by [`BoundedBlockingRecordQueue`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/BoundedBlockingRecordQueue.java)), which serves as a buffer between the pre-fork stream and the forked stream of that particular branch. The size of this bounded record queue can be configured through the property `fork.record.queue.capacity`. A larger queue allows more data records to be buffered, therefore giving the producer (the pre-fork stream) more headroom to move forward. On the other hand, a larger queue requires more memory. The bounded record queue imposes a timeout on all blocking operations, such as putting a new record at the tail and polling a record off the head of the queue. Tuning the queue size and the timeout together offers a lot of flexibility in trading off queuing performance against memory consumption.
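+
+For example, a job with several forked branches might be tuned along the following lines; the values are purely illustrative, and `fork.record.queue.capacity` is the only queue-related property named above.
+
+```
+fork.branches=4
+# Buffer more records per branch, at the cost of additional memory.
+fork.record.queue.capacity=2048
+```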
+
+In terms of the number of forked branches, we have seen use cases with half a dozen forked branches, and we anticipate use cases with much larger numbers. Again, when using a large number of forked branches, the size of the record queues and the timeout need to be carefully tuned.
+
+The [`BoundedBlockingRecordQueue`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/BoundedBlockingRecordQueue.java) in each [`Fork`](https://github.com/linkedin/gobblin/blob/master/gobblin-runtime/src/main/java/gobblin/runtime/Fork.java) keeps track of the following queue statistics, which can be output to the logs if the `DEBUG` logging level is turned on. These statistics provide good indications of the performance of the forks.
+
+* Queue size, i.e., the number of records in queue.
+* Queue fill ratio, i.e., a ratio of the number of records in queue over the queue capacity.
+* Put attempt rate (per second).
+* Total put attempt count.
+* Get attempt rate (per second).
+* Total get attempt count. 
+
+### Comparison with PartitionedDataWriter
+
+Gobblin ships with a special type of `DataWriter` called the [`PartitionedDataWriter`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/writer/PartitionedDataWriter.java) that allows ingested records to be written in a partitioned fashion, using a `WriterPartitioner`, into different locations in the same sink. The `WriterPartitioner` determines the specific partition for each data record. So there is some overlap in functionality between the `ForkOperator` and the `PartitionedDataWriter`. The question is: which one should be used under which circumstances? Below is a summary of the major differences between the two operators.
+
+* The `ForkOperator` requires the number of forked branches to be known and returned through `getBranches` before the task starts, whereas the `PartitionedDataWriter` does not have this requirement.
+* The `PartitionedDataWriter` writes each data record to a single partition, whereas the `ForkOperator` allows data records to be forwarded to any number of forked branches.
+* The `ForkOperator` allows the use of additional converters and quality checkers in any forked branches before data gets written out. The `PartitionedDataWriter` is the last operator in a task flow.
+* Use of the `ForkOperator` allows data records to be written to different sinks, whereas the `PartitionedDataWriter` is not capable of doing this.
+* The `PartitionedDataWriter` writes data records sequentially in a single thread, whereas use of the `ForkOperator` allows forked branches to write independently in parallel since `Fork`s are executed in a thread pool.  
+
+Writing your Own ForkOperator
+--------------------
+
+Since the built-in default implementation [`IdentityForkOperator`](https://github.com/linkedin/gobblin/blob/master/gobblin-core/src/main/java/gobblin/fork/IdentityForkOperator.java) simply forwards the input schema and data records blindly to every branch, it is often necessary to have a custom implementation of the `ForkOperator` interface for more fine-grained control over the actual branching. Check out the interface [`ForkOperator`](https://github.com/linkedin/gobblin/blob/master/gobblin-api/src/main/java/gobblin/fork/ForkOperator.java) for the methods that need to be implemented. You will also find [`ForkOperatorUtils`](https://github.com/linkedin/gobblin/blob/master/gobblin-utility/src/main/java/gobblin/util/ForkOperatorUtils.java) handy when writing your own `ForkOperator` implementations.
+
+Best Practices
+--------------------
+
+The `ForkOperator` can have many potential use cases and we have seen the following common ones:
+
+* Using a `ForkOperator` to write the same ingested data to multiple sinks, e.g., HDFS and S3, possibly in different formats. This kind of use case is often referred to as "dual writes", which is _generally NOT recommended_ since "dual writes" may lead to data inconsistency between the sinks in case of write failures. However, with the failure semantics discussed above, data inconsistency generally should not happen with the job commit policy `JobCommitPolicy.COMMIT_ON_FULL_SUCCESS` or `JobCommitPolicy.COMMIT_SUCCESSFUL_TASKS`. This is because a failure of any forked branch means the failure of the task, and none of the forked branches of the task will have its output data committed, making inconsistent output data between different sinks impossible.
+* Using a `ForkOperator` to process ingested data records in different ways conditionally. For example, a `ForkOperator` may be used to classify and write ingested data records to different places on HDFS depending on some field in the data that serves as a classifier.
+* Using a `ForkOperator` to group ingested data records of a certain schema type in case the incoming stream mixes data records of different schema types. For example, we have seen a use case in which a single Kafka topic is used for records of various schema types and when data gets ingested to HDFS, the records need to be written to different paths according to their schema types.
+
+Generally, a common use case of the `ForkOperator` is to route ingested data records so they get written to different output locations _conditionally_. The `ForkOperator` also finds common usage for "dual writes" to different sinks potentially in different formats if the job commit policy `JobCommitPolicy.COMMIT_ON_FULL_SUCCESS` (or `full` in short) or `JobCommitPolicy.COMMIT_SUCCESSFUL_TASKS` (or `successful` in short) is used, as explained above. 
+
+Example
+--------------------
+
+Let's take a look at an example that shows how to work with the `ForkOperator` for a real use case. Say you have a Gobblin job that ingests Avro data from a data source, and some fields may contain sensitive data that needs to be purged. Depending on whether data records have sensitive data, they need to be written to different locations on the same sink, which we assume is HDFS. So essentially the tasks of the job need a mechanism to conditionally write ingested data records to different locations depending on whether they have sensitive data. The `ForkOperator` offers a way of implementing this mechanism.
+
+In this particular use case, we need a `ForkOperator` implementation with two branches that forwards the schema to both branches but each data record to only one of the two branches. The default `IdentityForkOperator` cannot be used since it simply forwards every data record to every branch. So we need a custom implementation of the `ForkOperator`; let's call it `SensitiveDataAwareForkOperator`, under the package `gobblin.example.fork`. Let's also assume that branch 0 is for data records with sensitive data, whereas branch 1 is for data records without. Below is a brief sketch of what the implementation looks like:
+
+```
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericRecord;
+
+import com.google.common.collect.ImmutableList;
+
+import gobblin.configuration.WorkUnitState;
+import gobblin.fork.ForkOperator;
+
+public class SensitiveDataAwareForkOperator implements ForkOperator<Schema, GenericRecord> {
+  
+  private static final int NUM_BRANCHES = 2;
+
+  @Override
+  public void init(WorkUnitState workUnitState) {
+  }
+
+  @Override
+  public int getBranches(WorkUnitState workUnitState) {
+    return NUM_BRANCHES;
+  }
+
+  @Override
+  public List<Boolean> forkSchema(WorkUnitState workUnitState, Schema schema) {
+    // The schema goes to both branches.
+    return ImmutableList.of(Boolean.TRUE, Boolean.TRUE);
+  }
+
+  @Override
+  public List<Boolean> forkDataRecord(WorkUnitState workUnitState, GenericRecord record) {
+    // Data records only go to one of the two branches depending on if they have sensitive data.
+    // Branch 0 is for data records with sensitive data and branch 1 is for data records without.
+    // hasSensitiveData checks the record and returns true if the record has sensitive data and false otherwise.
+    if (hasSensitiveData(record)) {
+      return ImmutableList.of(Boolean.TRUE, Boolean.FALSE);
+    }
+
+    return ImmutableList.of(Boolean.FALSE, Boolean.TRUE);
+  }
+
+  @Override
+  public void close() throws IOException {
+  }
+}
+```
+
+To make the example more concrete, let's assume that the job uses some converters and quality checkers before the schema and data records reach the `SensitiveDataAwareForkOperator`, and it also uses a converter to purge the sensitive fields and a quality checker that makes sure some mandatory fields exist for purged data records in branch 0. Both branches will be written to the same HDFS but into different locations.
+
+```
+fork.operator.class=gobblin.example.fork.SensitiveDataAwareForkOperator
+
+# Pre-fork or non-fork-specific configuration properties
+converter.classes=<Converter classes used in the task flow prior to SensitiveDataAwareForkOperator>
+qualitychecker.task.policies=gobblin.policies.count.RowCountPolicy,gobblin.policies.schema.SchemaCompatibilityPolicy
+qualitychecker.task.policy.types=OPTIONAL,OPTIONAL
+data.publisher.type=gobblin.publisher.BaseDataPublisher
+
+# Configuration properties for branch 0
+converter.classes.0=gobblin.example.converter.PurgingConverter
+qualitychecker.task.policies.0=gobblin.example.policies.MandatoryFieldExistencePolicy
+qualitychecker.task.policy.types.0=FAILED
+writer.fs.uri.0=hdfs://<namenode host>:<namenode port>/
+writer.destination.type.0=HDFS
+writer.output.format.0=AVRO
+writer.staging.dir.0=/gobblin/example/task-staging/purged
+writer.output.dir.0=/gobblin/example/task-output/purged
+data.publisher.final.dir.0=/gobblin/example/job-output/purged
+
+# Configuration properties for branch 1
+writer.fs.uri.1=hdfs://<namenode host>:<namenode port>/
+writer.destination.type.1=HDFS
+writer.output.format.1=AVRO
+writer.staging.dir.1=/gobblin/example/task-staging/normal
+writer.output.dir.1=/gobblin/example/task-output/normal
+data.publisher.final.dir.1=/gobblin/example/job-output/normal
+``` 
+
+   
+
diff --git a/Writing-Parquet-Data.md b/Writing-Parquet-Data.md
new file mode 100644
index 0000000..30404ce
--- /dev/null
+++ b/Writing-Parquet-Data.md
@@ -0,0 +1 @@
+TODO
\ No newline at end of file
diff --git a/_Footer.md b/_Footer.md
new file mode 100644
index 0000000..4e85d50
--- /dev/null
+++ b/_Footer.md
@@ -0,0 +1 @@
+<p align="center"><a href="https://github.com/linkedin/gobblin">Source</a> | <a href="https://github.com/linkedin/gobblin/wiki">Documentation</a> | <a href="https://groups.google.com/forum/#!forum/gobblin-users">Discussion Group</a></p>
\ No newline at end of file
diff --git a/_Sidebar.md b/_Sidebar.md
new file mode 100644
index 0000000..364ba99
--- /dev/null
+++ b/_Sidebar.md
@@ -0,0 +1,43 @@
+* [Home](Home)
+* [Getting Started](Getting Started) 
+* [Architecture](Gobblin-Architecture)
+* User Guide
+  * [Working with Job Configuration Files](Working-with-Job-Configuration-Files)
+  * [Deployment](Gobblin Deployment)
+  * [Gobblin on Yarn](Gobblin-on-Yarn)
+  * [Compaction](Compaction)
+  * [State Management and Watermarks](State-Management-and-Watermarks)
+  * [Working with the ForkOperator](Working-with-the-ForkOperator)
+  * [Configuration Glossary](Configuration Properties Glossary)
+  * [Partitioned Writers](Partitioned Writers)
+  * [Monitoring](Monitoring)
+  * [Schedulers](https://github.com/linkedin/gobblin/wiki/Gobblin-Schedulers)
+  * [Job Execution History Store](Job Execution History Store)
+  * [Gobblin Build Options](https://github.com/linkedin/gobblin/wiki/Gobblin-Build-Options)
+  * [Troubleshooting](Troubleshooting)
+  * [FAQs](FAQs)
+* Case Studies
+  * [Kafka-HDFS Ingestion](https://github.com/linkedin/gobblin/wiki/Kafka-HDFS-Ingestion)
+  * [Publishing Data to S3](https://github.com/linkedin/gobblin/wiki/Publishing-Data-to-S3)
+* Gobblin Metrics
+  * [Quick Start](Gobblin Metrics)
+  * [Existing Reporters](Existing Reporters)
+  * [Metrics for Gobblin ETL](Metrics for Gobblin ETL)
+  * [Gobblin Metrics Architecture](Gobblin Metrics Architecture)
+  * [Implementing New Reporters](Implementing New Reporters)
+  * [Gobblin Metrics Performance](Gobblin Metrics Performance)
+* Developer Guide
+  * [Customization: New Source](Customization for New Source)
+  * [Customization: Converter/Operator](Customization for Converter and Operator)
+  * [Code Style Guide](CodingStyle)
+  * [IDE setup](IDE-setup)
+  * [Monitoring Design](Monitoring-Design)
+* Project
+  * [Feature List](Feature List)
+  * [Contributors/Team](Team)
+  * [Talks/Tech Blogs](Talks and Tech Blogs)
+  * [News/Roadmap](News)
+  * [Posts](Posts)
+* Miscellaneous
+  * [Camus → Gobblin Migration](https://github.com/linkedin/gobblin/wiki/Camus-%E2%86%92-Gobblin-Migration)
+  * [Exactly Once Support](https://github.com/linkedin/gobblin/wiki/Exactly-Once-Support)
\ No newline at end of file
diff --git a/bin/gobblin-admin.sh b/bin/gobblin-admin.sh
deleted file mode 100755
index 7902742..0000000
--- a/bin/gobblin-admin.sh
+++ /dev/null
@@ -1,125 +0,0 @@
-#!/bin/bash
-
-function print_usage() {
-  echo "gobblin-admin.sh [JAVA_OPTION] COMMAND [OPTION]"
-  echo "Where JAVA_OPTION can be:"
-  echo "  --fwdir <fwd dir>                              Gobblin's dist directory: if not set, taken from \${GOBBLIN_FWDIR}"
-  echo "  --logdir <log dir>                             Gobblin's log directory: if not set, taken from \${GOBBLIN_LOG_DIR}"
-  echo "  --jars <comma-separated list of job jars>      Job jar(s): if not set, \${GOBBLIN_FWDIR/lib} is examined"
-  echo "  --help                                         Display this help and exit"
-  echo "COMMAND is one of the following:"
-  echo "  jobs|tasks"
-  echo "And OPTION are any options associated with the command, as specified by the CLI."
-}
-
-# Print an error message and exit
-function die() {
-  echo -e "\nError: $@\n" 1>&2
-  print_usage
-  exit 1
-}
-
-for i in "$@"
-do
-  case "$1" in
-    jobs|tasks)
-      ACTION="$1"
-      break
-      ;;
-    --fwdir)
-      FWDIR="$2"
-      shift 2
-      ;;
-    --logdir)
-      LOG_DIR="$2"
-      shift 2
-      ;;
-    --jars)
-      JARS="$2"
-      shift 2
-      ;;
-    --help)
-      print_usage
-      exit 0
-      ;;
-    *)
-      ;;
-  esac
-done
-
-if [ -z "$ACTION" ]; then
-  print_usage
-  exit 0
-fi
-
-# Source gobblin default vars
-[ -f /etc/default/gobblin ] && . /etc/default/gobblin
-
-if [ -z "$JAVA_HOME" ]; then
-  die "Environment variable JAVA_HOME not set!"
-fi
-
-if [ -n "$FWDIR" ]; then
-  export GOBBLIN_FWDIR="$FWDIR"
-fi
-
-if [ -z "$GOBBLIN_FWDIR" ]; then
-  die "Environment variable FWDIR not set!"
-fi
-
-FWDIR_LIB=$GOBBLIN_FWDIR/lib
-FWDIR_CONF=$GOBBLIN_FWDIR/conf
-
-# User defined log directory overrides $GOBBLIN_LOG_DIR
-if [ -n "$LOG_DIR" ]; then
-  export GOBBLIN_LOG_DIR="$LOG_DIR"
-fi
-
-if [ -z "$GOBBLIN_LOG_DIR" ]; then
-  die "GOBBLIN_LOG_DIR is not set!"
-fi
-
-CONFIG_FILE=$FWDIR_CONF/gobblin-standalone.properties
-
-set_user_jars(){
-  local separator=''
-  if [ -n "$1" ]; then
-    IFS=','
-    read -ra userjars <<< "$1"
-    for userjar in ${userjars[@]}; do
-      add_user_jar "$userjar"
-     done
-    unset IFS
-  fi
-}
-
-add_user_jar(){
-  local dirname=`dirname "$1"`
-  local jarname=`basename "$1"`
-  dirname=`cd "$dirname">/dev/null; pwd`
-  GOBBLIN_JARS+="$separator$dirname/$jarname"
-  separator=':'
-}
-
-# Add the absoulte path of the user defined job jars to the GOBBLIN_JARS
-set_user_jars "$JARS"
-
-for jar in $(ls -d $FWDIR_LIB/*); do
-  if [ "$GOBBLIN_JARS" != "" ]; then
-    GOBBLIN_JARS+=":$jar"
-  else
-    GOBBLIN_JARS=$jar
-  fi
-done
-
-CLASSPATH=":$GOBBLIN_JARS:$FWDIR_CONF"
-
-COMMAND="$JAVA_HOME/bin/java -Xmx1024m -Xms256m "
-COMMAND+="-XX:+UseCompressedOops "
-COMMAND+="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$GOBBLIN_LOG_DIR/ "
-COMMAND+="-Xloggc:$GOBBLIN_LOG_DIR/gobblin-gc.log "
-COMMAND+="-Dgobblin.logs.dir=$GOBBLIN_LOG_DIR "
-COMMAND+="-Dlog4j.configuration=file://$FWDIR_CONF/log4j-standalone.xml "
-COMMAND+="-cp $CLASSPATH "
-COMMAND+="gobblin.cli.Cli $@"
-$COMMAND
\ No newline at end of file
diff --git a/bin/gobblin-env.sh b/bin/gobblin-env.sh
deleted file mode 100755
index 384eafb..0000000
--- a/bin/gobblin-env.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/bash
-
-# Set Gobblin specific environment variables here.
diff --git a/bin/gobblin-mapreduce.sh b/bin/gobblin-mapreduce.sh
deleted file mode 100755
index 8777f89..0000000
--- a/bin/gobblin-mapreduce.sh
+++ /dev/null
@@ -1,182 +0,0 @@
-#!/bin/bash
-
-##############################################################
-############### Run Gobblin Jobs on Hadoop MR ################
-##############################################################
-
-# Set during the distribution build
-GOBBLIN_VERSION=@project.version@
-
-FWDIR="$(cd `dirname $0`/..; pwd)"
-FWDIR_LIB=$FWDIR/lib
-FWDIR_CONF=$FWDIR/conf
-FWDIR_BIN=$FWDIR/bin
-
-function print_usage(){
-  echo "Usage: gobblin-mapreduce.sh [OPTION] --conf <job configuration file>"
-  echo "Where OPTION can be:"
-  echo "  --jt <job tracker / resource manager URL>      Job submission URL: if not set, taken from \${HADOOP_HOME}/conf"
-  echo "  --fs <file system URL>                         Target file system: if not set, taken from \${HADOOP_HOME}/conf"
-  echo "  --jars <comma-separated list of job jars>      Job jar(s): if not set, \"$FWDIR_LIB\" is examined"
-  echo "  --workdir <job work dir>                       Gobblin's base work directory: if not set, taken from \${GOBBLIN_WORK_DIR}"
-  echo "  --projectversion <version>                     Gobblin version to be used. If set, overrides the distribution build version"
-  echo "  --logdir <log dir>                             Gobblin's log directory: if not set, taken from \${GOBBLIN_LOG_DIR} if present. Otherwise \"$FWDIR/logs\" is used"
-  echo "  --help                                         Display this help and exit"
-}
-
-# Print an error message and exit
-function die() {
-  echo -e "\nError: $@\n" 1>&2
-  print_usage
-  exit 1
-}
-
-for i in "$@"
-do
-  case "$1" in
-    --jt)
-      JOB_TRACKER_URL="$2"
-      shift
-      ;;
-    --fs)
-      FS_URL="$2"
-      shift
-      ;;
-    --jars)
-      JARS="$2"
-      shift
-      ;;
-    --workdir)
-      WORK_DIR="$2"
-      shift
-      ;;
-    --logdir)
-      LOG_DIR="$2"
-      shift
-      ;;
-    --conf)
-      JOB_CONFIG_FILE="$2"
-      shift
-      ;;
-    --projectversion)
-      GOBBLIN_VERSION="$2"
-      shift
-      ;;
-    --help)
-      print_usage
-      exit 0
-      ;;
-    *)
-      ;;
-  esac
-  shift
-done
-
-if ( [ -z "$GOBBLIN_VERSION" ] || [ "$GOBBLIN_VERSION" == "@project.version@" ] ); then
-  die "Gobblin project version is not set!"
-fi
-
-if [ -z "$JOB_CONFIG_FILE" ]; then
-  die "No job configuration file set!"
-fi
-
-# User defined work directory overrides $GOBBLIN_WORK_DIR
-if [ -n "$WORK_DIR" ]; then
-  export GOBBLIN_WORK_DIR="$WORK_DIR"
-fi
-
-if [ -z "$GOBBLIN_WORK_DIR" ]; then
-  die "GOBBLIN_WORK_DIR is not set!"
-fi
-
-# User defined log directory overrides $GOBBLIN_LOG_DIR
-if [ -n "$LOG_DIR" ]; then
-  export GOBBLIN_LOG_DIR="$LOG_DIR"
-fi
-
-if [ -z "$GOBBLIN_LOG_DIR" ]; then
-  GOBBLIN_LOG_DIR="$FWDIR/logs"
-fi
-
-. $FWDIR_BIN/gobblin-env.sh
-
-USER_JARS=""
-separator=''
-set_user_jars(){
-  if [ -n "$1" ]; then
-    IFS=','
-    read -ra userjars <<< "$1"
-    for userjar in ${userjars[@]}; do
-      add_user_jar "$userjar"
-     done
-    unset IFS
-  fi
-}
-
-add_user_jar(){
-  local dirname=`dirname "$1"`
-  local jarname=`basename "$1"`
-  dirname=`cd "$dirname">/dev/null; pwd`
-  USER_JARS+="$separator$dirname/$jarname"
-  separator=','
-}
-
-# Add the absolute path of the user defined job jars to the LIBJARS first
-set_user_jars "$JARS"
-
-# Jars Gobblin runtime depends on
-# Please note that both versions of the metrics jar are required.
-function join { local IFS="$1"; shift; echo "$*"; }
-LIBJARS=(
-  $USER_JARS
-  $FWDIR_LIB/gobblin-metastore-$GOBBLIN_VERSION.jar
-  $FWDIR_LIB/gobblin-metrics-$GOBBLIN_VERSION.jar
-  $FWDIR_LIB/gobblin-core-$GOBBLIN_VERSION.jar
-  $FWDIR_LIB/gobblin-api-$GOBBLIN_VERSION.jar
-  $FWDIR_LIB/gobblin-utility-$GOBBLIN_VERSION.jar
-  $FWDIR_LIB/guava-15.0.jar
-  $FWDIR_LIB/avro-1.7.7.jar
-  $FWDIR_LIB/avro-mapred-1.7.7-hadoop2.jar
-  $FWDIR_LIB/commons-lang3-3.4.jar
-  $FWDIR_LIB/config-1.2.1.jar
-  $FWDIR_LIB/data-1.15.9.jar
-  $FWDIR_LIB/gson-2.3.1.jar
-  $FWDIR_LIB/joda-time-2.9.jar
-  $FWDIR_LIB/kafka_2.11-0.8.2.1.jar
-  $FWDIR_LIB/kafka-clients-0.8.2.1.jar
-  $FWDIR_LIB/metrics-core-2.2.0.jar
-  $FWDIR_LIB/metrics-core-3.1.0.jar
-  $FWDIR_LIB/metrics-graphite-3.1.0.jar
-  $FWDIR_LIB/scala-library-2.11.6.jar
-)
-LIBJARS=$(join , "${LIBJARS[@]}")
-
-# Add libraries to the Hadoop classpath
-GOBBLIN_DEP_JARS=`echo "$USER_JARS" | tr ',' ':' `
-for jarFile in `ls $FWDIR_LIB/*`
-do
-  GOBBLIN_DEP_JARS=${GOBBLIN_DEP_JARS}:$jarFile
-done
-
-# Honor Gobblin dependencies
-export HADOOP_USER_CLASSPATH_FIRST=true
-export HADOOP_CLASSPATH=$GOBBLIN_DEP_JARS:$HADOOP_CLASSPATH
-
-GOBBLIN_CONFIG_FILE=$FWDIR_CONF/gobblin-mapreduce.properties
-
-JT_COMMAND=$([ -z $JOB_TRACKER_URL ] && echo "" || echo "-jt $JOB_TRACKER_URL")
-FS_COMMAND=$([ -z $FS_URL ] && echo "" || echo "-fs $FS_URL")
-
-export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Dgobblin.logs.dir=$GOBBLIN_LOG_DIR -Dlog4j.configuration=file:$FWDIR_CONF/log4j-mapreduce.xml"
-
-# Launch the job to run on Hadoop
-$HADOOP_BIN_DIR/hadoop jar \
-        $FWDIR_LIB/gobblin-runtime-$GOBBLIN_VERSION.jar \
-        gobblin.runtime.mapreduce.CliMRJobLauncher \
-        -D mapreduce.user.classpath.first=true \
-        -D mapreduce.job.user.classpath.first=true \
-        $JT_COMMAND \
-        $FS_COMMAND \
-        -libjars $LIBJARS \
-        -sysconfig $GOBBLIN_CONFIG_FILE \
-        -jobconfig $JOB_CONFIG_FILE
diff --git a/bin/gobblin-standalone.sh b/bin/gobblin-standalone.sh
deleted file mode 100755
index 74f7a89..0000000
--- a/bin/gobblin-standalone.sh
+++ /dev/null
@@ -1,245 +0,0 @@
-#!/bin/bash
-
-function print_usage(){
-  echo "gobblin-standalone.sh <start | status | restart | stop> [OPTION]"
-  echo "Where OPTION can be:"
-  echo "  --workdir <job work dir>                       Gobblin's base work directory: if not set, taken from \${GOBBLIN_WORK_DIR}"
-  echo "  --fwdir <fwd dir>                              Gobblin's dist directory: if not set, taken from \${GOBBLIN_FWDIR}"
-  echo "  --logdir <log dir>                             Gobblin's log directory: if not set, taken from \${GOBBLIN_LOG_DIR}"
-  echo "  --jars <comma-separated list of job jars>      Job jar(s): if not set, "$FWDIR_LIB" is examined"
-  echo "  --conf <directory of job configuration files>  Directory of job configuration files: if not set, taken from ${GOBBLIN_JOB_CONFIG_DIR}"
-  echo "  --conffile <custom config file>                Custom config file: if not set, is ignored. Overwrites properties in "$FWDIR_CONF/gobblin-standalone.properties
-  echo "  --jvmflags <string of jvm flags>               String containing any additional JVM flags to include"
-  echo "  --help                                         Display this help and exit"
-}
-
-# Print an error message and exit
-function die() {
-  echo -e "\nError: $@\n" 1>&2
-  print_usage
-  exit 1
-}
-
-for i in "$@"
-do
-  case "$1" in
-    start|stop|restart|status)
-      ACTION="$1"
-      ;;
-    --workdir)
-      WORK_DIR="$2"
-      shift
-      ;;
-    --fwdir)
-      FWDIR="$2"
-      shift
-      ;;
-    --logdir)
-      GOBBLIN_LOG_DIR="$2"
-      shift
-      ;;
-    --jars)
-      JARS="$2"
-      shift
-      ;;
-    --conf)
-      JOB_CONFIG_DIR="$2"
-      shift
-      ;;
-    --conffile)
-      CUSTOM_CONFIG_FILE="$2"
-      shift
-      ;;
-    --jvmflags)
-      JVM_FLAGS="$2"
-      shift
-      ;;
-    --help)
-      print_usage
-      exit 0
-      ;;
-    *)
-      ;;
-  esac
-  shift
-done
-
-if [ -z "$JAVA_HOME" ]; then
-  die "Environment variable JAVA_HOME not set!"
-fi
-
-check=false
-if [ "$ACTION" == "start" ] || [ "$ACTION" == "restart" ]; then
-  check=true
-fi
-
-if [ -n "$FWDIR" ]; then
-  export GOBBLIN_FWDIR="$FWDIR"
-fi
-
-if [ -z "$GOBBLIN_FWDIR" ] && [ "$check" == true ]; then
-  GOBBLIN_FWDIR="$(cd `dirname $0`/..; pwd)"
-fi
-
-FWDIR_LIB=$GOBBLIN_FWDIR/lib
-FWDIR_CONF=$GOBBLIN_FWDIR/conf
-
-# User defined job configuration directory overrides $GOBBLIN_JOB_CONFIG_DIR
-if [ -n "$JOB_CONFIG_DIR" ]; then
-  export GOBBLIN_JOB_CONFIG_DIR="$JOB_CONFIG_DIR"
-fi
-
-if [ -z "$GOBBLIN_JOB_CONFIG_DIR" ] && [ "$check" == true ]; then
-  die "Environment variable GOBBLIN_JOB_CONFIG_DIR not set!"
-fi
-
-# User defined work directory overrides $GOBBLIN_WORK_DIR
-if [ -n "$WORK_DIR" ]; then
-  export GOBBLIN_WORK_DIR="$WORK_DIR"
-fi
-
-if [ -z "$GOBBLIN_WORK_DIR" ] && [ "$check" == true ]; then
-  die "GOBBLIN_WORK_DIR is not set!"
-fi
-
-# User defined log directory overrides $GOBBLIN_LOG_DIR
-if [ -n "$LOG_DIR" ]; then
-  export GOBBLIN_LOG_DIR="$LOG_DIR"
-fi
-
-if [ -z "$GOBBLIN_LOG_DIR" ] && [ "$check" == true ]; then
-  GOBBLIN_LOG_DIR="$GOBBLIN_FWDIR/logs"
-fi
-
-# User defined JVM flags overrides $GOBBLIN_JVM_FLAGS (if any)
-if [ -n "$JVM_FLAGS" ]; then
-  export GOBBLIN_JVM_FLAGS="$JVM_FLAGS"
-fi
-
-# User defined configuration file overrides $GOBBLIN_CUSTOM_CONFIG_FILE
-if [ -n "$CUSTOM_CONFIG_FILE" ]; then
-  export GOBBLIN_CUSTOM_CONFIG_FILE="$CUSTOM_CONFIG_FILE"
-fi
-
-DEFAULT_CONFIG_FILE=$FWDIR_CONF/gobblin-standalone.properties
-
-PID="$GOBBLIN_WORK_DIR/.gobblin-pid"
-
-if [ -f "$PID" ]; then
-  PID_VALUE=`cat $PID` > /dev/null 2>&1
-else
-  PID_VALUE=""
-fi
-
-if [ ! -d "$GOBBLIN_LOG_DIR" ]; then
-  mkdir "$GOBBLIN_LOG_DIR"
-fi
-
-set_user_jars(){
-  local separator=''
-  if [ -n "$1" ]; then
-    IFS=','
-    read -ra userjars <<< "$1"
-    for userjar in ${userjars[@]}; do
-      add_user_jar "$userjar"
-     done
-    unset IFS
-  fi
-}
-
-add_user_jar(){
-  local dirname=`dirname "$1"`
-  local jarname=`basename "$1"`
-  dirname=`cd "$dirname">/dev/null; pwd`
-  GOBBLIN_JARS+="$separator$dirname/$jarname"
-  separator=':'
-}
-
-# Add the absoulte path of the user defined job jars to the GOBBLIN_JARS
-set_user_jars "$JARS"
-
-start() {
-  for jar in $(ls -d $FWDIR_LIB/*); do
-    if [ "$GOBBLIN_JARS" != "" ]; then
-      GOBBLIN_JARS+=":$jar"
-    else
-      GOBBLIN_JARS=$jar
-    fi
-  done
-
-  CLASSPATH="$GOBBLIN_JARS:$FWDIR_CONF"
-
-  echo "Starting Gobblin standalone daemon"
-  COMMAND="$JAVA_HOME/bin/java -Xmx2g -Xms1g "
-  COMMAND+="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC "
-  COMMAND+="-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution "
-  COMMAND+="-XX:+UseCompressedOops "
-  COMMAND+="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$GOBBLIN_LOG_DIR/ "
-  COMMAND+="-Xloggc:$GOBBLIN_LOG_DIR/gobblin-gc.log "
-  COMMAND+="-Dgobblin.logs.dir=$GOBBLIN_LOG_DIR "
-  COMMAND+="-Dlog4j.configuration=file://$FWDIR_CONF/log4j-standalone.xml "
-  COMMAND+="-cp $CLASSPATH "
-  COMMAND+="-Dorg.quartz.properties=$FWDIR_CONF/quartz.properties "
-  COMMAND+="$GOBBLIN_JVM_FLAGS "
-  COMMAND+="gobblin.scheduler.SchedulerDaemon $DEFAULT_CONFIG_FILE $GOBBLIN_CUSTOM_CONFIG_FILE"
-  echo "Running command:"
-  echo "$COMMAND"
-  nohup $COMMAND & echo $! > $PID
-}
-
-stop() {
-  if [ -f "$PID" ]; then
-    if kill -0 $PID_VALUE > /dev/null 2>&1; then
-      echo 'Stopping Gobblin standalone daemon'
-      kill $PID_VALUE
-      sleep 1
-      if kill -0 $PID_VALUE > /dev/null 2>&1; then
-        echo "Gobblin standalone daemon did not stop gracefully, killing with kill -9"
-        kill -9 $PID_VALUE
-      fi
-    else
-      echo "Process $PID_VALUE is not running"
-    fi
-  else
-    echo "No pid file found"
-  fi
-}
-
-# Check the status of the process
-status() {
-  if [ -f "$PID" ]; then
-    echo "Looking into file: $PID"
-    if kill -0 $PID_VALUE > /dev/null 2>&1; then
-      echo "Gobblin standalone daemon is running with status: "
-      ps -ef | grep -v grep | grep $PID_VALUE
-    else
-      echo "Gobblin standalone daemon is not running"
-      exit 1
-    fi
-  else
-    echo "No pid file found"
-    exit 1
-  fi
-}
-
-case "$ACTION" in
-  "start")
-    start
-    ;;
-  "status")
-    status
-    ;;
-  "restart")
-    stop
-    echo "Sleeping..."
-    sleep 1
-    start
-    ;;
-  "stop")
-    stop
-    ;;
-  *)
-    print_usage
-    exit 1
-    ;;
-esac
diff --git a/bin/gobblin-yarn.sh b/bin/gobblin-yarn.sh
deleted file mode 100755
index 2d20dbc..0000000
--- a/bin/gobblin-yarn.sh
+++ /dev/null
@@ -1,113 +0,0 @@
-#!/usr/bin/env bash
-
-# Print an error message and exit
-function die() {
-  echo -e "\nError: $@\n" 1>&2
-  print_usage
-  exit 1
-}
-
-function print_usage() {
-  echo "gobblin-yarn.sh <start | stop>"
-  echo "Where OPTION can be:"
-  echo "  --jvmflags <string of jvm flags>               String containing any additional JVM flags to include"
-  echo "  --jars <column-separated list of extra jars>   Column-separated list of extra jars to put on the CLASSPATH"
-  echo "  --help                                         Display this help and exit"
-}
-
-function start() {
-  for jarFile in `ls ${FWDIR_LIB}/*`
-  do
-    GOBBLIN_JARS=${GOBBLIN_JARS}:${jarFile}
-  done
-
-  export HADOOP_USER_CLASSPATH_FIRST=true
-
-  CLASSPATH=${FWDIR_CONF}:${GOBBLIN_JARS}:${YARN_CONF_DIR}:${HADOOP_YARN_HOME}/lib
-  if [ -n "$EXTRA_JARS" ]; then
-    CLASSPATH=$CLASSPATH:"$EXTRA_JARS"
-  fi
-
-  COMMAND="$JAVA_HOME/bin/java -cp $CLASSPATH $JVM_FLAGS gobblin.yarn.GobblinYarnAppLauncher"
-
-  echo "Running command:"
-  echo "$COMMAND"
-  nohup $COMMAND & echo $! > $PID
-}
-
-function stop() {
-  if [ -f "$PID" ]; then
-    if kill -0 $PID_VALUE > /dev/null 2>&1; then
-      echo 'Stopping the Gobblin Yarn application'
-      kill $PID_VALUE
-    else
-      echo "Process $PID_VALUE is not running"
-    fi
-  else
-    echo "No pid file found"
-  fi
-}
-
-FWDIR="$(cd `dirname $0`/..; pwd)"
-FWDIR_LIB=${FWDIR}/lib
-FWDIR_CONF=${FWDIR}/conf/yarn
-FWDIR_BIN=${FWDIR}/bin
-
-. ${FWDIR_BIN}/gobblin-env.sh
-
-for i in "$@"
-do
-  case "$1" in
-    start|stop)
-      ACTION="$1"
-      ;;
-    --jvmflags)
-      JVM_FLAGS="$2"
-      shift
-      ;;
-    --jars)
-      EXTRA_JARS="$2"
-      shift
-      ;;
-    --help)
-      print_usage
-      exit 0
-      ;;
-    *)
-      ;;
-  esac
-  shift
-done
-
-if [ -z "$JAVA_HOME" ]; then
-  die "Environment variable JAVA_HOME not set!"
-fi
-
-# Apply default JVM flags unless the user supplied --jvmflags
-if [ -z "$JVM_FLAGS" ]; then
-  JVM_FLAGS="-Xmx1g -Xms512m"
-fi
-
-PID="$FWDIR/.gobblin-yarn-app-pid"
-
-if [ -f "$PID" ]; then
-  PID_VALUE=`cat $PID` > /dev/null 2>&1
-else
-  PID_VALUE=""
-fi
-
-case "$ACTION" in
-  "start")
-    start
-    ;;
-  "stop")
-    stop
-    ;;
-  *)
-    print_usage
-    exit 1
-    ;;
-esac
-
-
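
For reference, a minimal invocation sketch for the launcher script above; the paths and flag values are illustrative, not taken from the repository:

  # Launch the Gobblin Yarn application with extra JVM flags and extra jars on the CLASSPATH
  bin/gobblin-yarn.sh start --jvmflags "-Xmx1g -Xms512m" --jars /opt/extra/a.jar:/opt/extra/b.jar

  # Stop it again; the script reads the process id from the .gobblin-yarn-app-pid file
  bin/gobblin-yarn.sh stop
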
diff --git a/bin/statestore-checker.sh b/bin/statestore-checker.sh
deleted file mode 100755
index 0b0fe67..0000000
--- a/bin/statestore-checker.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-
-FWDIR="$(cd `dirname $0`/..; pwd)"
-
-GOBBLIN_JARS=""
-for jar in $(ls -d $FWDIR/lib/*); do
-  if [ "$GOBBLIN_JARS" != "" ]; then
-    GOBBLIN_JARS+=":$jar"
-  else
-    GOBBLIN_JARS=$jar
-  fi
-done
-
-CLASSPATH=$GOBBLIN_JARS
-CLASSPATH+=":$FWDIR/conf"
-
-java -cp $CLASSPATH gobblin.runtime.util.JobStateToJsonConverter $@
diff --git a/bin/statestore-cleaner.sh b/bin/statestore-cleaner.sh
deleted file mode 100755
index 796e624..0000000
--- a/bin/statestore-cleaner.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/bin/bash
-
-FWDIR="$(cd `dirname $0`/..; pwd)"
-
-GOBBLIN_JARS=""
-for jar in $(ls -d $FWDIR/lib/*); do
-  if [ "$GOBBLIN_JARS" != "" ]; then
-    GOBBLIN_JARS+=":$jar"
-  else
-    GOBBLIN_JARS=$jar
-  fi
-done
-
-CLASSPATH=$GOBBLIN_JARS
-CLASSPATH+=":$FWDIR/conf"
-
-java -cp $CLASSPATH gobblin.metastore.util.StateStoreCleaner $@
diff --git a/build.gradle b/build.gradle
deleted file mode 100644
index 637d081..0000000
--- a/build.gradle
+++ /dev/null
@@ -1,543 +0,0 @@
-// Copyright (C) 2014-2016 LinkedIn Corp. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-// this file except in compliance with the License. You may obtain a copy of the
-// License at  http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software distributed
-// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-// CONDITIONS OF ANY KIND, either express or implied.
-
-buildscript {
-  repositories {
-    maven {
-      url "https://plugins.gradle.org/m2/"
-    }
-  }
-  dependencies {
-    classpath 'gradle.plugin.org.inferred:gradle-processors:1.1.2'
-  }
-}
-
-apply plugin: 'org.inferred.processors'
-apply plugin: 'idea'
-
-idea.project {
-  ext.languageLevel = JavaVersion.VERSION_1_7
-}
-
-ext.build_script_dir = "${projectDir.path}/build_script"
-ext.isDefaultEnvironment = !project.hasProperty('overrideBuildEnvironment')
-
-File getEnvironmentScript()
-{
-  final File env = file(isDefaultEnvironment ? 'defaultEnvironment.gradle' : project.overrideBuildEnvironment)
-  assert env.isFile() : "The environment script [$env] does not exist or is not a file."
-  return env
-}
-
-apply from: environmentScript
-
-ext.publishToMaven = project.hasProperty('publishToMaven')
-if (ext.publishToMaven) {
-    plugins.apply('maven')
-    // Workaround for a bug in gradle's "maven" plugin. See https://discuss.gradle.org/t/error-in-parallel-build/7215/3
-    project.setProperty("org.gradle.parallel", "false")
-}
-
-ext.signArtifacts = !project.hasProperty('doNotSignArtifacts')
-
-if (!project.hasProperty('group') || project.group.length() == 0) {
-    project.group = 'com.linkedin.gobblin'
-}
-
-if (!project.hasProperty('artifactRepository') || project.artifactRepository.length() == 0) {
-    ext.artifactRepository = "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
-}
-
-if (!project.hasProperty('artifactSnapshotRepository') || project.artifactSnapshotRepository.length() == 0) {
-    ext.artifactSnapshotRepository = "https://oss.sonatype.org/content/repositories/snapshots/"
-}
-
-if (!project.hasProperty('version') || project.version == 'unspecified') {
-    exec {
-        commandLine 'git', 'fetch', '-t', 'https://github.com/linkedin/gobblin.git', 'master'
-    }
-    def versionOut = new ByteArrayOutputStream()
-    exec {
-        commandLine 'git', 'describe', '--tags', '--always'
-        standardOutput versionOut
-    }
-    def tagStr = versionOut.toString().trim()
-    println 'Using latest tag for version: ' + tagStr
-    if (tagStr.startsWith("gobblin_")) {
-        project.version = tagStr.substring(8)
-    }
-    else {
-        project.version = tagStr
-    }
-    if (!project.hasProperty('useHadoop2')) {
-      project.version = project.version + "-hadoop1"
-    }
-}
-
-println "name=" + project.name + " group=" + project.group
-println "project.version=" + project.version
-
-if (!project.hasProperty('hadoopVersion')) {
-  if (project.hasProperty('useHadoop2')) {
-    ext.hadoopVersion = '2.3.0'
-  } else {
-    ext.hadoopVersion = '1.2.1'
-  }
-}
-
-if (!project.hasProperty('hiveVersion')) {
-  ext.hiveVersion = '1.0.1'
-}
-
-if (!project.hasProperty('pegasusVersion')) {
-  ext.pegasusVersion = '1.15.9'
-}
-
-if (!project.hasProperty('bytemanVersion')) {
-  ext.bytemanVersion = '2.2.1'
-}
-
-ext.avroVersion = '1.7.7'
-ext.dropwizardMetricsVersion = '3.1.0'
-ext.findBugsVersion = '3.0.0'
-
-ext.externalDependency = [
-  "antlrRuntime": "org.antlr:antlr-runtime:3.5.2",
-  "avro": "org.apache.avro:avro:" + avroVersion,
-  "avroMapredH1": "org.apache.avro:avro-mapred:" + avroVersion + ":hadoop1",
-  "avroMapredH2": "org.apache.avro:avro-mapred:" + avroVersion + ":hadoop2",
-  "commonsCli": "commons-cli:commons-cli:1.3.1",
-  "commonsCodec": "commons-codec:commons-codec:1.10",
-  "commonsDbcp": "commons-dbcp:commons-dbcp:1.4",
-  "commonsEmail": "org.apache.commons:commons-email:1.4",
-  "commonsLang": "commons-lang:commons-lang:2.6",
-  "commonsLang3": "org.apache.commons:commons-lang3:3.4",
-  "commonsConfiguration": "commons-configuration:commons-configuration:1.10",
-  "commonsIo": "commons-io:commons-io:2.4",
-  "commonsMath": "org.apache.commons:commons-math3:3.5",
-  "commonsHttpClient": "commons-httpclient:commons-httpclient:3.1",
-  "commonsCompress":"org.apache.commons:commons-compress:1.10",
-  "commonsPool": "org.apache.commons:commons-pool2:2.4.2",
-  "datanucleusCore": "org.datanucleus:datanucleus-core:3.2.10",
-  "datanucleusRdbms": "org.datanucleus:datanucleus-rdbms:3.2.9",
-  "guava": "com.google.guava:guava:15.0",
-  "gson": "com.google.code.gson:gson:2.6.1",
-  "findBugsAnnotations": "com.google.code.findbugs:jsr305:" + findBugsVersion,
-  "hadoop": "org.apache.hadoop:hadoop-core:" + hadoopVersion,
-  "hadoopCommon": "org.apache.hadoop:hadoop-common:" + hadoopVersion,
-  "hadoopClientCore": "org.apache.hadoop:hadoop-mapreduce-client-core:" + hadoopVersion,
-  "hadoopClientCommon": "org.apache.hadoop:hadoop-mapreduce-client-common:" + hadoopVersion,
-  "hadoopHdfs": "org.apache.hadoop:hadoop-hdfs:" + hadoopVersion,
-  "hadoopAuth": "org.apache.hadoop:hadoop-auth:" + hadoopVersion,
-  "hadoopYarnApi": "org.apache.hadoop:hadoop-yarn-api:" + hadoopVersion,
-  "hadoopYarnCommon": "org.apache.hadoop:hadoop-yarn-common:" + hadoopVersion,
-  "hadoopYarnClient": "org.apache.hadoop:hadoop-yarn-client:" + hadoopVersion,
-  "hadoopYarnMiniCluster": "org.apache.hadoop:hadoop-minicluster:" + hadoopVersion,
-  "hadoopAnnotations": "org.apache.hadoop:hadoop-annotations:" + hadoopVersion,
-  "hadoopAws": "org.apache.hadoop:hadoop-aws:2.6.0",
-  "hiveCommon": "org.apache.hive:hive-common:" + hiveVersion,
-  "hiveService": "org.apache.hive:hive-service:" + hiveVersion,
-  "hiveJdbc": "org.apache.hive:hive-jdbc:" + hiveVersion,
-  "hiveMetastore": "org.apache.hive:hive-metastore:" + hiveVersion,
-  "hiveExec": "org.apache.hive:hive-exec:" + hiveVersion + ":core",
-  "hiveSerDe": "org.apache.hive:hive-serde:" + hiveVersion,
-  "httpclient": "org.apache.httpcomponents:httpclient:4.5",
-  "httpcore": "org.apache.httpcomponents:httpcore:4.4.1",
-  "kafka": "org.apache.kafka:kafka_2.11:0.8.2.1",
-  "kafkaTest": "org.apache.kafka:kafka_2.11:0.8.2.1:test",
-  "kafkaClient": "org.apache.kafka:kafka-clients:0.8.2.1",
-  "quartz": "org.quartz-scheduler:quartz:2.2.1",
-  "testng": "org.testng:testng:6.9.6",
-  "mockserver":"org.mock-server:mockserver-netty:3.10.1",
-  "jacksonCore": "org.codehaus.jackson:jackson-core-asl:1.9.13",
-  "jacksonMapper": "org.codehaus.jackson:jackson-mapper-asl:1.9.13",
-  "jasypt": "org.jasypt:jasypt:1.9.2",
-  "slf4j": "org.slf4j:slf4j-api:1.7.16",
-  "log4j": "log4j:log4j:1.2.17",
-  "log4jextras": "log4j:apache-log4j-extras:1.2.17",
-  "slf4jLog4j": "org.slf4j:slf4j-log4j12:1.7.12",
-  "jodaTime": "joda-time:joda-time:2.9.2",
-  "metricsCore": "io.dropwizard.metrics:metrics-core:" + dropwizardMetricsVersion,
-  "metricsJvm": "io.dropwizard.metrics:metrics-jvm:" + dropwizardMetricsVersion,
-  "metricsGraphite": "io.dropwizard.metrics:metrics-graphite:" + dropwizardMetricsVersion,
-  "jsch": "com.jcraft:jsch:0.1.53",
-  "jdo2": "javax.jdo:jdo2-api:2.1",
-  "azkaban": "com.linkedin.azkaban:azkaban:2.5.0",
-  "commonsVfs": "org.apache.commons:commons-vfs2:2.0",
-  "mysqlConnector": "mysql:mysql-connector-java:5.1.36",
-  "javaxInject": "javax.inject:javax.inject:1",
-  "guice": "com.google.inject:guice:3.0",
-  "derby": "org.apache.derby:derby:10.11.1.1",
-  "mockito": "org.mockito:mockito-core:1.10.19",
-  "salesforceWsc": "com.force.api:force-wsc:29.0.0",
-  "salesforcePartner": "com.force.api:force-partner-api:29.0.0",
-  "scala": "org.scala-lang:scala-library:2.11.6",
-  "influxdbJava": "org.influxdb:influxdb-java:1.5",
-  "libthrift":"org.apache.thrift:libthrift:0.9.3",
-  "lombok":"org.projectlombok:lombok:1.16.4",
-  "mockRunnerJdbc":"com.mockrunner:mockrunner-jdbc:1.0.8",
-  "xerces":"xerces:xercesImpl:2.11.0",
-  "typesafeConfig": "com.typesafe:config:1.2.1",
-  "byteman": "org.jboss.byteman:byteman:" + bytemanVersion,
-  "bytemanBmunit": "org.jboss.byteman:byteman-bmunit:" + bytemanVersion,
-  "bcpgJdk15on": "org.bouncycastle:bcpg-jdk15on:1.52",
-  "bcprovJdk15on": "org.bouncycastle:bcprov-jdk15on:1.52",
-  "calciteCore": "org.apache.calcite:calcite-core:1.2.0-incubating",
-  "calciteAvatica": "org.apache.calcite:calcite-avatica:1.2.0-incubating",
-  "jhyde": "org.pentaho:pentaho-aggdesigner-algorithm:5.1.5-jhyde",
-  "curatorFramework": "org.apache.curator:curator-framework:2.8.0",
-  "curatorTest": "org.apache.curator:curator-test:2.8.0",
-  "hamcrest": "org.hamcrest:hamcrest-all:1.3",
-  "joptSimple": "net.sf.jopt-simple:jopt-simple:4.9",
-  "protobuf": "com.google.protobuf:protobuf-java:2.6.1",
-  "pegasus" : [
-    "data" : "com.linkedin.pegasus:data:" + pegasusVersion,
-    "generator" : "com.linkedin.pegasus:generator:" + pegasusVersion,
-    "restliClient" : "com.linkedin.pegasus:restli-client:" + pegasusVersion,
-    "restliServer" : "com.linkedin.pegasus:restli-server:" + pegasusVersion,
-    "restliTools" : "com.linkedin.pegasus:restli-tools:" + pegasusVersion,
-    "pegasusCommon" : "com.linkedin.pegasus:pegasus-common:" + pegasusVersion,
-    "restliCommon" : "com.linkedin.pegasus:restli-common:" + pegasusVersion,
-    "r2" : "com.linkedin.pegasus:r2:" + pegasusVersion,
-    "d2" : "com.linkedin.pegasus:d2:" + pegasusVersion,
-    "restliNettyStandalone" : "com.linkedin.pegasus:restli-netty-standalone:" + pegasusVersion
-  ],
-  "jetty": [
-          "org.eclipse.jetty:jetty-server:9.2.14.v20151106",
-          "org.eclipse.jetty:jetty-servlet:9.2.14.v20151106"
-  ],
-  "servlet-api": "javax.servlet:servlet-api:3.1.0",
-  "reflections" : "org.reflections:reflections:0.9.9"
-];
-
-if (!isDefaultEnvironment)
-{
-  ext.externalDependency.each { overrideDepKey, overrideDepValue ->
-    if (externalDependency[overrideDepKey] != null)
-    {
-      externalDependency[overrideDepKey] = overrideDepValue
-    }
-  }
-}
-
-task wrapper(type: Wrapper) { gradleVersion = '1.12' }
-
-import javax.tools.ToolProvider
-
-task javadocTarball(type: Tar) {
-  baseName = "gobblin-javadoc-all"
-  destinationDir = new File(project.buildDir, baseName)
-  compression = Compression.GZIP
-  extension = 'tgz'
-  description = "Generates a tar-ball with all javadocs to ${destinationDir}/${archiveName}"
-}
-
-javadocTarball << {
-  def indexFile = new File(destinationDir, "index.md")
-  def version = rootProject.ext.javadocVersion
-  indexFile << """----
-layout: page
-title: Gobblin Javadoc packages ${version}
-permalink: /javadoc/${version}/
-----
-
-"""
-  rootProject.ext.javadocPackages.each {
-    indexFile << "* [${it}](${it})\n"
-  }
-}
-
-// Javadoc initialization for subprojects
-ext.javadocVersion = null != project.version ? project.version.toString() : "latest"
-if (ext.javadocVersion.indexOf('-') > 0) {
-  // Remove any "-" addons from the version
-  ext.javadocVersion = javadocVersion.substring(0, javadocVersion.indexOf('-'))
-}
-
-ext.javadocPackages = new HashSet<String>()
-subprojects.each{Project pr ->
-  if (file(pr.projectDir.absolutePath + "/src/main/java").exists()) {
-    rootProject.ext.javadocPackages += pr.name
-  }
-}
-
-subprojects {
-  plugins.withType(JavaPlugin) {
-
-    // Sometimes generating javadocs can lead to OOM. This may need to be increased.
-    // Also force javadocs to pick up system proxy settings if available
-    javadoc {
-      options.jFlags('-Xmx256m', '-Djava.net.useSystemProxies=true');
-    }
-
-    rootProject.tasks.javadocTarball.dependsOn project.tasks.javadoc
-    if ( rootProject.ext.javadocPackages.contains(project.name)) {
-      rootProject.tasks.javadocTarball.into(project.name){from(fileTree(dir: "${project.buildDir}/docs/javadoc/"))}
-    }
-  }
-}
-
-ext.pomAttributes = {
-  name "${project.name}"
-  packaging 'jar'
-  // optionally artifactId can be defined here
-  description 'Gobblin Ingestion Framework'
-  url 'https://github.com/linkedin/gobblin/'
-
-  scm {
-    connection 'scm:git:git@github.com:linkedin/gobblin.git'
-    developerConnection 'scm:git:git@github.com:linkedin/gobblin.git'
-    url 'git@github.com:linkedin/gobblin.git'
-  }
-
-  licenses {
-    license {
-      name 'The Apache License, Version 2.0'
-      url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
-    }
-  }
-
-  developers {
-    developer {
-      name 'Abhishek Tiwari'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Chavdar Botev'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Issac Buenrostro'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Min Tu'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Narasimha Veeramreddy'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Pradhan Cadabam'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Sahil Takiar'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Shirshanka Das'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Yinan Li'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Ying Dai'
-      organization 'LinkedIn'
-    }
-    developer {
-      name 'Ziyang Liu'
-      organization 'LinkedIn'
-    }
-  }
-}
-
-subprojects {
-  plugins.withType(JavaPlugin) {
-    plugins.apply('idea')
-    plugins.apply('eclipse')
-    plugins.apply('maven')
-    plugins.apply('findbugs')
-
-    sourceCompatibility = JavaVersion.VERSION_1_7
-
-    findbugs {
-      toolVersion = findBugsVersion
-      ignoreFailures = true
-      effort = "max"
-      // The exclude filter file must be under "ligradle/findbugs/"
-      excludeFilter = file(rootProject.projectDir.path + "/ligradle/findbugs/findbugsExclude.xml")
-    }
-
-    test {
-      if (project.hasProperty("printTestOutput")) {
-        testLogging.showStandardStreams = true
-      }
-      useTestNG () {
-        excludeGroups 'ignore', 'performance'
-        if (project.hasProperty('skipTestGroup')) {
-          excludeGroups skipTestGroup
-        }
-        if (project.hasProperty('useHadoop2')) {
-          excludeGroups 'Hadoop1Only'
-        }
-      }
-    }
-
-    configurations {
-      compile
-      dependencies {
-        if (project.hasProperty('useHadoop2')) {
-          compile(externalDependency.hadoopCommon) {
-            exclude module: 'servlet-api'
-          }
-          compile externalDependency.hadoopClientCore
-          compile externalDependency.hadoopAnnotations
-          if (project.name.equals('gobblin-runtime') || project.name.equals('gobblin-test')) {
-            compile externalDependency.hadoopClientCommon
-          }
-        } else {
-          compile externalDependency.hadoop
-        }
-        compile(externalDependency.guava) {
-          force = true
-        }
-
-        // Adds the JDK's tools jar, which is required to run the byteman tests.
-        testCompile (files(((URLClassLoader) ToolProvider.getSystemToolClassLoader()).getURLs()))
-      }
-    }
-
-    if (isDefaultEnvironment) {
-      task sourcesJar(type: Jar, dependsOn: classes) {
-        from sourceSets.main.allSource
-        classifier = 'sources'
-      }
-      task javadocJar(type: Jar) {
-        from javadoc
-        classifier = 'javadoc'
-      }
-      artifacts { archives sourcesJar, javadocJar }
-    }
-
-    plugins.apply('maven')
-
-    project.version = rootProject.version
-    project.group = rootProject.group
-
-    install {
-      repositories {
-        mavenInstaller {
-          mavenLocal()
-          pom.project {
-            name "${project.name}"
-            packaging 'jar'
-            description 'Gobblin Ingestion Framework'
-            url 'https://github.com/linkedin/gobblin/'
-          }
-        }
-      }
-    }
-
-    // Publishing of maven artifacts for subprojects
-    if (rootProject.ext.publishToMaven) {
-      if (rootProject.ext.signArtifacts) {
-        plugins.apply('signing')
-      }
-
-      uploadArchives {
-        repositories {
-          mavenDeployer {
-            beforeDeployment { MavenDeployment deployment ->
-              if (rootProject.ext.signArtifacts) {
-                signing.signPom(deployment)
-              }
-            }
-
-            repository(url: rootProject.artifactRepository) {
-              authentication(userName: ossrhUsername, password: ossrhPassword)
-            }
-
-            snapshotRepository(url: rootProject.artifactSnapshotRepository) {
-              authentication(userName: ossrhUsername, password: ossrhPassword)
-            }
-
-            pom.project pomAttributes
-          }
-        }
-      }
-
-      if (rootProject.ext.signArtifacts) {
-        signing {
-          sign configurations.archives
-        }
-      }
-    }
-
-    // Configure the IDEA plugin to (1) add the codegen as source dirs and (2) work around
-    // an apparent bug in the plugin which doesn't set the outputDir/testOutputDir as documented
-    idea.project {
-      ext.languageLevel = JavaVersion.VERSION_1_7
-    }
-    idea.module {
-      // Gradle docs claim the two settings below are the default, but
-      // the actual defaults appear to be "out/production/$MODULE_NAME"
-      // and "out/test/$MODULE_NAME". Changing it so IDEA and gradle share
-      // the class output directory.
-
-      outputDir = sourceSets.main.output.classesDir
-      testOutputDir = sourceSets.test.output.classesDir
-    }
-
-    // Add standard javadoc repositories so we can reference classes in them using @link
-    tasks.javadoc.options.links "http://typesafehub.github.io/config/latest/api/",
-                                "https://docs.oracle.com/javase/7/docs/api/",
-                                "http://docs.guava-libraries.googlecode.com/git-history/v15.0/javadoc/",
-                                "http://hadoop.apache.org/docs/r${rootProject.ext.hadoopVersion}/api/",
-                                "https://hive.apache.org/javadocs/r${rootProject.ext.hiveVersion}/api/",
-                                "http://avro.apache.org/docs/${avroVersion}/api/java/",
-                                "https://dropwizard.github.io/metrics/${dropwizardMetricsVersion}/apidocs/"
-    rootProject.ext.javadocPackages.each {
-      tasks.javadoc.options.linksOffline "http://linkedin.github.io/gobblin/javadoc/${javadocVersion}/${it}/",
-                                       "${rootProject.buildDir}/${it}/docs/javadoc/"
-    }
-
-    afterEvaluate {
-      // add the standard pegasus dependencies wherever the plugin is used
-      if (project.plugins.hasPlugin('pegasus')) {
-        dependencies {
-          dataTemplateCompile externalDependency.pegasus.data
-          restClientCompile externalDependency.pegasus.restliClient,externalDependency.pegasus.restliCommon,externalDependency.pegasus.restliTools
-        }
-      }
-    }
-  }
-}
-
-//Turn off javadoc lint for Java 8+
-if (JavaVersion.current().isJava8Compatible()) {
-  allprojects {
-    tasks.withType(Javadoc) {
-      options.addStringOption('Xdoclint:none', '-quiet')
-    }
-  }
-}
-
-task dotProjectDependencies(description: 'List of gobblin project dependencies in dot format') << {
-  println "// ========= Start of project dependency graph ======= "
-  println "digraph project_dependencies {"
-  subprojects.each { Project project ->
-    def project_node_name = project.name.replaceAll("-","_")
-    if (project.configurations.findByName("compile") != null) {
-      project.configurations.compile.dependencies.each { Dependency dep ->
-        if (dep instanceof ProjectDependency) {
-          def dep_node_name = dep.dependencyProject.name.replaceAll("-","_")
-          println "\t${project_node_name} -> ${dep_node_name};"
-        }
-      }
-    }
-  }
-  println "}"
-  println "// ========= End of project dependency graph ======= "
-}
-
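
For reference, a sketch of how the optional project properties checked in the build script above could be passed on the command line. The property names (useHadoop2, hadoopVersion, hiveVersion, printTestOutput, skipTestGroup) come from the script itself; the exact invocations and the use of the Gradle wrapper are assumptions:

  # Build against Hadoop 2 with explicit Hadoop/Hive versions (without useHadoop2, "-hadoop1" is appended to the version)
  ./gradlew clean build -PuseHadoop2=true -PhadoopVersion=2.3.0 -PhiveVersion=1.0.1

  # Show test output on the console and exclude an extra TestNG group (group name is illustrative)
  ./gradlew test -PprintTestOutput -PskipTestGroup=performance
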
diff --git a/conf/gobblin-mapreduce.properties b/conf/gobblin-mapreduce.properties
deleted file mode 100644
index 7636dfb..0000000
--- a/conf/gobblin-mapreduce.properties
+++ /dev/null
@@ -1,43 +0,0 @@
-###############################################################################
-###################### Gobblin MapReduce configurations #######################
-###############################################################################
-
-# Thread pool settings for the task executor
-taskexecutor.threadpool.size=2
-taskretry.threadpool.coresize=1
-taskretry.threadpool.maxsize=2
-
-# File system URIs
-fs.uri=hdfs://localhost:8020
-writer.fs.uri=${fs.uri}
-state.store.fs.uri=${fs.uri}
-
-# Writer related configuration properties
-writer.destination.type=HDFS
-writer.output.format=AVRO
-writer.staging.dir=${env:GOBBLIN_WORK_DIR}/task-staging
-writer.output.dir=${env:GOBBLIN_WORK_DIR}/task-output
-
-# Data publisher related configuration properties
-data.publisher.type=gobblin.publisher.BaseDataPublisher
-data.publisher.final.dir=${env:GOBBLIN_WORK_DIR}/job-output
-data.publisher.replace.final.dir=false
-
-# Directory where job/task state files are stored
-state.store.dir=${env:GOBBLIN_WORK_DIR}/state-store
-
-# Directory where error files from the quality checkers are stored
-qualitychecker.row.err.file=${env:GOBBLIN_WORK_DIR}/err
-
-# Directory where job locks are stored
-job.lock.dir=${env:GOBBLIN_WORK_DIR}/locks
-
-# Directory where metrics log files are stored
-metrics.log.dir=${env:GOBBLIN_WORK_DIR}/metrics
-
-# Interval of task state reporting in milliseconds
-task.status.reportintervalinms=5000
-
-# MapReduce properties
-mr.job.root.dir=${env:GOBBLIN_WORK_DIR}/working
-
diff --git a/conf/gobblin-standalone.properties b/conf/gobblin-standalone.properties
deleted file mode 100644
index 9a31d5f..0000000
--- a/conf/gobblin-standalone.properties
+++ /dev/null
@@ -1,46 +0,0 @@
-###############################################################################
-###################### Gobblin standalone configurations ######################
-###############################################################################
-
-# Thread pool settings for the task executor
-taskexecutor.threadpool.size=2
-taskretry.threadpool.coresize=1
-taskretry.threadpool.maxsize=2
-
-# File system URIs
-fs.uri=file:///
-writer.fs.uri=${fs.uri}
-state.store.fs.uri=${fs.uri}
-
-# Writer related configuration properties
-writer.destination.type=HDFS
-writer.output.format=AVRO
-writer.staging.dir=${env:GOBBLIN_WORK_DIR}/task-staging
-writer.output.dir=${env:GOBBLIN_WORK_DIR}/task-output
-
-# Data publisher related configuration properties
-data.publisher.type=gobblin.publisher.BaseDataPublisher
-data.publisher.final.dir=${env:GOBBLIN_WORK_DIR}/job-output
-data.publisher.replace.final.dir=false
-
-# Directory where job configuration files are stored
-jobconf.dir=${env:GOBBLIN_JOB_CONFIG_DIR}
-
-# Directory where job/task state files are stored
-state.store.dir=${env:GOBBLIN_WORK_DIR}/state-store
-
-# Directory where commit sequences are stored
-gobblin.runtime.commit.sequence.store.dir=${env:GOBBLIN_WORK_DIR}/commit-sequence-store
-
-# Directory where error files from the quality checkers are stored
-qualitychecker.row.err.file=${env:GOBBLIN_WORK_DIR}/err
-
-# Directory where job locks are stored
-job.lock.dir=${env:GOBBLIN_WORK_DIR}/locks
-
-# Directory where metrics log files are stored
-metrics.log.dir=${env:GOBBLIN_WORK_DIR}/metrics
-
-# Interval of task state reporting in milliseconds
-task.status.reportintervalinms=5000
-
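
The ${env:GOBBLIN_WORK_DIR} and ${env:GOBBLIN_JOB_CONFIG_DIR} references in the property files above are resolved from the environment of the Gobblin process. A minimal setup sketch, with illustrative directory locations:

  # Working directory used for task-staging, task-output, state-store, locks and metrics
  export GOBBLIN_WORK_DIR=/var/gobblin/work

  # Directory holding the Gobblin job configuration files (picked up via jobconf.dir)
  export GOBBLIN_JOB_CONFIG_DIR=/etc/gobblin/jobs
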
diff --git a/conf/log4j-compaction.xml b/conf/log4j-compaction.xml
deleted file mode 100644
index 35c77ee..0000000
--- a/conf/log4j-compaction.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
-
-<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
-  <appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <param name="Target" value="System.out" />
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern"
-        value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n" />
-    </layout>
-  </appender>
-
-  <appender name="file" class="org.apache.log4j.RollingFileAppender">
-    <param name="append" value="false" />
-    <param name="file" value="logs/gobblin-compaction.log" />
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern"
-        value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n" />
-    </layout>
-  </appender>
-
-  <logger name="gobblin.compaction" additivity="false">
-    <level value="info" />
-    <appender-ref ref="console" />
-  </logger>
-
-  <root>
-    <level value="error" />
-    <appender-ref ref="file" />
-  </root>
-
-</log4j:configuration>
\ No newline at end of file
diff --git a/conf/log4j-mapreduce.xml b/conf/log4j-mapreduce.xml
deleted file mode 100644
index eb1b1fe..0000000
--- a/conf/log4j-mapreduce.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
-
-<log4j:configuration>
-
-  <appender name="FileRoll" class="org.apache.log4j.rolling.RollingFileAppender">
-    <param name="file" value="${gobblin.logs.dir}/gobblin-current.log" />
-    <param name="append" value="true" />
-    <param name="encoding" value="UTF-8" />
-
-    <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
-      <param name="FileNamePattern" value="${gobblin.logs.dir}/archive/gobblin.%d{yyyy-MM-dd}.log"/>
-    </rollingPolicy>
-    
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n"/>
-    </layout>
-  </appender>
-  
-  <appender name="Console" class="org.apache.log4j.ConsoleAppender">
-    <param name="encoding" value="UTF-8" />
-
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n"/>
-    </layout>
-  </appender>
-
-  <logger name="gobblin.runtime" additivity="false">
-    <level value="INFO"/>
-    <appender-ref ref="FileRoll" />
-  </logger>
-  
-  <logger name="gobblin.runtime.mapreduce.CliMRJobLauncher" additivity="false">
-    <level value="ERROR"/>
-    <appender-ref ref="Console" />
-  </logger>
-
-  <root>
-    <priority value ="INFO" /> 
-    <appender-ref ref="FileRoll" />
-  </root>
-  
-</log4j:configuration>
diff --git a/conf/log4j-standalone.xml b/conf/log4j-standalone.xml
deleted file mode 100644
index 4dd58b4..0000000
--- a/conf/log4j-standalone.xml
+++ /dev/null
@@ -1,33 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
-
-<log4j:configuration>
-
-  <appender name="FileRoll" class="org.apache.log4j.rolling.RollingFileAppender">
-    <param name="file" value="${gobblin.logs.dir}/gobblin-current.log" />
-    <param name="append" value="true" />
-    <param name="encoding" value="UTF-8" />
-
-    <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
-      <param name="FileNamePattern" value="${gobblin.logs.dir}/archive/gobblin.%d{yyyy-MM-dd}.log"/>
-    </rollingPolicy>
-    
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n"/>
-    </layout>
-  </appender>
-
-  <logger name="org.apache.commons.httpclient">
-  	<level value="DEBUG"/>
-  </logger>
-
-  <logger name="httpclient.wire">
-  	<level value="ERROR"/>
-  </logger>
-
-  <root>
-    <priority value ="INFO" /> 
-    <appender-ref ref="FileRoll" />
-  </root>
-  
-</log4j:configuration>
diff --git a/conf/log4j.properties b/conf/log4j.properties
deleted file mode 100644
index 1da8230..0000000
--- a/conf/log4j.properties
+++ /dev/null
@@ -1,8 +0,0 @@
-# Root logger option
-log4j.rootLogger=INFO, stdout
-
-# Direct log messages to stdout
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.Target=System.out
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
diff --git a/conf/log4j.xml b/conf/log4j.xml
deleted file mode 100644
index b713d49..0000000
--- a/conf/log4j.xml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
-
-<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
-  <appender name="console" class="org.apache.log4j.ConsoleAppender">
-    <param name="Target" value="System.out" />
-    <layout class="org.apache.log4j.PatternLayout">
-      <param name="ConversionPattern"
-        value="%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} %L - %m%n" />
-    </layout>
-  </appender>
-
-  <root>
-    <level value="info" />
-    <appender-ref ref="console" />
-  </root>
-
-</log4j:configuration>
\ No newline at end of file
diff --git a/conf/quartz.properties b/conf/quartz.properties
deleted file mode 100644
index 71b2f25..0000000
--- a/conf/quartz.properties
+++ /dev/null
@@ -1,3 +0,0 @@
-org.quartz.scheduler.instanceName = LocalJobScheduler
-org.quartz.threadPool.threadCount = 3
-org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
diff --git a/conf/yarn/application.conf b/conf/yarn/application.conf
deleted file mode 100644
index 2bbdf88..0000000
--- a/conf/yarn/application.conf
+++ /dev/null
@@ -1,51 +0,0 @@
-# Sample configuration properties for the Gobblin Yarn application launcher
-
-# Yarn/Helix configuration properties
-gobblin.yarn.helix.cluster.name=GobblinYarn
-gobblin.yarn.app.name=GobblinYarn
-gobblin.yarn.app.master.memory.mbs=256
-gobblin.yarn.initial.containers=2
-gobblin.yarn.container.memory.mbs=512
-gobblin.yarn.conf.dir=<directory where Gobblin on Yarn related configuration files are located>
-gobblin.yarn.lib.jars.dir=<directory where Gobblin on Yarn lib jars are located>
-gobblin.yarn.app.master.files.local=${gobblin.yarn.conf.dir}"/log4j-yarn.properties,"${gobblin.yarn.conf.dir}"/application.conf,"${gobblin.yarn.conf.dir}"/reference.conf"
-gobblin.yarn.container.files.local=${gobblin.yarn.app.master.files.local}
-gobblin.yarn.job.conf.path=<path where Gobblin job configuration files are located>
-gobblin.yarn.logs.sink.root.dir=<root sink directory for aggregated application/container logs stored on the launcher side>
-
-# File system URIs
-writer.fs.uri=${fs.uri}
-state.store.fs.uri=${fs.uri}
-
-# Writer related configuration properties
-writer.destination.type=HDFS
-writer.output.format=AVRO
-writer.staging.dir=${gobblin.yarn.work.dir}/task-staging
-writer.output.dir=${gobblin.yarn.work.dir}/task-output
-
-# Data publisher related configuration properties
-data.publisher.type=gobblin.publisher.BaseDataPublisher
-data.publisher.final.dir=${gobblin.yarn.work.dir}/job-output
-data.publisher.replace.final.dir=false
-
-# Directory where job/task state files are stored
-state.store.dir=${gobblin.yarn.work.dir}/state-store
-
-# Directory where error files from the quality checkers are stored
-qualitychecker.row.err.file=${gobblin.yarn.work.dir}/err
-
-# Disable job locking for now
-job.lock.enabled=false
-
-# Directory where job locks are stored
-job.lock.dir=${gobblin.yarn.work.dir}/locks
-
-# Directory where metrics log files are stored
-metrics.log.dir=${gobblin.yarn.work.dir}/metrics
-
-# Interval of task state reporting in milliseconds
-task.status.reportintervalinms=1000
-
-# Enable metrics / events
-metrics.enabled=true
-
diff --git a/conf/yarn/log4j-yarn.properties b/conf/yarn/log4j-yarn.properties
deleted file mode 100755
index a7ffb68..0000000
--- a/conf/yarn/log4j-yarn.properties
+++ /dev/null
@@ -1,27 +0,0 @@
-#   Licensed under the Apache License, Version 2.0 (the "License");
-#   you may not use this file except in compliance with the License.
-#   You may obtain a copy of the License at
-#
-#       http://www.apache.org/licenses/LICENSE-2.0
-#
-#   Unless required by applicable law or agreed to in writing, software
-#   distributed under the License is distributed on an "AS IS" BASIS,
-#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#   See the License for the specific language governing permissions and
-#   limitations under the License.
-
-# log4j configuration used during build and unit tests
-
-log4j.rootLogger=info,stdout
-log4j.threshhold=ALL
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss z} %-5p [%t] %C %X{tableName} - %m%n
-
-# Suppressed loggers
-log4j.logger.org.apache.helix.controller.GenericHelixController=ERROR
-log4j.logger.org.apache.helix.controller.stages=ERROR
-log4j.logger.org.apache.helix.controller.strategy.AutoRebalanceStrategy=ERROR
-log4j.logger.org.apache.helix.manager.zk=ERROR
-log4j.logger.org.apache.helix.monitoring.mbeans.ClusterStatusMonitor=ERROR
-log4j.logger.org.apache.helix.store.zk.AutoFallbackPropertyStore=ERROR
\ No newline at end of file
diff --git a/conf/yarn/quartz.properties b/conf/yarn/quartz.properties
deleted file mode 120000
index da8e22c..0000000
--- a/conf/yarn/quartz.properties
+++ /dev/null
@@ -1 +0,0 @@
-../quartz.properties
\ No newline at end of file
diff --git a/conf/yarn/reference.conf b/conf/yarn/reference.conf
deleted file mode 100644
index 9c07238..0000000
--- a/conf/yarn/reference.conf
+++ /dev/null
@@ -1,24 +0,0 @@
-# Sample configuration properties with default values
-
-# Yarn/Helix configuration properties
-gobblin.yarn.app.queue=default
-gobblin.yarn.helix.cluster.name=GobblinYarn
-gobblin.yarn.app.name=GobblinYarn
-gobblin.yarn.app.master.memory.mbs=512
-gobblin.yarn.app.master.cores=1
-gobblin.yarn.app.report.interval.minutes=5
-gobblin.yarn.max.get.app.report.failures=4
-gobblin.yarn.email.notification.on.shutdown=false
-gobblin.yarn.initial.containers=1
-gobblin.yarn.container.memory.mbs=512
-gobblin.yarn.container.cores=1
-gobblin.yarn.container.affinity.enabled=true
-gobblin.yarn.helix.instance.max.retries=2
-gobblin.yarn.keytab.login.interval.minutes=1440
-gobblin.yarn.token.renew.interval.minutes=720
-gobblin.yarn.work.dir=/gobblin
-gobblin.yarn.zk.connection.string="localhost:2181"
-
-fs.uri="hdfs://localhost:9000"
-
-job.execinfo.server.enabled=false
\ No newline at end of file
diff --git a/defaultEnvironment.gradle b/defaultEnvironment.gradle
deleted file mode 100644
index 0e4f7d7..0000000
--- a/defaultEnvironment.gradle
+++ /dev/null
@@ -1,23 +0,0 @@
-// Copyright (C) 2014-2015 LinkedIn Corp. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-// this file except in compliance with the License. You may obtain a copy of the
-// License at  http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software distributed
-// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-// CONDITIONS OF ANY KIND, either express or implied.
-
-subprojects {
-  repositories {
-    mavenCentral()
-    maven {
-      url "https://repository.cloudera.com/artifactory/cloudera-repos/"
-    }
-    maven {
-      url "http://conjars.org/repo"
-    }
-  }
-
-  project.buildDir = new File(project.rootProject.buildDir, project.name)
-}
diff --git a/files/codestyle-eclipse.xml b/files/codestyle-eclipse.xml
new file mode 100644
index 0000000..f048fc3
--- /dev/null
+++ b/files/codestyle-eclipse.xml
@@ -0,0 +1,291 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<profiles version="12">
+<profile kind="CodeFormatterProfile" name="LinkedIn Style" version="12">
+<setting id="org.eclipse.jdt.core.formatter.comment.insert_new_line_before_root_tags" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.disabling_tag" value="@formatter:off"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_annotation" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_arguments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_anonymous_type_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_case" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_brace_in_array_initializer" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.new_lines_at_block_boundaries" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_annotation_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_before_closing_brace_in_array_initializer" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_annotation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_field" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_while" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.use_on_off_tags" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_annotation_type_member_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_before_else_in_if_statement" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_prefix_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.keep_else_statement_on_same_line" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_ellipsis" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.insert_new_line_for_parameter" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_annotation_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_breaks_compare_to_cases" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_multiple_fields" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_expressions_in_array_initializer" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_conditional_expression" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_for" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_binary_operator" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_question_in_wildcard" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_array_initializer" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_enum_constant" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_before_finally_in_try_statement" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_local_variable" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_before_catch_in_try_statement" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_while" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_after_package" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_type_parameters" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.continuation_indentation" value="2"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_postfix_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_method_invocation" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_superinterfaces" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_new_chunk" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_binary_operator" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_package" value="0"/>
+<setting id="org.eclipse.jdt.core.compiler.source" value="1.6"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_constant_arguments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_constructor_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_line_comments" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_arguments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_enum_declarations" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.join_wrapped_lines" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_block" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_explicit_constructor_call" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_invocation_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_member_type" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.align_type_members_on_columns" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_enum_constant" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_for" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_method_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_selector_in_method_invocation" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_switch" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_unary_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_colon_in_case" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.indent_parameter_description" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_switch" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_block_comment" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.lineSplit" value="120"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_if" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_brackets_in_array_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_parenthesized_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_explicitconstructorcall_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_constructor_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_first_class_body_declaration" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_method" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indentation.size" value="2"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.enabling_tag" value="@formatter:on"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_enum_constant" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_superclass_in_type_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_assignment" value="16"/>
+<setting id="org.eclipse.jdt.core.compiler.problem.assertIdentifier" value="error"/>
+<setting id="org.eclipse.jdt.core.formatter.tabulation.char" value="space"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_parameters" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_try_resources" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_prefix_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_statements_compare_to_body" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_method" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.wrap_outer_expressions_when_nested" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.format_guardian_clause_on_one_line" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_for" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_cast" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_parameters_in_constructor_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_colon_in_labeled_statement" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_annotation_type_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_method_body" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_method_declaration" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_method_invocation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_try" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_allocation_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_enum_constant" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_annotation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_at_in_annotation_type_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_throws" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_if" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_switch" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_throws" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_return" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_question_in_conditional" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_question_in_wildcard" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_try" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_allocation_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.preserve_white_space_between_code_and_line_comments" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_parenthesized_expression_in_throw" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_type_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.compiler.problem.enumIdentifier" value="error"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_switch" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_ellipsis" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_block" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_inits" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_method_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.compact_else_if" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.wrap_before_or_operator_multicatch" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_array_initializer" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_increments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.format_line_comment_starting_on_first_column" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_bracket_in_array_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_field" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_enum_constant" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.indent_root_tags" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_declarations" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_union_type_in_multicatch" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_explicitconstructorcall_arguments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_switch" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_method_declaration_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_superinterfaces" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_allocation_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.tabulation.size" value="2"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_opening_brace_in_array_initializer" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_closing_brace_in_block" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_constant" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_constructor_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_throws" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_if" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.clear_blank_lines_in_javadoc_comment" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_constructor_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_assignment_operator" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_assignment_operator" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_empty_lines" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_synchronized" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_closing_paren_in_cast" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_declaration_parameters" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_block_in_case" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.number_of_empty_lines_to_preserve" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_catch" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_constructor_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_method_invocation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_bracket_in_array_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_qualified_allocation_expression" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_and_in_type_parameter" value="insert"/>
+<setting id="org.eclipse.jdt.core.compiler.compliance" value="1.6"/>
+<setting id="org.eclipse.jdt.core.formatter.continuation_indentation_for_array_initializer" value="2"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_brackets_in_array_allocation_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_at_in_annotation_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_allocation_expression" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_cast" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_unary_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_parameterized_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_anonymous_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.keep_empty_array_initializer_on_one_line" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_enum_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.keep_imple_if_on_one_line" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_constructor_declaration_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_closing_angle_bracket_in_type_parameters" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_at_end_of_file_if_missing" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_colon_in_for" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_labeled_statement" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_parameterized_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_type_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_binary_expression" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_enum_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_type" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_while" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode" value="enabled"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_try" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.put_empty_statement_on_new_line" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_label" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_parameter" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_type_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_method_invocation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_before_while_in_do_statement" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_enum_constant" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_javadoc_comments" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.line_length" value="120"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_after_annotation_on_package" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_between_import_groups" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_enum_constant_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_semicolon" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_constructor_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.number_of_blank_lines_at_beginning_of_method_body" value="0"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_conditional" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_type_header" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_annotation_type_member_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.wrap_before_binary_operator" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_declaration_header" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_between_type_declarations" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_synchronized" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_statements_compare_to_block" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_superinterfaces_in_enum_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.join_lines_in_comments" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_question_in_conditional" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_field_declarations" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_compact_if" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_for_inits" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_switchstatements_compare_to_cases" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_array_initializer" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_default" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_and_in_type_parameter" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_parens_in_constructor_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_before_imports" value="1"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_colon_in_assert" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_html" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_throws_clause_in_method_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_angle_bracket_in_type_parameters" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_bracket_in_array_allocation_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_anonymous_type_declaration" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_colon_in_conditional" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_angle_bracket_in_parameterized_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_for" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_postfix_operator" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_source_code" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_synchronized" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_allocation_expression" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_constructor_declaration_throws" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_parameters_in_method_declaration" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_brace_in_array_initializer" value="insert"/>
+<setting id="org.eclipse.jdt.core.compiler.codegen.targetPlatform" value="1.6"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_resources_in_try" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.use_tabs_only_for_leading_indentations" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_arguments_in_annotation" value="16"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_header" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.format_block_comments" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_enum_constant" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.alignment_for_enum_constants" value="49"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_new_line_in_empty_block" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_annotation_declaration_header" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_parenthesized_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_parenthesized_expression" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_catch" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_multiple_local_declarations" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_switch" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_comma_in_for_increments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_invocation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_colon_in_assert" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.brace_position_for_type_declaration" value="end_of_line"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_brace_in_array_initializer" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_between_empty_braces_in_array_initializer" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_opening_paren_in_method_declaration" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_for" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_paren_in_catch" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_parameterized_type_reference" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_field_declarations" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_closing_paren_in_annotation" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_parameterized_type_reference" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_method_invocation_arguments" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.comment.new_lines_at_javadoc_boundaries" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.blank_lines_after_imports" value="2"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_comma_in_multiple_local_declarations" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.indent_body_declarations_compare_to_enum_constant_header" value="true"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_after_semicolon_in_for" value="insert"/>
+<setting id="org.eclipse.jdt.core.formatter.never_indent_line_comments_on_first_column" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_semicolon_in_try_resources" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.insert_space_before_opening_angle_bracket_in_type_arguments" value="do not insert"/>
+<setting id="org.eclipse.jdt.core.formatter.never_indent_block_comments_on_first_column" value="false"/>
+<setting id="org.eclipse.jdt.core.formatter.keep_then_statement_on_same_line" value="false"/>
+</profile>
+</profiles>
diff --git a/files/codestyle-intellij.xml b/files/codestyle-intellij.xml
new file mode 100644
index 0000000..4ca2fed
--- /dev/null
+++ b/files/codestyle-intellij.xml
@@ -0,0 +1,485 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<code_scheme name="LinkedIn Style">
+  <option name="JAVA_INDENT_OPTIONS">
+    <value>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+      <option name="USE_TAB_CHARACTER" value="false" />
+      <option name="SMART_TABS" value="false" />
+      <option name="LABEL_INDENT_SIZE" value="0" />
+      <option name="LABEL_INDENT_ABSOLUTE" value="false" />
+      <option name="USE_RELATIVE_INDENTS" value="false" />
+    </value>
+  </option>
+  <option name="OTHER_INDENT_OPTIONS">
+    <value>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+      <option name="USE_TAB_CHARACTER" value="false" />
+      <option name="SMART_TABS" value="false" />
+      <option name="LABEL_INDENT_SIZE" value="0" />
+      <option name="LABEL_INDENT_ABSOLUTE" value="false" />
+      <option name="USE_RELATIVE_INDENTS" value="false" />
+    </value>
+  </option>
+  <option name="FIELD_NAME_PREFIX" value="_" />
+  <option name="CLASS_COUNT_TO_USE_IMPORT_ON_DEMAND" value="1000" />
+  <option name="NAMES_COUNT_TO_USE_IMPORT_ON_DEMAND" value="5" />
+  <option name="IMPORT_LAYOUT_TABLE">
+    <value>
+      <package name="" withSubpackages="true" static="false" />
+      <emptyLine />
+      <package name="" withSubpackages="true" static="true" />
+    </value>
+  </option>
+  <option name="ENABLE_JAVADOC_FORMATTING" value="false" />
+  <option name="JD_ADD_BLANK_AFTER_PARM_COMMENTS" value="true" />
+  <option name="JD_ADD_BLANK_AFTER_RETURN" value="true" />
+  <option name="JD_KEEP_INVALID_TAGS" value="false" />
+  <option name="KEEP_LINE_BREAKS" value="false" />
+  <option name="KEEP_BLANK_LINES_IN_DECLARATIONS" value="1" />
+  <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+  <option name="KEEP_BLANK_LINES_BEFORE_RBRACE" value="0" />
+  <option name="BLANK_LINES_AFTER_PACKAGE" value="2" />
+  <option name="BLANK_LINES_AFTER_IMPORTS" value="2" />
+  <option name="BRACE_STYLE" value="2" />
+  <option name="CLASS_BRACE_STYLE" value="2" />
+  <option name="METHOD_BRACE_STYLE" value="2" />
+  <option name="ELSE_ON_NEW_LINE" value="true" />
+  <option name="WHILE_ON_NEW_LINE" value="true" />
+  <option name="CATCH_ON_NEW_LINE" value="true" />
+  <option name="FINALLY_ON_NEW_LINE" value="true" />
+  <option name="ALIGN_MULTILINE_PARAMETERS_IN_CALLS" value="true" />
+  <option name="ALIGN_MULTILINE_THROWS_LIST" value="true" />
+  <option name="ALIGN_MULTILINE_EXTENDS_LIST" value="true" />
+  <option name="CALL_PARAMETERS_WRAP" value="5" />
+  <option name="METHOD_PARAMETERS_WRAP" value="5" />
+  <option name="THROWS_LIST_WRAP" value="1" />
+  <option name="THROWS_KEYWORD_WRAP" value="2" />
+  <option name="WRAP_COMMENTS" value="true" />
+  <XML>
+    <option name="XML_LEGACY_SETTINGS_IMPORTED" value="true" />
+  </XML>
+  <ADDITIONAL_INDENT_OPTIONS fileType="scala">
+    <option name="INDENT_SIZE" value="2" />
+    <option name="TAB_SIZE" value="2" />
+  </ADDITIONAL_INDENT_OPTIONS>
+  <ADDITIONAL_INDENT_OPTIONS fileType="txt">
+    <option name="INDENT_SIZE" value="2" />
+  </ADDITIONAL_INDENT_OPTIONS>
+  <codeStyleSettings language="CFML">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="BRACE_STYLE" value="2" />
+    <option name="ELSE_ON_NEW_LINE" value="true" />
+    <option name="WHILE_ON_NEW_LINE" value="true" />
+    <option name="CATCH_ON_NEW_LINE" value="true" />
+    <option name="ALIGN_MULTILINE_PARAMETERS_IN_CALLS" value="true" />
+    <option name="CALL_PARAMETERS_WRAP" value="5" />
+    <option name="METHOD_PARAMETERS_WRAP" value="5" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+  </codeStyleSettings>
+  <codeStyleSettings language="CSS">
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="CoffeeScript">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="ALIGN_MULTILINE_PARAMETERS_IN_CALLS" value="true" />
+    <option name="METHOD_PARAMETERS_WRAP" value="1" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+    <indentOptions>
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="ECMA Script Level 4">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="BLANK_LINES_AFTER_PACKAGE" value="2" />
+    <option name="BLANK_LINES_AFTER_IMPORTS" value="2" />
+    <option name="BRACE_STYLE" value="2" />
+    <option name="CLASS_BRACE_STYLE" value="2" />
+    <option name="METHOD_BRACE_STYLE" value="2" />
+    <option name="ELSE_ON_NEW_LINE" value="true" />
+    <option name="WHILE_ON_NEW_LINE" value="true" />
+    <option name="CATCH_ON_NEW_LINE" value="true" />
+    <option name="FINALLY_ON_NEW_LINE" value="true" />
+    <option name="ALIGN_MULTILINE_PARAMETERS_IN_CALLS" value="true" />
+    <option name="ALIGN_MULTILINE_EXTENDS_LIST" value="true" />
+    <option name="CALL_PARAMETERS_WRAP" value="5" />
+    <option name="METHOD_PARAMETERS_WRAP" value="5" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+  </codeStyleSettings>
+  <codeStyleSettings language="GSP">
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="Groovy">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_DECLARATIONS" value="1" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="KEEP_BLANK_LINES_BEFORE_RBRACE" value="0" />
+    <option name="BLANK_LINES_AFTER_PACKAGE" value="2" />
+    <option name="BLANK_LINES_AFTER_IMPORTS" value="2" />
+    <option name="ALIGN_MULTILINE_PARAMETERS" value="false" />
+    <option name="CALL_PARAMETERS_WRAP" value="1" />
+    <option name="METHOD_PARAMETERS_WRAP" value="1" />
+    <option name="EXTENDS_LIST_WRAP" value="1" />
+    <option name="THROWS_LIST_WRAP" value="1" />
+    <option name="THROWS_KEYWORD_WRAP" value="2" />
+    <option name="METHOD_CALL_CHAIN_WRAP" value="1" />
+    <option name="BINARY_OPERATION_WRAP" value="1" />
+    <option name="TERNARY_OPERATION_WRAP" value="1" />
+    <option name="KEEP_SIMPLE_METHODS_IN_ONE_LINE" value="false" />
+    <option name="KEEP_SIMPLE_CLASSES_IN_ONE_LINE" value="false" />
+    <option name="FOR_STATEMENT_WRAP" value="1" />
+    <option name="IF_BRACE_FORCE" value="3" />
+    <option name="WHILE_BRACE_FORCE" value="3" />
+    <option name="FOR_BRACE_FORCE" value="3" />
+    <option name="ENUM_CONSTANTS_WRAP" value="5" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="HTML">
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="JAVA">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_DECLARATIONS" value="1" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="KEEP_BLANK_LINES_BEFORE_RBRACE" value="0" />
+    <option name="BLANK_LINES_AFTER_IMPORTS" value="2" />
+    <option name="ALIGN_MULTILINE_PARAMETERS" value="false" />
+    <option name="ALIGN_MULTILINE_RESOURCES" value="false" />
+    <option name="ALIGN_MULTILINE_FOR" value="false" />
+    <option name="ALIGN_MULTILINE_THROWS_LIST" value="true" />
+    <option name="ALIGN_MULTILINE_EXTENDS_LIST" value="true" />
+    <option name="CALL_PARAMETERS_WRAP" value="1" />
+    <option name="METHOD_PARAMETERS_WRAP" value="1" />
+    <option name="RESOURCE_LIST_WRAP" value="1" />
+    <option name="THROWS_LIST_WRAP" value="1" />
+    <option name="THROWS_KEYWORD_WRAP" value="2" />
+    <option name="METHOD_CALL_CHAIN_WRAP" value="1" />
+    <option name="BINARY_OPERATION_WRAP" value="1" />
+    <option name="BINARY_OPERATION_SIGN_ON_NEXT_LINE" value="true" />
+    <option name="TERNARY_OPERATION_WRAP" value="1" />
+    <option name="TERNARY_OPERATION_SIGNS_ON_NEXT_LINE" value="true" />
+    <option name="FOR_STATEMENT_WRAP" value="1" />
+    <option name="ASSIGNMENT_WRAP" value="1" />
+    <option name="WRAP_COMMENTS" value="true" />
+    <option name="IF_BRACE_FORCE" value="3" />
+    <option name="DOWHILE_BRACE_FORCE" value="3" />
+    <option name="WHILE_BRACE_FORCE" value="3" />
+    <option name="FOR_BRACE_FORCE" value="3" />
+    <option name="VARIABLE_ANNOTATION_WRAP" value="2" />
+    <option name="ENUM_CONSTANTS_WRAP" value="5" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+    <arrangement>
+      <groups>
+        <group>
+          <type>GETTERS_AND_SETTERS</type>
+          <order>KEEP</order>
+        </group>
+      </groups>
+      <rules>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PUBLIC />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PROTECTED />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PACKAGE_PRIVATE />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PRIVATE />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PUBLIC />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PROTECTED />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PACKAGE_PRIVATE />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PRIVATE />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PUBLIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PROTECTED />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PACKAGE_PRIVATE />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <FINAL />
+              <PRIVATE />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PUBLIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PROTECTED />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PACKAGE_PRIVATE />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <FIELD />
+              <PRIVATE />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <FIELD />
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <CONSTRUCTOR />
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <METHOD />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <METHOD />
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <ENUM />
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <INTERFACE />
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <AND>
+              <CLASS />
+              <STATIC />
+            </AND>
+          </match>
+        </rule>
+        <rule>
+          <match>
+            <CLASS />
+          </match>
+        </rule>
+      </rules>
+    </arrangement>
+  </codeStyleSettings>
+  <codeStyleSettings language="JSP">
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="JavaScript">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="ALIGN_MULTILINE_PARAMETERS" value="false" />
+    <option name="ALIGN_MULTILINE_FOR" value="false" />
+    <option name="CALL_PARAMETERS_WRAP" value="1" />
+    <option name="METHOD_PARAMETERS_WRAP" value="1" />
+    <option name="BINARY_OPERATION_WRAP" value="1" />
+    <option name="BINARY_OPERATION_SIGN_ON_NEXT_LINE" value="true" />
+    <option name="TERNARY_OPERATION_WRAP" value="1" />
+    <option name="TERNARY_OPERATION_SIGNS_ON_NEXT_LINE" value="true" />
+    <option name="FOR_STATEMENT_WRAP" value="1" />
+    <option name="ARRAY_INITIALIZER_WRAP" value="1" />
+    <option name="IF_BRACE_FORCE" value="3" />
+    <option name="DOWHILE_BRACE_FORCE" value="3" />
+    <option name="WHILE_BRACE_FORCE" value="3" />
+    <option name="FOR_BRACE_FORCE" value="3" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="LESS">
+    <indentOptions>
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="SASS">
+    <indentOptions>
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="SCSS">
+    <indentOptions>
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+  </codeStyleSettings>
+  <codeStyleSettings language="SQL">
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+  </codeStyleSettings>
+  <codeStyleSettings language="TypeScript">
+    <option name="KEEP_LINE_BREAKS" value="false" />
+    <option name="KEEP_BLANK_LINES_IN_CODE" value="1" />
+    <option name="BRACE_STYLE" value="2" />
+    <option name="CLASS_BRACE_STYLE" value="2" />
+    <option name="METHOD_BRACE_STYLE" value="2" />
+    <option name="ELSE_ON_NEW_LINE" value="true" />
+    <option name="WHILE_ON_NEW_LINE" value="true" />
+    <option name="CATCH_ON_NEW_LINE" value="true" />
+    <option name="FINALLY_ON_NEW_LINE" value="true" />
+    <option name="ALIGN_MULTILINE_PARAMETERS_IN_CALLS" value="true" />
+    <option name="ALIGN_MULTILINE_EXTENDS_LIST" value="true" />
+    <option name="CALL_PARAMETERS_WRAP" value="5" />
+    <option name="METHOD_PARAMETERS_WRAP" value="5" />
+    <option name="PARENT_SETTINGS_INSTALLED" value="true" />
+  </codeStyleSettings>
+  <codeStyleSettings language="XML">
+    <indentOptions>
+      <option name="INDENT_SIZE" value="2" />
+      <option name="CONTINUATION_INDENT_SIZE" value="4" />
+      <option name="TAB_SIZE" value="2" />
+    </indentOptions>
+    <arrangement>
+      <rules>
+        <rule>
+          <match>
+            <NAME>xmlns:.*</NAME>
+          </match>
+        </rule>
+      </rules>
+    </arrangement>
+  </codeStyleSettings>
+</code_scheme>
+
diff --git a/files/gobblin_job_history_store_ddlwq.sql b/files/gobblin_job_history_store_ddlwq.sql
new file mode 100644
index 0000000..f8ddfa4
--- /dev/null
+++ b/files/gobblin_job_history_store_ddlwq.sql
@@ -0,0 +1,113 @@
+-- (c) 2014 LinkedIn Corp. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+-- this file except in compliance with the License. You may obtain a copy of the
+-- License at  http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software distributed
+-- under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+-- CONDITIONS OF ANY KIND, either express or implied.
+
+CREATE TABLE IF NOT EXISTS gobblin_job_executions (
+	job_name VARCHAR(128) NOT NULL,
+	job_id VARCHAR(128) NOT NULL,
+	start_time TIMESTAMP,
+	end_time TIMESTAMP,
+	duration BIGINT(21),
+	state ENUM('PENDING', 'RUNNING', 'SUCCESSFUL', 'COMMITTED', 'FAILED', 'CANCELLED'),
+	launched_tasks INT,
+	completed_tasks INT,
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (job_id),
+	INDEX (job_name),
+	INDEX (state)
+);
+
+CREATE TABLE IF NOT EXISTS gobblin_task_executions (
+	task_id VARCHAR(128) NOT NULL,
+	job_id VARCHAR(128) NOT NULL,
+	start_time TIMESTAMP,
+	end_time TIMESTAMP,
+	duration BIGINT(21),
+	state ENUM('PENDING', 'RUNNING', 'SUCCESSFUL', 'COMMITTED', 'FAILED', 'CANCELLED'),
+	low_watermark BIGINT(21),
+	high_watermark BIGINT(21),
+	table_namespace VARCHAR(128),
+	table_name VARCHAR(128),
+	table_type ENUM('SNAPSHOT_ONLY', 'SNAPSHOT_APPEND', 'APPEND_ONLY'),
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (task_id),
+	FOREIGN KEY (job_id) 
+	REFERENCES gobblin_job_executions(job_id) 
+	ON DELETE CASCADE,
+	INDEX (state),
+	INDEX (table_namespace),
+	INDEX (table_name),
+	INDEX (table_type)
+);
+
+CREATE TABLE IF NOT EXISTS gobblin_job_metrics (
+	metric_id BIGINT(21) NOT NULL AUTO_INCREMENT,
+	job_id VARCHAR(128) NOT NULL,
+	metric_group VARCHAR(128) NOT NULL,
+	metric_name VARCHAR(128) NOT NULL,
+	metric_type ENUM('COUNTER', 'METER', 'GAUGE') NOT NULL,
+	metric_value VARCHAR(256) NOT NULL,
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (metric_id),
+	FOREIGN KEY (job_id) 
+	REFERENCES gobblin_job_executions(job_id) 
+	ON DELETE CASCADE,
+	INDEX (metric_group),
+	INDEX (metric_name),
+	INDEX (metric_type)
+);
+
+CREATE TABLE IF NOT EXISTS gobblin_task_metrics (
+	metric_id BIGINT(21) NOT NULL AUTO_INCREMENT,
+	task_id VARCHAR(128) NOT NULL,
+	metric_group VARCHAR(128) NOT NULL,
+	metric_name VARCHAR(128) NOT NULL,
+	metric_type ENUM('COUNTER', 'METER', 'GAUGE') NOT NULL,
+	metric_value VARCHAR(256) NOT NULL,
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (metric_id),
+	FOREIGN KEY (task_id) 
+	REFERENCES gobblin_task_executions(task_id) 
+	ON DELETE CASCADE,
+	INDEX (metric_group),
+	INDEX (metric_name),
+	INDEX (metric_type)
+);
+
+CREATE TABLE IF NOT EXISTS gobblin_job_properties (
+	property_id BIGINT(21) NOT NULL AUTO_INCREMENT,
+	job_id VARCHAR(128) NOT NULL,
+	property_key VARCHAR(128) NOT NULL,
+	property_value VARCHAR(128) NOT NULL,
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (property_id),
+	FOREIGN KEY (job_id)
+	REFERENCES gobblin_job_executions(job_id)
+	ON DELETE CASCADE,
+	INDEX (property_key)
+);
+
+CREATE TABLE IF NOT EXISTS gobblin_task_properties (
+	property_id BIGINT(21) NOT NULL AUTO_INCREMENT,
+	task_id VARCHAR(128) NOT NULL,
+	property_key VARCHAR(128) NOT NULL,
+	property_value VARCHAR(128) NOT NULL,
+	created_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+	last_modified_ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+	PRIMARY KEY (property_id),
+	FOREIGN KEY (task_id)
+	REFERENCES gobblin_task_executions(task_id)
+	ON DELETE CASCADE,
+	INDEX (property_key)
+);
diff --git a/gobblin-docs/developer-guide/files/prefs-eclipse.epf b/files/prefs-eclipse.epf
similarity index 100%
rename from gobblin-docs/developer-guide/files/prefs-eclipse.epf
rename to files/prefs-eclipse.epf
diff --git a/getting-started-backup.md b/getting-started-backup.md
new file mode 100644
index 0000000..427ec8d
--- /dev/null
+++ b/getting-started-backup.md
@@ -0,0 +1,338 @@
+* Author: Ziyang
+* Reviewer: Chavdar
+
+This page will guide you through setting up Gobblin and running a quick and simple first job.
+
+Download and Build
+------------------
+
+
+# Terminology
+Term | Definition
+---- | ----------
+Source	| Represents an external data store we need to pull data from (e.g. Oracle, MySQL, Kafka, Salesforce, etc.)
+Extractor | Responsible for pulling a subset of the data from a Source
+Watermark | How a job keeps track of its state; it records the offset up to which the job has already pulled data (e.g. an offset, SCN, or timestamp)
+WorkUnit | A collection of key-value pairs required for a Task to execute
+WorkUnitState | A collection of key-value pairs that contains all pairs present in the WorkUnit, as well as Task runtime key-value pairs (e.g. how many records got written)
+Task | This ties all the classes together and is executed in its own thread: it reads data from the extractor, passes it through a series of converters, and then passes it to the writer and data publisher
+Entity | A specific topic present in the Source (e.g. Oracle Table or Kafka Topic)
+Snapshot | Represents a full dump of an entity
+Pull file | Represents all the key-value pairs necessary to run a Gobblin job
+
+# What Can I Use Gobblin For
+Gobblin can be used to connect to an external data source and pull data on a periodic basis. The data can be transformed into a variety of output formats (e.g. Avro) and can be written to any storage system (e.g. HDFS). If you have a requirement to pull data from an external store, then Gobblin can provide a pluggable, reliable, and consistent way of pulling and publishing your data.
+
+# Where to Start
+Gobblin can be used to pull data from any external source. Currently, Gobblin supports a few sources out of the box, and more are being added every day. It is common for a user to find that the infrastructure to pull data from their required source is already present. In this case, the user only needs to add a new properties file to pull in a new entity (e.g. a table or topic), as the sketch below illustrates. If Gobblin doesn't support your data source, then you only need to add a few Java classes to get everything working.
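+
+As an illustrative sketch only (the property names mirror the pull-file examples later on this page, while the job and entity names are made up), pulling one more table from an already-supported source usually amounts to a new pull file along these lines:
+
+    job.name=MySource_NewTable
+    job.group=MySource
+    job.description=Pulls the NewTable entity from an already-supported source
+
+    source.class=<the existing Source implementation for this data source>
+    source.entity=NewTable
+
+    writer.destination.type=HDFS
+    writer.output.format=AVRO
+    data.publisher.type=com.linkedin.uif.publisher.BaseDataPublisher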
+
+## Adding a New Data Source
+### Extending Base Classes
+Gobblin isolates its logic into a few plugin points. The most important of these plugins are the Source class and the Extractor class. The Source class distributes work among a series of Extractors, and each Extractor is responsible for connecting to the external data source and pulling the data into the framework. When pulling from a new data source, the user needs to implement these two interfaces. Once these two classes are implemented, the framework can be used to pull data and write it out to a storage system.
+#### Source
+##### com.linkedin.uif.source.Source
+    public interface Source<S, D> {
+  
+      /**
+       * Get a list of {@link WorkUnit}s, each of which is for extracting a portion of the data.
+       *
+       * <p>
+       *   Each {@link WorkUnit} will be used to instantiate a {@link WorkUnitState} that gets passed to the
+       *   {@link #getExtractor(WorkUnitState)} method to get an {@link Extractor} for extracting schema
+       *   and data records from the source. The {@link WorkUnit} instance should have all the properties
+       *   needed for the {@link Extractor} to work.
+       * </p>
+       *
+       * <p>
+       *   Typically the list of {@link WorkUnit}s for the current run is determined by taking into account
+       *   the list of {@link WorkUnit}s from the previous run so data gets extracted incrementally. The
+       *   method {@link SourceState#getPreviousWorkUnitStates} can be used to get the list of {@link WorkUnit}s
+       *   from the previous run.
+       * </p>
+       * 
+       * @param state see {@link SourceState}
+       * @return a list of {@link WorkUnit}s
+       */
+      public abstract List<WorkUnit> getWorkunits(SourceState state);
+
+      /**
+       * Get an {@link Extractor} based on a given {@link WorkUnitState}.
+       *
+       * <p>
+       *   The {@link Extractor} returned can use {@link WorkUnitState} to store arbitrary key-value pairs
+       *   that will be persisted to the state store and loaded in the next scheduled job run.
+       * </p>
+       * 
+       * @param state a {@link WorkUnitState} carrying properties needed by the returned {@link Extractor}
+       * @return an {@link Extractor} used to extract schema and data records from the data source
+       * @throws IOException if it fails to create an {@link Extractor}
+       */
+      public abstract Extractor<S, D> getExtractor(WorkUnitState state) throws IOException;
+  
+      /**
+       * Shutdown this {@link Source} instance.
+       *
+       * <p>
+       *   This method is called once when the job completes. Properties (key-value pairs) added to the input
+       *   {@link SourceState} instance will be persisted and available to the next scheduled job run through
+       *   the method {@link #getWorkunits(SourceState)}.  If there is no cleanup or reporting required for a
+       *   particular implementation of this interface, then it is acceptable to have a default implementation
+       *   of this method.
+       * </p>
+       * 
+       * @param state see {@link SourceState}
+       */
+      public abstract void shutdown(SourceState state);
+    }
+
+
+The Source class is responsible for splitting the work to be done among a series of WorkUnits. The getWorkunits method should construct those WorkUnits and assign a subset of the work to each one. The getExtractor method is then called with a WorkUnitState wrapping one of the WorkUnits returned by getWorkunits and constructs an Extractor object for it.
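+
+To make the interplay between the two methods concrete, the following is a purely illustrative sketch of an incremental getWorkunits implementation. The property name `my.source.watermark` and the table name are made up for this example; they are not standard Gobblin keys:
+
+    // Illustrative sketch only -- not actual Gobblin code
+    @Override
+    public List<WorkUnit> getWorkunits(SourceState state) {
+        // Recover the highest watermark committed by the previous run
+        long previousWatermark = 0;
+        for (WorkUnitState previous : state.getPreviousWorkUnitStates()) {
+            String value = previous.getProp("my.source.watermark");
+            if (value != null) {
+                previousWatermark = Math.max(previousWatermark, Long.parseLong(value));
+            }
+        }
+
+        // Hand the watermark to a single WorkUnit so its Extractor only pulls new data
+        Extract extract = new Extract(state, TableType.SNAPSHOT_ONLY,
+                          state.getProp(ConfigurationKeys.EXTRACT_NAMESPACE_NAME_KEY), "MyTable");
+        WorkUnit workUnit = new WorkUnit(state, extract);
+        workUnit.setProp("my.source.watermark", Long.toString(previousWatermark));
+        return Lists.newArrayList(workUnit);
+    }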
+
+#### Extractor
+##### com.linkedin.uif.source.workunit.Extractor
+    public interface Extractor<S, D> extends Closeable {
+      /**
+       * Get the schema (Metadata) of the extracted data records.
+       *
+       * @return schema of the extracted data records
+       */
+      public S getSchema();
+
+      /**
+       * Read a data record from the data source.
+       *
+       * <p>
+       *   This method allows data record object reuse through the one passed in if the
+       *   implementation class decides to do so.
+       * </p>
+       *
+       * @param reuse the data record object to be used
+       * @return a data record
+       * @throws DataRecordException if there is a problem with the extracted data record
+       * @throws java.io.IOException if there is a problem extracting a data record from the source
+       */
+      public D readRecord(D reuse) throws DataRecordException, IOException;
+
+      /**
+       * Get the expected source record count.
+       *
+       * @return expected source record count
+       */
+      public long getExpectedRecordCount();
+
+      /**
+       * Get the calculated high watermark up to which data records are to be extracted.
+       * @return high watermark
+       */
+      public long getHighWatermark();
+    }
+
+The Extractor class is created from a WorkUnit and is responsible for connecting to the external data source, getting the schema for the data that will be pulled, and getting the data itself. The readRecord method will be called by Gobblin until it returns null, at which point the framework assumes that the Extractor instance has read all of its data. The extractor thus acts as an iterator over the subset of the data it is assigned to pull. The Extractor class also requires you to implement two more methods: getExpectedRecordCount and getHighWatermark. The former should return the number of records this extractor is expected to pull; the latter should return the high watermark for this extractor (i.e. some value that represents up to what point in the Source this extractor will pull data).
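+
+Put differently, the framework simply drains each extractor until it is exhausted. The following is a simplified sketch of that read loop; it is not the actual Gobblin Task code, and the converter/writer hand-off is omitted:
+
+    // Simplified sketch of how a Task consumes an Extractor (not actual Gobblin code)
+    void drain(Extractor<String, String> extractor) throws Exception {
+        try {
+            String record = null;
+            while ((record = extractor.readRecord(record)) != null) {
+                // pass the record through the converters and on to the writer (omitted)
+            }
+        } finally {
+            extractor.close();
+        }
+    }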
+
+### HelloWorld Extractor and Source
+Gobblin has a HelloWorld Extractor and Source that provide a simple implementation of reading and writing Avro files. The full code for the HelloWorldSource class can be found below.
+#### com.linkedin.uif.helloworld.source.HelloWorldSource
+    public class HelloWorldSource implements Source<String, String> {
+        private static final String SOURCE_FILE_LIST_KEY = "source.files";
+        private static final String SOURCE_FILE_KEY = "source.file";
+        private static final Splitter SPLITTER = Splitter.on(",")
+                .omitEmptyStrings()
+                .trimResults();
+
+        @Override
+        public List<WorkUnit> getWorkunits(SourceState state) {
+            Extract extract1 = new Extract(state, TableType.SNAPSHOT_ONLY,
+                               state.getProp(ConfigurationKeys.EXTRACT_NAMESPACE_NAME_KEY), "TestTable1");
+
+            Extract extract2 = new Extract(state, TableType.SNAPSHOT_ONLY,
+                               state.getProp(ConfigurationKeys.EXTRACT_NAMESPACE_NAME_KEY), "TestTable2");
+
+            String sourceFileList = state.getProp(SOURCE_FILE_LIST_KEY);
+            List<WorkUnit> workUnits = Lists.newArrayList();
+
+            List<String> list = SPLITTER.splitToList(sourceFileList);
+
+            for (int i = 0; i < list.size(); i++) {
+                WorkUnit workUnit = new WorkUnit(state, i % 2 == 0 ? extract1 : extract2);
+                workUnit.setProp(SOURCE_FILE_KEY, list.get(i));
+                workUnits.add(workUnit);
+            }
+            return workUnits;
+        }
+
+        @Override
+        public Extractor<String, String> getExtractor(WorkUnitState state) {
+            return new HelloWorldExtractor(state);
+        }
+
+        @Override
+        public void shutdown(SourceState state) {
+            // Do nothing
+        }
+    }
+The Source class creates one WorkUnit per file listed under the source.files property (alternating the WorkUnits between two Extract instances) and returns them as a list. In order to construct the WorkUnits, an Extract object needs to be created. An Extract object represents all the attributes necessary to pull a subset of the data. These properties include:
+
+1. TableType: The type of data being pulled; append-only data (fact tables), snapshot + append data (dimension tables), or snapshot-only data
+2. Namespace: A dot separated namespace path
+3. Table: Entity name
+
+The Source class is also paired with a pull file (shown below); note that the Source class has access to all key-value pairs defined in the pull file. The getExtractor method has a very simple implementation: given a WorkUnitState, it creates and returns a HelloWorldExtractor object. The code for HelloWorldExtractor is shown below.
+
+#### com.linkedin.uif.helloworld.extractor.HelloWorldExtractor
+    public class HelloWorldExtractor implements Extractor<String, String> {
+        private static final Logger log = LoggerFactory.getLogger(HelloWorldExtractor.class);
+        private static final String SOURCE_FILE_KEY = "source.file";
+
+        // Test Avro Schema
+        private static final String AVRO_SCHEMA =
+                "{\"namespace\": \"example.avro\",\n" +
+                " \"type\": \"record\",\n" +
+                " \"name\": \"User\",\n" +
+                " \"fields\": [\n" +
+                "     {\"name\": \"name\", \"type\": \"string\"},\n" +
+                "     {\"name\": \"favorite_number\",  \"type\": \"int\"},\n" +
+                "     {\"name\": \"favorite_color\", \"type\": \"string\"}\n" +
+                " ]\n" +
+                "}";
+
+        private static final int TOTAL_RECORDS = 1000;
+        private DataFileReader<GenericRecord> dataFileReader;
+
+        public HelloWorldExtractor(WorkUnitState workUnitState) {
+            Schema schema = new Schema.Parser().parse(AVRO_SCHEMA);
+            Path sourceFile = new Path(workUnitState.getWorkunit().getProp(SOURCE_FILE_KEY));
+
+            log.info("Reading from source file " + sourceFile);
+            DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>(schema);
+
+            try {
+                URI uri = URI.create(workUnitState.getProp(ConfigurationKeys.FS_URI_KEY, ConfigurationKeys.LOCAL_FS_URI));
+                FileSystem fs = FileSystem.get(uri, new Configuration());
+                sourceFile = fs.makeQualified(sourceFile);
+                this.dataFileReader = new DataFileReader<GenericRecord>(
+                                      new FsInput(sourceFile,
+                                      new Configuration()), datumReader);
+            } catch (IOException ioe) {
+                log.error("Failed to read the source file " + sourceFile, ioe);
+            }
+        }
+
+        @Override
+        public String getSchema() {
+            return AVRO_SCHEMA;
+        }
+
+        @Override
+        public String readRecord(String reuse) {
+            if (this.dataFileReader == null) {
+                return null;
+            }
+            if (this.dataFileReader.hasNext()) {
+                return this.dataFileReader.next().toString();
+            }
+            return null;
+        }
+
+        @Override
+        public void close() throws IOException {
+            try {
+                this.dataFileReader.close();
+            } catch (IOException ioe) {
+                log.error("Error while closing avro file reader", ioe);
+            }
+        }
+
+        @Override
+        public long getExpectedRecordCount() {
+            return TOTAL_RECORDS;
+        }
+
+        @Override
+        public long getHighWatermark() {
+            return 0;
+        }
+    }
+This extractor opens an Avro file and creates a FileReader over it. The FileReader object implements the Iterator interface, so the readRecord method becomes very simple: it asks the FileReader whether it has more records; if it does, it returns the next record, otherwise it returns null. The schema is defined inline here, but an extractor could instead open a connection to the Source and fetch the schema for the Entity being pulled. The expected record count is also defined inline, but once again the extractor could pull the value from the Source (e.g. SELECT COUNT(*) FROM EntityName).
+
+A pull file for the HelloWorld classes is shown below.
+#### helloworld.pull
+    job.name=HelloWorldFilePull
+    job.group=HelloWorldJobs
+    job.description=Simple job to pull files
+ 
+    source.class=com.linkedin.uif.helloworld.source.HelloWorldSource
+ 
+    writer.destination.type=HDFS
+    writer.output.format=AVRO
+    writer.fs.uri=file://localhost/
+ 
+    data.publisher.type=com.linkedin.uif.publisher.BaseDataPublisher
+ 
+    source.files=<Insert location of files to copy>
+For the pull file, there are a few properties required for all jobs. A list of configuration properties and their meanings can be found here: [Configuration Properties](Configuration-Properties)
+
+### Extending Protocol Specific Classes
+While any user is free to directly implement the Source and Extractor interfaces, Gobblin also supports Extractors for commonly used protocols. These protocols fall into a few major categories (e.g. QueryBasedExtractor, FileBasedExtractor, etc.). Gobblin currently contains implementations of RestApiExtractor and SftpExtractor, and the user is free to extend either of these classes in order to take advantage of the existing protocol implementations. For example, if a new data source exposes its data through a REST service, then RestApiExtractor can be extended, which avoids the need to re-implement any REST logic. The layout of the classes is depicted below.
+
+    Extractor.java
+        QueryBasedExtractor.java
+            RestApiExtractor.java
+                SalesforceExtractor.java
+            JdbcExtractor.java
+                TeradataExtractor.java
+        FileBasedExtractor.java
+            SftpExtractor.java
+                ResponsysExtractor.java
+
+## Leveraging an Existing Source
+Once you have your Source and Extractor classes, it is time to create some "pull" files to run the job. A pull file is a list of user-specified configuration properties that are fed into the framework. Each WorkUnit has access to every key-value pair in the pull file, which allows the user to pass Source-specific parameters to the framework. An example of a production pull file is below.
+
+    # Job parameters
+    job.name=Salesforce_Contact
+    job.group=Salesforce_Core
+    job.description=Job to pull data from Contact table
+    job.schedule=0 0 0/1 * * ?
+
+    # Converter parameters
+    converter.classes=com.linkedin.uif.converter.avro.JsonIntermediateToAvroConverter,com.linkedin.uif.converter.LumosAttributesConverter
+    converter.avro.timestamp.format=yyyy-MM-dd'T'HH:mm:ss.SSS'Z',yyyy-MM-dd'T'HH:mm:ss.000+0000
+    converter.avro.date.format=yyyy-MM-dd
+    converter.avro.time.format=HH:mm:ss
+
+    # Writer parameters
+    writer.destination.type=HDFS
+    writer.output.format=AVRO
+    writer.fs.uri=file://localhost/
+
+    # Quality Checker and Publisher parameters
+    qualitychecker.task.policies=com.linkedin.uif.policies.count.RowCountPolicy,com.linkedin.uif.policies.schema.SchemaCompatibilityPolicy,com.linkedin.uif.policies.schema.LumosSchemaValidationPolicy
+    qualitychecker.task.policy.types=FAIL,OPTIONAL,OPTIONAL
+    qualitychecker.row.policies=com.linkedin.uif.policies.schema.SchemaRowCheckPolicy
+    qualitychecker.row.policy.types=ERR_FILE
+    data.publisher.type=com.linkedin.uif.publisher.BaseDataPublisher
+
+    # Extractor parameters
+    extract.namespace=Salesforce_Core
+    extract.table.type=snapshot_append
+    extract.delta.fields=SystemModstamp
+    extract.primary.key.fields=Id
+
+    # Source parameters
+    source.schema=Core
+    source.entity=Contact
+    source.extract.type=snapshot
+    source.watermark.type=timestamp
+    source.start.value=20140101000000
+    source.end.value=201403010000000
+    source.low.watermark.backup.secs=7200
+    source.is.watermark.override=true
+    source.timezone=UTC
+    source.max.number.of.partitions=2
+    source.partition.interval=2
+    source.fetch.size=2000
+    source.is.specific.api.active=true
+    source.timeout=7200000
+    source.class=com.linkedin.uif.source.extractor.extract.restapi.SalesforceSource
+A list of all configuration properties and their meanings can be found here: [Configuration Properties](Configuration-Properties)
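+
+Every key defined in the pull file is also visible to your Source and Extractor through the state objects, so source-specific knobs such as source.fetch.size can be read directly at runtime. A minimal, illustrative sketch (the class name MyExtractor and the default value are made up; the property names come from the pull file above):
+
+    // Illustrative only: pull-file properties propagate into the WorkUnitState
+    public MyExtractor(WorkUnitState workUnitState) {
+        int fetchSize = Integer.parseInt(workUnitState.getProp("source.fetch.size", "1000"));
+        String entity = workUnitState.getProp("source.entity");
+        // use fetchSize and entity when building the query against the source
+    }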
+
+Extending the Framework
+-----------------------
+WIP
\ No newline at end of file
diff --git a/gobblin-admin/README.md b/gobblin-admin/README.md
deleted file mode 100644
index 1a9a1f9..0000000
--- a/gobblin-admin/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Gobblin Interfaces
-Gobblin comes with two tools for better understanding the state of
-Gobblin and the jobs executed. These interfaces are early in their
-development, will likely change, and have many new features planned. The
-two interfaces provided are a command line interface and a GUI,
-accessible via a web server. The current state of the interfaces relies
-on the [Job Execution History
-Store](https://github.com/linkedin/gobblin/wiki/Job%20Execution%20History%20Store),
-which must be enabled and running for the interfaces to work.
-
-## CLI
-Gobblin offers a command line interface out of the box. To run it, build
-and run Gobblin, then run `<code_dir>/bin/gobblin-admin.sh`. You'll
-likely want to alias that command, along with any additional Java
-options that you need to pass in.
-### Commands
-#### Jobs
-Use `gobblin-admin.sh jobs --help` for additional help and for a
-list of all options provided. 
-
-Some common commands include:
-
-|Command                                                          |Result    |
-|-----------------------------------------------------------------|----------|
-|`gobblin-admin.sh jobs --list`                                   |Lists distinct job names|
-|`gobblin-admin.sh jobs --list --name JobName`                    |Lists the most recent executions of the given job name|
-|`gobblin-admin.sh jobs --details --id job_id`                    |Lists detailed information about the job execution with the given id|
-|`gobblin-admin.sh jobs --properties --<id|name> <job_id|JobName>`|Lists properties of the job|
-
-#### Tasks
-The CLI does not yet support the `tasks` command.
-
-## GUI
-The GUI is a lightweight Backbone.js frontend to the Job Execution
-History Store. 
-
-To enable it, you can set `admin.server.enabled` to true in your configuration file.
-The admin web server will be available on port 8000 by default, but this can be changed with
-the `admin.server.port` configuration key.
-
diff --git a/gobblin-admin/build.gradle b/gobblin-admin/build.gradle
deleted file mode 100644
index 0f60b82..0000000
--- a/gobblin-admin/build.gradle
+++ /dev/null
@@ -1,33 +0,0 @@
-// (c) 2015 NerdWallet. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-// this file except in compliance with the License. You may obtain a copy of the
-// License at  http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software distributed
-// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-// CONDITIONS OF ANY KIND, either express or implied.
-//
-
-apply plugin: 'java'
-
-repositories {
-    mavenCentral()
-    maven {
-        url "http://conjars.org/repo"
-    }
-}
-
-dependencies {
-    compile project(":gobblin-rest-service:gobblin-rest-client")
-    compile project(":gobblin-core")
-
-    compile externalDependency.jetty
-    compile externalDependency.commonsCli
-    compile externalDependency.slf4j
-    compile externalDependency.jodaTime
-
-    testCompile externalDependency.testng
-}
-
-classification="library"
diff --git a/gobblin-admin/src/main/java/gobblin/admin/AdminWebServer.java b/gobblin-admin/src/main/java/gobblin/admin/AdminWebServer.java
deleted file mode 100644
index 7d73be7..0000000
--- a/gobblin-admin/src/main/java/gobblin/admin/AdminWebServer.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
-*
-* Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-* this file except in compliance with the License. You may obtain a copy of the
-* License at  http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software distributed
-* under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-* CONDITIONS OF ANY KIND, either express or implied.
-*/
-package gobblin.admin;
-
-import com.google.common.base.Preconditions;
-import com.google.common.util.concurrent.AbstractIdleService;
-import gobblin.configuration.ConfigurationKeys;
-import org.eclipse.jetty.server.Handler;
-import org.eclipse.jetty.server.Request;
-import org.eclipse.jetty.server.Server;
-import org.eclipse.jetty.server.handler.AbstractHandler;
-import org.eclipse.jetty.server.handler.HandlerCollection;
-import org.eclipse.jetty.server.handler.ResourceHandler;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import javax.servlet.ServletException;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.net.URI;
-import java.util.Properties;
-
-/**
- * Serves the admin UI interface using embedded Jetty.
- */
-public class AdminWebServer extends AbstractIdleService {
-    private static final Logger LOGGER = LoggerFactory.getLogger(AdminWebServer.class);
-
-    private final URI restServerUri;
-    private final URI serverUri;
-    protected Server server;
-
-    public AdminWebServer(Properties properties, URI restServerUri) {
-        Preconditions.checkNotNull(properties);
-        Preconditions.checkNotNull(restServerUri);
-
-        this.restServerUri = restServerUri;
-        int port = getPort(properties);
-        serverUri = URI.create(String.format("http://%s:%d", getHost(properties), port));
-    }
-
-    @Override
-    protected void startUp() throws Exception {
-        LOGGER.info("Starting the admin web server");
-
-        server = new Server(new InetSocketAddress(serverUri.getHost(), serverUri.getPort()));
-
-        HandlerCollection handlerCollection = new HandlerCollection();
-
-        handlerCollection.addHandler(buildSettingsHandler());
-        handlerCollection.addHandler(buildStaticResourceHandler());
-
-        server.setHandler(handlerCollection);
-        server.start();
-    }
-
-    private Handler buildSettingsHandler() {
-        final String responseTemplate =
-                "var Gobblin = window.Gobblin || {};" +
-                "Gobblin.settings = {restServerUrl:\"%s\"}";
-
-        return new AbstractHandler() {
-            @Override
-            public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
-                if (request.getRequestURI().equals("/js/settings.js")) {
-                    response.setContentType("application/javascript");
-                    response.setStatus(HttpServletResponse.SC_OK);
-                    response.getWriter().println(String.format(responseTemplate, restServerUri.toString()));
-                    baseRequest.setHandled(true);
-                }
-            }
-        };
-    }
-
-    private ResourceHandler buildStaticResourceHandler() {
-        ResourceHandler staticResourceHandler = new ResourceHandler();
-        staticResourceHandler.setDirectoriesListed(true);
-        staticResourceHandler.setWelcomeFiles(new String[]{"index.html"});
-
-        String staticDir = getClass().getClassLoader().getResource("static").toExternalForm();
-
-        staticResourceHandler.setResourceBase(staticDir);
-        return staticResourceHandler;
-    }
-
-    @Override
-    protected void shutDown() throws Exception {
-        if (server != null) {
-            server.stop();
-        }
-    }
-
-    private static int getPort(Properties properties) {
-        return Integer.parseInt(properties.getProperty(
-                ConfigurationKeys.ADMIN_SERVER_PORT_KEY,
-                ConfigurationKeys.DEFAULT_ADMIN_SERVER_PORT));
-    }
-
-    private static String getHost(Properties properties) {
-        return properties.getProperty(
-                ConfigurationKeys.ADMIN_SERVER_HOST_KEY,
-                ConfigurationKeys.DEFAULT_ADMIN_SERVER_HOST);
-    }
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/AdminClient.java b/gobblin-admin/src/main/java/gobblin/cli/AdminClient.java
deleted file mode 100644
index 98e0f41..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/AdminClient.java
+++ /dev/null
@@ -1,133 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
-*
-* Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-* this file except in compliance with the License. You may obtain a copy of the
-* License at  http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software distributed
-* under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-* CONDITIONS OF ANY KIND, either express or implied.
-*/
-package gobblin.cli;
-
-import com.google.common.base.Optional;
-import com.google.common.io.Closer;
-import com.linkedin.r2.RemoteInvocationException;
-import gobblin.configuration.ConfigurationKeys;
-import gobblin.rest.*;
-
-import java.io.IOException;
-import java.net.URI;
-import java.util.Collections;
-import java.util.List;
-
-/**
- * Simple wrapper around the JobExecutionInfoClient
- */
-public class AdminClient {
-    private final JobExecutionInfoClient client;
-    private Closer closer;
-
-    /**
-     * Creates a new client with the host and port specified.
-     */
-    public AdminClient(String host, int port) {
-        closer = Closer.create();
-
-        URI serverUri = URI.create(String.format("http://%s:%d/", host, port));
-        client = new JobExecutionInfoClient(serverUri.toString());
-        closer.register(client);
-    }
-
-    /**
-     * Close connections to the REST server
-     */
-    public void close() {
-        try {
-            closer.close();
-        } catch (IOException e) {
-            e.printStackTrace();
-        }
-    }
-
-    /**
-     * Retrieve a Gobblin job by its id.
-     *
-     * @param id                Id of the job to retrieve
-     * @return JobExecutionInfo representing the job
-     */
-    public Optional<JobExecutionInfo> queryByJobId(String id)
-            throws RemoteInvocationException {
-        JobExecutionQuery query = new JobExecutionQuery();
-        query.setIdType(QueryIdTypeEnum.JOB_ID);
-        query.setId(JobExecutionQuery.Id.create(id));
-        query.setLimit(1);
-
-        List<JobExecutionInfo> results = executeQuery(query);
-        return getFirstFromQueryResults(results);
-    }
-
-    /**
-     * Retrieve all jobs
-     *
-     * @param lookupType Query type
-     * @return List of all jobs (limited by results limit)
-     */
-    public List<JobExecutionInfo> queryAllJobs(QueryListType lookupType, int resultsLimit)
-            throws RemoteInvocationException {
-        JobExecutionQuery query = new JobExecutionQuery();
-        query.setIdType(QueryIdTypeEnum.LIST_TYPE);
-        query.setId(JobExecutionQuery.Id.create(lookupType));
-
-        // Only fetch the schedule-related job properties and skip task executions (prevents response size from ballooning)
-        query.setJobProperties(ConfigurationKeys.JOB_RUN_ONCE_KEY + "," + ConfigurationKeys.JOB_SCHEDULE_KEY);
-        query.setIncludeTaskExecutions(false);
-
-        query.setLimit(resultsLimit);
-
-        return executeQuery(query);
-    }
-
-    /**
-     * Query jobs by name
-     *
-     * @param name         Name of the job to query for
-     * @param resultsLimit Max # of results to return
-     * @return List of jobs with the name (empty list if none can be found)
-     */
-    public List<JobExecutionInfo> queryByJobName(String name, int resultsLimit) throws RemoteInvocationException {
-        JobExecutionQuery query = new JobExecutionQuery();
-        query.setIdType(QueryIdTypeEnum.JOB_NAME);
-        query.setId(JobExecutionQuery.Id.create(name));
-        query.setIncludeTaskExecutions(false);
-        query.setLimit(resultsLimit);
-
-        return executeQuery(query);
-    }
-
-    /**
-     * Execute a query and coerce the result into a java List
-     * @param query Query to execute
-     * @return List of jobs that matched the query. (Empty list if none did).
-     * @throws RemoteInvocationException If the server throws an error
-     */
-    private List<JobExecutionInfo> executeQuery(JobExecutionQuery query)
-            throws RemoteInvocationException {
-        JobExecutionQueryResult result = this.client.get(query);
-
-        if (result != null && result.hasJobExecutions()) {
-            return result.getJobExecutions();
-        } else {
-            return Collections.emptyList();
-        }
-    }
-
-    private Optional<JobExecutionInfo> getFirstFromQueryResults(List<JobExecutionInfo> results) {
-        if (results == null || results.size() == 0) {
-            return Optional.absent();
-        }
-
-        return Optional.of(results.get(0));
-    }
-
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/Cli.java b/gobblin-admin/src/main/java/gobblin/cli/Cli.java
deleted file mode 100644
index e2900e6..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/Cli.java
+++ /dev/null
@@ -1,182 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not use
- * this file except in compliance with the License. You may obtain a copy of the
- * License at  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed
- * under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, either express or implied.
- */
-
-package gobblin.cli;
-
-import java.util.*;
-
-import com.google.common.collect.ImmutableMap;
-import org.apache.commons.cli.*;
-
-/**
- * A command line interface for interacting with Gobblin.
- * From this tool, you should be able to:
- *  * Check the status of Gobblin jobs
- *  * View ...
- *
- * @author ahollenbach@nerdwallet.com
- */
-public class Cli {
-  private static Map<String, Command> commandList =
-          ImmutableMap.of(
-                  "jobs", (Command)new JobCommand()
-          );
-
-  static class GlobalOptions {
-    private final String adminServerHost;
-    private final int adminServerPort;
-
-    public GlobalOptions(String adminServerHost, int adminServerPort) {
-      this.adminServerHost = adminServerHost;
-      this.adminServerPort = adminServerPort;
-    }
-
-    public String getAdminServerHost() {
-      return adminServerHost;
-    }
-
-    public int getAdminServerPort() {
-      return adminServerPort;
-    }
-  }
-
-  /**
-   * Get the list of valid command names
-   * @return List of command names
-   */
-  public static Collection<String> getCommandNames() {
-    return commandList.keySet();
-  }
-
-  // Option long codes
-  private static final String HOST_OPT = "host";
-  private static final String PORT_OPT = "port";
-
-  private static final String DEFAULT_REST_SERVER_HOST = "localhost";
-  private static final int DEFAULT_REST_SERVER_PORT = 8080;
-
-
-  private String[] args;
-  private Options options;
-
-  public static void main(String[] args) {
-    Cli cli = new Cli(args);
-    cli.parseAndExecuteCommand();
-  }
-
-  /**
-   * Create a new Cli object.
-   * @param args Command line arguments
-     */
-  public Cli(String[] args) {
-    this.args = args;
-
-    this.options = new Options();
-
-    this.options.addOption("H", HOST_OPT, true, "Specify host (default:" + DEFAULT_REST_SERVER_HOST + ")");
-    this.options.addOption("P", PORT_OPT, true, "Specify port (default:" + DEFAULT_REST_SERVER_PORT + ")");
-  }
-
-  /**
-   * Parse and execute the appropriate command based on the args.
-   * The general flow looks like this:
-   *
-   * 1. Parse a set of global options (eg host/port for the admin server)
-   * 2. Parse out the command name
-   * 3. Pass the global options and any left over parameters to a command handler
-   */
-  public void parseAndExecuteCommand() {
-    CommandLineParser parser = new DefaultParser();
-    try {
-      CommandLine parsedOpts = parser.parse(this.options, this.args, true);
-      GlobalOptions globalOptions = createGlobalOptions(parsedOpts);
-
-      // Fetch the command and fail if there is ambiguity
-      String[] remainingArgs = parsedOpts.getArgs();
-      if (remainingArgs.length == 0) {
-        printHelpAndExit("Command not specified!");
-      }
-
-      String commandName = remainingArgs[0].toLowerCase();
-      remainingArgs = remainingArgs.length > 1 ?
-            Arrays.copyOfRange(remainingArgs, 1, remainingArgs.length) :
-            new String[]{};
-
-      Command command = commandList.get(commandName);
-      if (command == null) {
-        System.out.println("Command " + commandName + " not known.");
-        printHelpAndExit();
-      } else {
-        command.execute(globalOptions, remainingArgs);
-      }
-    } catch (ParseException e) {
-      printHelpAndExit("Ran into an error parsing args.");
-    }
-  }
-
-  /**
-   * Build the GlobalOptions information from the raw parsed options
-   * @param parsedOpts Options parsed from the cmd line
-   * @return The GlobalOptions built from the parsed command line
-   */
-  private GlobalOptions createGlobalOptions(CommandLine parsedOpts) {
-    String host = parsedOpts.hasOption(HOST_OPT) ?
-            parsedOpts.getOptionValue(HOST_OPT) : DEFAULT_REST_SERVER_HOST;
-    int port = DEFAULT_REST_SERVER_PORT;
-    try {
-      if (parsedOpts.hasOption(PORT_OPT)) {
-        port = Integer.parseInt(parsedOpts.getOptionValue(PORT_OPT));
-      }
-    } catch (NumberFormatException e) {
-      printHelpAndExit("The port must be a valid integer.");
-    }
-
-    return new GlobalOptions(host, port);
-  }
-
-  /**
-   * Print help and exit with a success code (0).
-   */
-  private void printHelpAndExit() {
-    System.out.println("Common usages:");
-    System.out.println("  gobblin-admin.sh jobs --list");
-    System.out.println("  gobblin-admin.sh jobs --list --name JobName");
-    System.out.println("  gobblin-admin.sh jobs --details --id job_id");
-    System.out.println("  gobblin-admin.sh jobs --properties --<id|name> <job_id|JobName>");
-    System.out.println();
-
-    printHelpAndExit(0);
-  }
-
-  /**
-   * Prints an error message, then prints the help and exits with an error code.
-   */
-  private void printHelpAndExit(String errorMessage) {
-    System.err.println(errorMessage);
-    printHelpAndExit(1);
-  }
-
-  /**
-   * Print help and exit with the specified code.
-   * @param exitCode The code to exit with
-   */
-  private void printHelpAndExit(int exitCode) {
-    HelpFormatter hf = new HelpFormatter();
-
-    hf.printHelp("gobblin-admin.sh <command> [options]", this.options);
-    System.out.println("Valid commands:");
-    for (String command : getCommandNames()) {
-      System.out.println(command);
-    }
-
-    System.exit(exitCode);
-  }
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/CliTablePrinter.java b/gobblin-admin/src/main/java/gobblin/cli/CliTablePrinter.java
deleted file mode 100644
index 15d663d..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/CliTablePrinter.java
+++ /dev/null
@@ -1,194 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
- *
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not use
- * this file except in compliance with the License. You may obtain a copy of the
- * License at  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed
- * under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, either express or implied.
- */
-package gobblin.cli;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import com.google.common.base.Preconditions;
-import com.google.common.primitives.Ints;
-
-
-/**
- * A format helper for CLI output. Unfortunately it only supports strings, so
- * values need to be converted prior to being passed in. This is done in order to
- * support table-like formatting.
- * <p/>
- * It's recommended that this class is built using the inner {@link Builder} class.
- *
- * @author ahollenbach@nerdwallet.com
- */
-public class CliTablePrinter {
-  /**
-   * Labels for each column
-   */
-  private List<String> labels;
-
-  /**
-   * A list of sprintf-style flag strings (corresponding to each column)
-   */
-  private List<String> flags;
-
-  /**
-   * Overall indentation of a table
-   */
-  private int indentation;
-
-  /**
-   * Number of spaces to place between columns
-   */
-  private int delimiterWidth;
-
-  /**
-   * Table of data to print
-   */
-  private List<List<String>> data;
-
-  /**
-   * The row format (generated by the constructor).
-   */
-  private String rowFormat;
-
-
-  public CliTablePrinter(List<String> labels, List<String> flags, int indentation, int delimiterWidth,
-      List<List<String>> data) {
-    Preconditions.checkArgument(data.size() > 0);
-    Preconditions.checkArgument(data.get(0).size() > 0);
-
-    if (labels != null) {
-      Preconditions.checkArgument(data.get(0).size() == labels.size());
-    }
-    if (flags != null) {
-      Preconditions.checkArgument(data.get(0).size() == flags.size());
-    }
-
-    this.labels = labels;
-    this.flags = flags;
-    this.indentation = indentation;
-    this.delimiterWidth = delimiterWidth;
-    this.data = data;
-
-    this.rowFormat = getRowFormat(getColumnMaxWidths());
-  }
-
-  /**
-   * Used to build a {@link CliTablePrinter} object.
-   */
-  public static final class Builder {
-    private List<String> labels;
-    private List<String> flags;
-    private int indentation;
-    private int delimiterWidth;
-    private List<List<String>> data;
-
-    public Builder() {
-      // Set defaults
-      this.delimiterWidth = 1;
-    }
-
-    public Builder labels(List<String> labels) {
-      this.labels = labels;
-      return this;
-    }
-
-    public Builder data(List<List<String>> data) {
-      this.data = data;
-      return this;
-    }
-
-    public Builder indentation(int indentation) {
-      this.indentation = indentation;
-      return this;
-    }
-
-    public Builder delimiterWidth(int delimiterWidth) {
-      this.delimiterWidth = delimiterWidth;
-      return this;
-    }
-
-    public Builder flags(List<String> flags) {
-      this.flags = flags;
-      return this;
-    }
-
-    public CliTablePrinter build() {
-      return new CliTablePrinter(this.labels, this.flags, this.indentation, this.delimiterWidth, this.data);
-    }
-  }
-
-  /**
-   * Prints the table of data
-   */
-  public void printTable() {
-    if (this.labels != null) {
-      System.out.printf(this.rowFormat, this.labels.toArray());
-    }
-    for (List<String> row : this.data) {
-      System.out.printf(this.rowFormat, row.toArray());
-    }
-  }
-
-  /**
-   * A function for determining the max widths of columns, accounting for labels and data.
-   *
-   * @return An array of maximum widths for the strings in each column
-   */
-  private List<Integer> getColumnMaxWidths() {
-    int numCols = data.get(0).size();
-    int[] widths = new int[numCols];
-
-    if (this.labels != null) {
-      for (int i=0; i<numCols; i++) {
-        widths[i] = this.labels.get(i).length();
-      }
-    }
-
-    for (List<String> row : this.data) {
-      for (int i=0;i<row.size(); i++) {
-        if (row.get(i) == null) {
-          widths[i] = Math.max(widths[i], 4);
-        } else {
-          widths[i] = Math.max(widths[i], row.get(i).length());
-        }
-      }
-    }
-
-    return Ints.asList(widths);
-  }
-
-  /**
-   * Generates a simple row format string given a set of widths
-   *
-   * @param widths A list of widths for each column in the table
-   * @return A row format for each row in the table
-   */
-  private String getRowFormat(List<Integer> widths) {
-    StringBuilder rowFormat = new StringBuilder(spaces(this.indentation));
-    for (int i=0; i< widths.size(); i++) {
-      rowFormat.append("%");
-      rowFormat.append(this.flags != null ? this.flags.get(i) : "");
-      rowFormat.append(widths.get(i).toString());
-      rowFormat.append("s");
-      rowFormat.append(spaces(this.delimiterWidth));
-    }
-    rowFormat.append("\n");
-
-    return rowFormat.toString();
-  }
-
-  private static String spaces(int numSpaces) {
-    StringBuilder sb = new StringBuilder();
-    for (int i=0; i<numSpaces; i++) {
-      sb.append(" ");
-    }
-    return sb.toString();
-  }
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/Command.java b/gobblin-admin/src/main/java/gobblin/cli/Command.java
deleted file mode 100644
index 9224176..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/Command.java
+++ /dev/null
@@ -1,18 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
-*
-* Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-* this file except in compliance with the License. You may obtain a copy of the
-* License at  http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software distributed
-* under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-* CONDITIONS OF ANY KIND, either express or implied.
-*/
-package gobblin.cli;
-
-/**
- * Represents a single command for the CLI
- */
-public interface Command {
-    void execute(Cli.GlobalOptions globalOptions, String[] otherArgs);
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/JobCommand.java b/gobblin-admin/src/main/java/gobblin/cli/JobCommand.java
deleted file mode 100644
index c865ec1..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/JobCommand.java
+++ /dev/null
@@ -1,198 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
-*
-* Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-* this file except in compliance with the License. You may obtain a copy of the
-* License at  http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software distributed
-* under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-* CONDITIONS OF ANY KIND, either express or implied.
-*/
-package gobblin.cli;
-
-import com.google.common.base.Optional;
-import com.google.common.collect.ImmutableMap;
-import com.linkedin.r2.RemoteInvocationException;
-import gobblin.rest.*;
-import org.apache.commons.cli.*;
-
-import java.util.List;
-import java.util.Map;
-
-/**
- * Logic to print out job state
- */
-public class JobCommand implements Command {
-    private Options options;
-
-    private static class CommandException extends Exception {
-        public CommandException(String msg) {
-            super(msg);
-        }
-    }
-
-    private interface SubCommand {
-        void execute(CommandLine parsedArgs, AdminClient adminClient, int resultsLimit)
-                throws CommandException;
-    }
-
-    private static final String HELP_OPT = "help";
-    private static final String DETAILS_OPT = "details";
-    private static final String LIST_OPT = "list";
-    private static final String NAME_OPT = "name";
-    private static final String ID_OPT = "id";
-    private static final String PROPS_OPT = "properties";
-
-    private static final String RECENT_OPT = "recent";
-
-    private static final int DEFAULT_RESULTS_LIMIT = 10;
-
-    private static Map<String, SubCommand> subCommandMap =
-            ImmutableMap.of(
-                    LIST_OPT, new ListAllItemsCommand(),
-                    DETAILS_OPT, new ListOneItemWithDetails(),
-                    PROPS_OPT, new ListItemsWithPropertiesCommand()
-            );
-
-
-    private SubCommand getAction(CommandLine parsedOpts) {
-        for (Map.Entry<String, SubCommand> entry : subCommandMap.entrySet()) {
-            if (parsedOpts.hasOption(entry.getKey())) {
-                return entry.getValue();
-            }
-        }
-
-        printHelpAndExit("Unknown subcommand");
-        throw new IllegalStateException("unreached...");
-    }
-
-    @Override
-    public void execute(Cli.GlobalOptions globalOptions, String[] otherArgs) {
-        this.options = createCommandLineOptions();
-        DefaultParser parser = new DefaultParser();
-        AdminClient adminClient = null;
-
-        try {
-            CommandLine parsedOpts = parser.parse(options, otherArgs);
-            int resultLimit = parseResultsLimit(parsedOpts);
-            adminClient = new AdminClient(globalOptions.getAdminServerHost(), globalOptions.getAdminServerPort());
-            try {
-                getAction(parsedOpts).execute(parsedOpts, adminClient, resultLimit);
-            } catch (CommandException e) {
-                printHelpAndExit(e.getMessage());
-            }
-        } catch (ParseException e) {
-            printHelpAndExit("Failed to parse jobs arguments: " + e.getMessage());
-        } finally {
-            if (adminClient != null) adminClient.close();
-        }
-    }
-
-    private static class ListAllItemsCommand implements SubCommand {
-        @Override
-        public void execute(CommandLine parsedOpts, AdminClient adminClient, int resultsLimit)
-        throws CommandException {
-            try {
-                if (parsedOpts.hasOption(NAME_OPT)) {
-                    JobInfoPrintUtils.printJobRuns(adminClient.queryByJobName(parsedOpts.getOptionValue(NAME_OPT), resultsLimit));
-                } else if (parsedOpts.hasOption(RECENT_OPT)) {
-                    JobInfoPrintUtils.printAllJobs(adminClient.queryAllJobs(QueryListType.RECENT, resultsLimit), resultsLimit);
-                } else {
-                    JobInfoPrintUtils.printAllJobs(adminClient.queryAllJobs(QueryListType.DISTINCT, resultsLimit), resultsLimit);
-                }
-            } catch (RemoteInvocationException e) {
-                throw new CommandException("Error talking to adminServer: " + e.getMessage());
-            }
-        }
-    }
-
-    private static class ListOneItemWithDetails implements SubCommand {
-        @Override
-        public void execute(CommandLine parsedOpts, AdminClient adminClient, int resultsLimit)
-                throws CommandException {
-            try {
-                if (parsedOpts.hasOption(ID_OPT)) {
-                    JobInfoPrintUtils.printJob(
-                            adminClient.queryByJobId(parsedOpts.getOptionValue(ID_OPT))
-                    );
-                } else {
-                    throw new CommandException("Please specify an id");
-                }
-            } catch (RemoteInvocationException e) {
-                throw new CommandException("Error talking to adminServer: " + e.getMessage());
-            }
-        }
-    }
-
-    private static class ListItemsWithPropertiesCommand implements SubCommand {
-        @Override
-        public void execute(CommandLine parsedOpts, AdminClient adminClient, int resultsLimit) throws CommandException {
-            try {
-                if (parsedOpts.hasOption(ID_OPT)) {
-                    JobInfoPrintUtils.printJobProperties(
-                            adminClient.queryByJobId(parsedOpts.getOptionValue(ID_OPT))
-                    );
-                } else if (parsedOpts.hasOption(NAME_OPT)) {
-                    List<JobExecutionInfo> infos = adminClient.queryByJobName(parsedOpts.getOptionValue(NAME_OPT), 1);
-                    if (infos.size() == 0) {
-                        System.out.println("No job by that name found");
-                    } else {
-                        JobInfoPrintUtils.printJobProperties(Optional.of(infos.get(0)));
-                    }
-                } else {
-                    throw new CommandException("Please specify a job id or name");
-                }
-            } catch (RemoteInvocationException e) {
-                throw new CommandException("Error talking to adminServer: " + e.getMessage());
-            }
-        }
-    }
-
-    private Options createCommandLineOptions() {
-        Options options = new Options();
-
-        OptionGroup actionGroup = new OptionGroup();
-        actionGroup.addOption(new Option("h", HELP_OPT, false, "Shows the help message."));
-        actionGroup.addOption(new Option("d", DETAILS_OPT, false, "Show details about a job/task."));
-        actionGroup.addOption(new Option("l", LIST_OPT, false, "List jobs/tasks."));
-        actionGroup.addOption(new Option("p", PROPS_OPT, false, "Fetch properties with the query."));
-        actionGroup.setRequired(true);
-        options.addOptionGroup(actionGroup);
-
-        OptionGroup idGroup = new OptionGroup();
-        idGroup.addOption(new Option("j", NAME_OPT, true, "Find job(s) matching given job name."));
-        idGroup.addOption(new Option("i", ID_OPT, true, "Find the job/task with the given id."));
-        options.addOptionGroup(idGroup);
-
-        options.addOption("n", true, "Limit the number of results returned. (default:" + DEFAULT_RESULTS_LIMIT + ")");
-        options.addOption("r", RECENT_OPT, false, "List the most recent jobs (instead of a list of unique jobs)");
-
-        return options;
-    }
-
-    private int parseResultsLimit(CommandLine parsedOpts) {
-        if (parsedOpts.hasOption("n")) {
-            try {
-                return Integer.parseInt(parsedOpts.getOptionValue("n"));
-            } catch (NumberFormatException e) {
-                printHelpAndExit("Could not parse integer value for option n.");
-                return 0;
-            }
-        } else {
-            return DEFAULT_RESULTS_LIMIT;
-        }
-    }
-
-    /**
-     * Print help and exit with the specified code.
-     */
-    private void printHelpAndExit(String errorMsg) {
-        System.out.println(errorMsg);
-
-        HelpFormatter hf = new HelpFormatter();
-
-        hf.printHelp("gobblin-admin.sh jobs [options]", this.options);
-
-        System.exit(1);
-    }
-}
diff --git a/gobblin-admin/src/main/java/gobblin/cli/JobInfoPrintUtils.java b/gobblin-admin/src/main/java/gobblin/cli/JobInfoPrintUtils.java
deleted file mode 100644
index 69cf0d0..0000000
--- a/gobblin-admin/src/main/java/gobblin/cli/JobInfoPrintUtils.java
+++ /dev/null
@@ -1,250 +0,0 @@
-/* (c) 2015 NerdWallet All rights reserved.
-*
-* Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-* this file except in compliance with the License. You may obtain a copy of the
-* License at  http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software distributed
-* under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-* CONDITIONS OF ANY KIND, either express or implied.
-*/
-package gobblin.cli;
-
-import com.google.common.base.Optional;
-import com.linkedin.data.template.StringMap;
-import gobblin.configuration.ConfigurationKeys;
-import gobblin.metrics.MetricNames;
-import gobblin.rest.*;
-import org.joda.time.Period;
-import org.joda.time.format.DateTimeFormatter;
-import org.joda.time.format.ISODateTimeFormat;
-import org.joda.time.format.PeriodFormat;
-import org.joda.time.format.PeriodFormatter;
-
-import java.text.DecimalFormat;
-import java.text.NumberFormat;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Map;
-
-/**
- * Utility methods to print out various pieces of info about jobs
- */
-public class JobInfoPrintUtils {
-    private static NumberFormat decimalFormatter = new DecimalFormat("#0.00");
-    private static DateTimeFormatter dateTimeFormatter = ISODateTimeFormat.dateHourMinuteSecond();
-    private static PeriodFormatter periodFormatter = PeriodFormat.getDefault();
-
-    /**
-     * Extracts the schedule from a job execution.
-     * <p/>
-     * If the job was in run once mode, it will return that, otherwise it will return the schedule.
-     *
-     * @param jobInfo A job execution info to extract from
-     * @return "RUN_ONCE", the Quartz schedule string, or "UNKNOWN" if there were no job properties
-     */
-    public static String extractJobSchedule(JobExecutionInfo jobInfo) {
-        if (jobInfo.hasJobProperties() && jobInfo.getJobProperties().size() > 0) {
-            StringMap props = jobInfo.getJobProperties();
-
-            if (props.containsKey(ConfigurationKeys.JOB_RUN_ONCE_KEY) ||
-                    !props.containsKey(ConfigurationKeys.JOB_SCHEDULE_KEY)) {
-                return "RUN_ONCE";
-            } else if (props.containsKey(ConfigurationKeys.JOB_SCHEDULE_KEY)) {
-                return props.get(ConfigurationKeys.JOB_SCHEDULE_KEY);
-            }
-        }
-        return "UNKNOWN";
-    }
-
-    /**
-     * Print a table describing a bunch of individual job executions.
-     * @param jobExecutionInfos Job execution status to print
-     */
-    public static void printJobRuns(List<JobExecutionInfo> jobExecutionInfos) {
-        if (jobExecutionInfos == null) {
-            System.err.println("No job executions found.");
-            System.exit(1);
-        }
-
-        List<String> labels = Arrays.asList("Job Id", "State", "Schedule", "Completed Tasks", "Launched Tasks",
-                "Start Time", "End Time", "Duration (s)");
-        List<String> flags = Arrays.asList("-", "-", "-", "", "", "-", "-", "-");
-        List<List<String>> data = new ArrayList<>();
-        for (JobExecutionInfo jobInfo : jobExecutionInfos) {
-            List<String> entry = new ArrayList<>();
-            entry.add(jobInfo.getJobId());
-            entry.add(jobInfo.getState().toString());
-            entry.add(extractJobSchedule(jobInfo));
-            entry.add(jobInfo.getCompletedTasks().toString());
-            entry.add(jobInfo.getLaunchedTasks().toString());
-            entry.add(dateTimeFormatter.print(jobInfo.getStartTime()));
-            entry.add(dateTimeFormatter.print(jobInfo.getEndTime()));
-            entry.add(jobInfo.getState() == JobStateEnum.COMMITTED ?
-                    decimalFormatter.format(jobInfo.getDuration() / 1000.0) : "-");
-            data.add(entry);
-        }
-        new CliTablePrinter.Builder()
-                .labels(labels)
-                .data(data)
-                .flags(flags)
-                .delimiterWidth(2)
-                .build()
-                .printTable();
-    }
-
-    /**
-     * Print summary information about a bunch of jobs in the system
-     * @param jobExecutionInfos List of jobs
-     * @param resultsLimit original result limit
-     */
-    public static void printAllJobs(List<JobExecutionInfo> jobExecutionInfos, int resultsLimit) {
-        if (jobExecutionInfos == null) {
-            System.err.println("No jobs found.");
-            System.exit(1);
-        }
-
-        List<String> labels = Arrays.asList("Job Name", "State", "Last Run Started", "Last Run Completed",
-                "Schedule", "Last Run Records Processed", "Last Run Records Failed");
-        List<String> flags = Arrays.asList("-", "-", "-", "-", "-", "", "");
-        List<List<String>> data = new ArrayList<>();
-        for (JobExecutionInfo jobInfo : jobExecutionInfos) {
-            List<String> entry = new ArrayList<>();
-            entry.add(jobInfo.getJobName());
-            entry.add(jobInfo.getState().toString());
-            entry.add(dateTimeFormatter.print(jobInfo.getStartTime()));
-            entry.add(dateTimeFormatter.print(jobInfo.getEndTime()));
-
-            entry.add(extractJobSchedule(jobInfo));
-
-            // Add metrics
-            MetricArray metrics = jobInfo.getMetrics();
-            Double recordsProcessed = null;
-            Double recordsFailed = null;
-            try {
-                for (Metric metric : metrics) {
-                    if (metric.getName().equals(MetricNames.ExtractorMetrics.RECORDS_READ_METER)) {
-                        recordsProcessed = Double.parseDouble(metric.getValue());
-                    } else if (metric.getName().equals(MetricNames.ExtractorMetrics.RECORDS_FAILED_METER)) {
-                        recordsFailed = Double.parseDouble(metric.getValue());
-                    }
-                }
-
-                if (recordsProcessed != null && recordsFailed != null) {
-                    entry.add(recordsProcessed.toString());
-                    entry.add(recordsFailed.toString());
-                }
-            } catch (NumberFormatException ex) {
-                System.err.println("Failed to process metrics");
-            }
-            if (recordsProcessed == null || recordsFailed == null) {
-                entry.add("-");
-                entry.add("-");
-            }
-
-            data.add(entry);
-        }
-        new CliTablePrinter.Builder()
-                .labels(labels)
-                .data(data)
-                .flags(flags)
-                .delimiterWidth(2)
-                .build()
-                .printTable();
-
-        if (jobExecutionInfos.size() == resultsLimit) {
-            System.out.println("\nWARNING: There may be more jobs (# of results is equal to the limit)");
-        }
-    }
-
-    /**
-     * Print information about one specific job.
-     * @param jobExecutionInfoOptional Job info to print
-     */
-    public static void printJob(Optional<JobExecutionInfo> jobExecutionInfoOptional) {
-        if (!jobExecutionInfoOptional.isPresent()) {
-            System.err.println("Job id not found.");
-            return;
-        }
-
-        JobExecutionInfo jobExecutionInfo = jobExecutionInfoOptional.get();
-        List<List<String>> data = new ArrayList<>();
-        List<String> flags = Arrays.asList("", "-");
-
-        data.add(Arrays.asList("Job Name", jobExecutionInfo.getJobName()));
-        data.add(Arrays.asList("Job Id", jobExecutionInfo.getJobId()));
-        data.add(Arrays.asList("State", jobExecutionInfo.getState().toString()));
-        data.add(Arrays.asList("Completed/Launched Tasks",
-                String.format("%d/%d", jobExecutionInfo.getCompletedTasks(), jobExecutionInfo.getLaunchedTasks())));
-        data.add(Arrays.asList("Start Time", dateTimeFormatter.print(jobExecutionInfo.getStartTime())));
-        data.add(Arrays.asList("End Time", dateTimeFormatter.print(jobExecutionInfo.getEndTime())));
-        data.add(Arrays.asList("Duration", jobExecutionInfo.getState() == JobStateEnum.COMMITTED ? periodFormatter
-                .print(new Period(jobExecutionInfo.getDuration().longValue())) : "-"));
-        data.add(Arrays.asList("Tracking URL", jobExecutionInfo.getTrackingUrl()));
-        data.add(Arrays.asList("Launcher Type", jobExecutionInfo.getLauncherType().name()));
-
-        new CliTablePrinter.Builder()
-                .data(data)
-                .flags(flags)
-                .delimiterWidth(2)
-                .build()
-                .printTable();
-
-        JobInfoPrintUtils.printMetrics(jobExecutionInfo.getMetrics());
-    }
-
-    /**
-     * Print properties of a specific job
-     * @param jobExecutionInfoOptional
-     */
-    public static void printJobProperties(Optional<JobExecutionInfo> jobExecutionInfoOptional) {
-        if (!jobExecutionInfoOptional.isPresent()) {
-            System.err.println("Job not found.");
-            return;
-        }
-        List<List<String>> data = new ArrayList<>();
-        List<String> flags = Arrays.asList("", "-");
-        List<String> labels = Arrays.asList("Property Key", "Property Value");
-
-        for (Map.Entry<String, String> entry : jobExecutionInfoOptional.get().getJobProperties().entrySet()) {
-            data.add(Arrays.asList(entry.getKey(), entry.getValue()));
-        }
-
-        new CliTablePrinter.Builder()
-                .labels(labels)
-                .data(data)
-                .flags(flags)
-                .delimiterWidth(2)
-                .build()
-                .printTable();
-    }
-
-    /**
-     * Print out various metrics
-     * @param metrics Metrics to print
-     */
-    private static void printMetrics(MetricArray metrics) {
-        System.out.println();
-
-        if (metrics.size() == 0) {
-            System.out.println("No metrics found.");
-            return;
-        }
-
-        List<List<String>> data = new ArrayList<>();
-        List<String> flags = Arrays.asList("", "-");
-
-        for (Metric metric : metrics) {
-            data.add(Arrays.asList(metric.getName(), metric.getValue()));
-        }
-
-        new CliTablePrinter.Builder()
-                .data(data)
-                .flags(flags)
-                .delimiterWidth(2)
-                .build()
-                .printTable();
-    }
-
-}
diff --git a/gobblin-admin/src/main/resources/static/config.json b/gobblin-admin/src/main/resources/static/config.json
deleted file mode 100755
index f509d7e..0000000
--- a/gobblin-admin/src/main/resources/static/config.json
+++ /dev/null
@@ -1,434 +0,0 @@
-{
-  "vars": {
-    "@gray-base": "#000",
-    "@gray-darker": "lighten(@gray-base, 13.5%)",
-    "@gray-dark": "lighten(@gray-base, 20%)",
-    "@gray": "lighten(@gray-base, 33.5%)",
-    "@gray-light": "lighten(@gray-base, 46.7%)",
-    "@gray-lighter": "lighten(@gray-base, 93.5%)",
-    "@brand-primary": "#ffc700",
-    "@brand-success": "#159876",
-    "@brand-info": "#2c3a80",
-    "@brand-warning": "#fd820a",
-    "@brand-danger": "#eb172e",
-    "@body-bg": "#fafafa",
-    "@text-color": "@gray-dark",
-    "@link-color": "@brand-info",
-    "@link-hover-color": "darken(@link-color, 5%)",
-    "@link-hover-decoration": "underline",
-    "@font-family-sans-serif": "\"Open Sans\", \"Helvetica Neue\", Helvetica, Arial, sans-serif",
-    "@font-family-serif": "Georgia, \"Times New Roman\", Times, serif",
-    "@font-family-monospace": "Menlo, Monaco, Consolas, \"Courier New\", monospace",
-    "@font-family-base": "@font-family-sans-serif",
-    "@font-size-base": "14px",
-    "@font-size-large": "ceil((@font-size-base * 1.25))",
-    "@font-size-small": "ceil((@font-size-base * 0.85))",
-    "@font-size-h1": "floor((@font-size-base * 2.6))",
-    "@font-size-h2": "floor((@font-size-base * 2.15))",
-    "@font-size-h3": "ceil((@font-size-base * 1.7))",
-    "@font-size-h4": "ceil((@font-size-base * 1.25))",
-    "@font-size-h5": "@font-size-base",
-    "@font-size-h6": "ceil((@font-size-base * 0.85))",
-    "@line-height-base": "1.428571429",
-    "@line-height-computed": "floor((@font-size-base * @line-height-base))",
-    "@headings-font-family": "Montserrat, \"Helvetica Neue\", Helvetica, Arial, sans-serif",
-    "@headings-font-weight": "500",
-    "@headings-line-height": "1.1",
-    "@headings-color": "inherit",
-    "@icon-font-path": "\"../fonts/\"",
-    "@icon-font-name": "\"glyphicons-halflings-regular\"",
-    "@icon-font-svg-id": "\"glyphicons_halflingsregular\"",
-    "@padding-base-vertical": "6px",
-    "@padding-base-horizontal": "12px",
-    "@padding-large-vertical": "10px",
-    "@padding-large-horizontal": "16px",
-    "@padding-small-vertical": "5px",
-    "@padding-small-horizontal": "10px",
-    "@padding-xs-vertical": "1px",
-    "@padding-xs-horizontal": "5px",
-    "@line-height-large": "1.3333333",
-    "@line-height-small": "1.5",
-    "@border-radius-base": "10px",
-    "@border-radius-large": "12px",
-    "@border-radius-small": "8px",
-    "@component-active-color": "#fff",
-    "@component-active-bg": "@brand-primary",
-    "@caret-width-base": "4px",
-    "@caret-width-large": "5px",
-    "@table-cell-padding": "8px",
-    "@table-condensed-cell-padding": "5px",
-    "@table-bg": "transparent",
-    "@table-bg-accent": "#f9f9f9",
-    "@table-bg-hover": "#f5f5f5",
-    "@table-bg-active": "@table-bg-hover",
-    "@table-border-color": "#ddd",
-    "@btn-font-weight": "normal",
-    "@btn-default-color": "#333",
-    "@btn-default-bg": "#fff",
-    "@btn-default-border": "#ccc",
-    "@btn-primary-color": "#fff",
-    "@btn-primary-bg": "@brand-primary",
-    "@btn-primary-border": "darken(@btn-primary-bg, 3%)",
-    "@btn-success-color": "#fff",
-    "@btn-success-bg": "@brand-success",
-    "@btn-success-border": "darken(@btn-success-bg, 5%)",
-    "@btn-info-color": "#fff",
-    "@btn-info-bg": "@brand-info",
-    "@btn-info-border": "darken(@btn-info-bg, 5%)",
-    "@btn-warning-color": "#fff",
-    "@btn-warning-bg": "@brand-warning",
-    "@btn-warning-border": "darken(@btn-warning-bg, 5%)",
-    "@btn-danger-color": "#fff",
-    "@btn-danger-bg": "@brand-danger",
-    "@btn-danger-border": "darken(@btn-danger-bg, 5%)",
-    "@btn-link-disabled-color": "@gray-light",
-    "@btn-border-radius-base": "@border-radius-base",
-    "@btn-border-radius-large": "@border-radius-large",
-    "@btn-border-radius-small": "@border-radius-small",
-    "@input-bg": "#fff",
-    "@input-bg-disabled": "@gray-lighter",
-    "@input-color": "@gray",
-    "@input-border": "#ccc",
-    "@input-border-radius": "@border-radius-base",
-    "@input-border-radius-large": "@border-radius-large",
-    "@input-border-radius-small": "@border-radius-small",
-    "@input-border-focus": "#66afe9",
-    "@input-color-placeholder": "#999",
-    "@input-height-base": "(@line-height-computed + (@padding-base-vertical * 2) + 2)",
-    "@input-height-large": "(ceil(@font-size-large * @line-height-large) + (@padding-large-vertical * 2) + 2)",
-    "@input-height-small": "(floor(@font-size-small * @line-height-small) + (@padding-small-vertical * 2) + 2)",
-    "@form-group-margin-bottom": "15px",
-    "@legend-color": "@gray-dark",
-    "@legend-border-color": "#e5e5e5",
-    "@input-group-addon-bg": "@gray-lighter",
-    "@input-group-addon-border-color": "@input-border",
-    "@cursor-disabled": "not-allowed",
-    "@dropdown-bg": "#fff",
-    "@dropdown-border": "rgba(0,0,0,.15)",
-    "@dropdown-fallback-border": "#ccc",
-    "@dropdown-divider-bg": "#e5e5e5",
-    "@dropdown-link-color": "@gray-dark",
-    "@dropdown-link-hover-color": "darken(@gray-dark, 5%)",
-    "@dropdown-link-hover-bg": "#f5f5f5",
-    "@dropdown-link-active-color": "@component-active-color",
-    "@dropdown-link-active-bg": "@component-active-bg",
-    "@dropdown-link-disabled-color": "@gray-light",
-    "@dropdown-header-color": "@gray-light",
-    "@dropdown-caret-color": "#000",
-    "@screen-xs": "480px",
-    "@screen-xs-min": "@screen-xs",
-    "@screen-phone": "@screen-xs-min",
-    "@screen-sm": "768px",
-    "@screen-sm-min": "@screen-sm",
-    "@screen-tablet": "@screen-sm-min",
-    "@screen-md": "992px",
-    "@screen-md-min": "@screen-md",
-    "@screen-desktop": "@screen-md-min",
-    "@screen-lg": "1200px",
-    "@screen-lg-min": "@screen-lg",
-    "@screen-lg-desktop": "@screen-lg-min",
-    "@screen-xs-max": "(@screen-sm-min - 1)",
-    "@screen-sm-max": "(@screen-md-min - 1)",
-    "@screen-md-max": "(@screen-lg-min - 1)",
-    "@grid-columns": "12",
-    "@grid-gutter-width": "30px",
-    "@grid-float-breakpoint": "@screen-sm-min",
-    "@grid-float-breakpoint-max": "(@grid-float-breakpoint - 1)",
-    "@container-tablet": "(720px + @grid-gutter-width)",
-    "@container-sm": "@container-tablet",
-    "@container-desktop": "(940px + @grid-gutter-width)",
-    "@container-md": "@container-desktop",
-    "@container-large-desktop": "(1140px + @grid-gutter-width)",
-    "@container-lg": "@container-large-desktop",
-    "@navbar-height": "60px",
-    "@navbar-margin-bottom": "0",
-    "@navbar-border-radius": "0",
-    "@navbar-padding-horizontal": "floor((@grid-gutter-width / 2))",
-    "@navbar-padding-vertical": "((@navbar-height - @line-height-computed) / 2)",
-    "@navbar-collapse-max-height": "340px",
-    "@navbar-default-color": "@gray-lighter",
-    "@navbar-default-bg": "lighten(#131425, 5%)",
-    "@navbar-default-border": "0",
-    "@navbar-default-link-color": "@gray-lighter",
-    "@navbar-default-link-hover-color": "#fff",
-    "@navbar-default-link-hover-bg": "lighten(@brand-primary, 15%)",
-    "@navbar-default-link-active-color": "#fff",
-    "@navbar-default-link-active-bg": "@brand-primary",
-    "@navbar-default-link-disabled-color": "#ccc",
-    "@navbar-default-link-disabled-bg": "transparent",
-    "@navbar-default-brand-color": "@brand-primary",
-    "@navbar-default-brand-hover-color": "darken(@navbar-default-brand-color, 10%)",
-    "@navbar-default-brand-hover-bg": "transparent",
-    "@navbar-default-toggle-hover-bg": "#ddd",
-    "@navbar-default-toggle-icon-bar-bg": "#888",
-    "@navbar-default-toggle-border-color": "#ddd",
-    "@navbar-inverse-color": "lighten(@gray-light, 15%)",
-    "@navbar-inverse-bg": "#222",
-    "@navbar-inverse-border": "darken(@navbar-inverse-bg, 10%)",
-    "@navbar-inverse-link-color": "lighten(@gray-light, 15%)",
-    "@navbar-inverse-link-hover-color": "#fff",
-    "@navbar-inverse-link-hover-bg": "transparent",
-    "@navbar-inverse-link-active-color": "@navbar-inverse-link-hover-color",
-    "@navbar-inverse-link-active-bg": "darken(@navbar-inverse-bg, 10%)",
-    "@navbar-inverse-link-disabled-color": "#444",
-    "@navbar-inverse-link-disabled-bg": "transparent",
-    "@navbar-inverse-brand-color": "@navbar-inverse-link-color",
-    "@navbar-inverse-brand-hover-color": "#fff",
-    "@navbar-inverse-brand-hover-bg": "transparent",
-    "@navbar-inverse-toggle-hover-bg": "#333",
-    "@navbar-inverse-toggle-icon-bar-bg": "#fff",
-    "@navbar-inverse-toggle-border-color": "#333",
-    "@nav-link-padding": "10px 15px",
-    "@nav-link-hover-bg": "@gray-lighter",
-    "@nav-disabled-link-color": "@gray-light",
-    "@nav-disabled-link-hover-color": "@gray-light",
-    "@nav-tabs-border-color": "#ddd",
-    "@nav-tabs-link-hover-border-color": "@gray-lighter",
-    "@nav-tabs-active-link-hover-bg": "@body-bg",
-    "@nav-tabs-active-link-hover-color": "@gray",
-    "@nav-tabs-active-link-hover-border-color": "#ddd",
-    "@nav-tabs-justified-link-border-color": "#ddd",
-    "@nav-tabs-justified-active-link-border-color": "@body-bg",
-    "@nav-pills-border-radius": "@border-radius-base",
-    "@nav-pills-active-link-hover-bg": "@component-active-bg",
-    "@nav-pills-active-link-hover-color": "@component-active-color",
-    "@pagination-color": "@link-color",
-    "@pagination-bg": "#fff",
-    "@pagination-border": "#ddd",
-    "@pagination-hover-color": "@link-hover-color",
-    "@pagination-hover-bg": "@gray-lighter",
-    "@pagination-hover-border": "#ddd",
-    "@pagination-active-color": "#fff",
-    "@pagination-active-bg": "@brand-primary",
-    "@pagination-active-border": "@brand-primary",
-    "@pagination-disabled-color": "@gray-light",
-    "@pagination-disabled-bg": "#fff",
-    "@pagination-disabled-border": "#ddd",
-    "@pager-bg": "@pagination-bg",
-    "@pager-border": "@pagination-border",
-    "@pager-border-radius": "15px",
-    "@pager-hover-bg": "@pagination-hover-bg",
-    "@pager-active-bg": "@pagination-active-bg",
-    "@pager-active-color": "@pagination-active-color",
-    "@pager-disabled-color": "@pagination-disabled-color",
-    "@jumbotron-padding": "30px",
-    "@jumbotron-color": "inherit",
-    "@jumbotron-bg": "@gray-lighter",
-    "@jumbotron-heading-color": "inherit",
-    "@jumbotron-font-size": "ceil((@font-size-base * 1.5))",
-    "@jumbotron-heading-font-size": "ceil((@font-size-base * 4.5))",
-    "@state-success-text": "@brand-success",
-    "@state-success-bg": "lighten(@brand-success, 55%)",
-    "@state-success-border": "darken(spin(@state-success-bg, -10), 5%)",
-    "@state-info-text": "@brand-info",
-    "@state-info-bg": "lighten(@brand-info, 60%)",
-    "@state-info-border": "darken(spin(@state-info-bg, -10), 7%)",
-    "@state-warning-text": "@brand-warning",
-    "@state-warning-bg": "lighten(@brand-warning, 40%)",
-    "@state-warning-border": "darken(spin(@state-warning-bg, -10), 3%)",
-    "@state-danger-text": "@brand-danger",
-    "@state-danger-bg": "lighten(@brand-danger, 40%)",
-    "@state-danger-border": "darken(spin(@state-danger-bg, -10), 3%)",
-    "@tooltip-max-width": "200px",
-    "@tooltip-color": "#fff",
-    "@tooltip-bg": "#000",
-    "@tooltip-opacity": ".9",
-    "@tooltip-arrow-width": "5px",
-    "@tooltip-arrow-color": "@tooltip-bg",
-    "@popover-bg": "#fff",
-    "@popover-max-width": "276px",
-    "@popover-border-color": "rgba(0,0,0,.2)",
-    "@popover-fallback-border-color": "#ccc",
-    "@popover-title-bg": "darken(@popover-bg, 3%)",
-    "@popover-arrow-width": "10px",
-    "@popover-arrow-color": "@popover-bg",
-    "@popover-arrow-outer-width": "(@popover-arrow-width + 1)",
-    "@popover-arrow-outer-color": "fadein(@popover-border-color, 5%)",
-    "@popover-arrow-outer-fallback-color": "darken(@popover-fallback-border-color, 20%)",
-    "@label-default-bg": "@gray-light",
-    "@label-primary-bg": "@brand-primary",
-    "@label-success-bg": "@brand-success",
-    "@label-info-bg": "@brand-info",
-    "@label-warning-bg": "@brand-warning",
-    "@label-danger-bg": "@brand-danger",
-    "@label-color": "#fff",
-    "@label-link-hover-color": "#fff",
-    "@modal-inner-padding": "15px",
-    "@modal-title-padding": "15px",
-    "@modal-title-line-height": "@line-height-base",
-    "@modal-content-bg": "#fff",
-    "@modal-content-border-color": "rgba(0,0,0,.2)",
-    "@modal-content-fallback-border-color": "#999",
-    "@modal-backdrop-bg": "#000",
-    "@modal-backdrop-opacity": ".5",
-    "@modal-header-border-color": "#e5e5e5",
-    "@modal-footer-border-color": "@modal-header-border-color",
-    "@modal-lg": "900px",
-    "@modal-md": "600px",
-    "@modal-sm": "300px",
-    "@alert-padding": "15px",
-    "@alert-border-radius": "@border-radius-base",
-    "@alert-link-font-weight": "bold",
-    "@alert-success-bg": "@state-success-bg",
-    "@alert-success-text": "@state-success-text",
-    "@alert-success-border": "@state-success-border",
-    "@alert-info-bg": "@state-info-bg",
-    "@alert-info-text": "@state-info-text",
-    "@alert-info-border": "@state-info-border",
-    "@alert-warning-bg": "@state-warning-bg",
-    "@alert-warning-text": "@state-warning-text",
-    "@alert-warning-border": "@state-warning-border",
-    "@alert-danger-bg": "@state-danger-bg",
-    "@alert-danger-text": "@state-danger-text",
-    "@alert-danger-border": "@state-danger-border",
-    "@progress-bg": "#f5f5f5",
-    "@progress-bar-color": "#fff",
-    "@progress-border-radius": "@border-radius-base",
-    "@progress-bar-bg": "@brand-primary",
-    "@progress-bar-success-bg": "@brand-success",
-    "@progress-bar-warning-bg": "@brand-warning",
-    "@progress-bar-danger-bg": "@brand-danger",
-    "@progress-bar-info-bg": "@brand-info",
-    "@list-group-bg": "#fff",
-    "@list-group-border": "#ddd",
-    "@list-group-border-radius": "@border-radius-base",
-    "@list-group-hover-bg": "#f5f5f5",
-    "@list-group-active-color": "@component-active-color",
-    "@list-group-active-bg": "@component-active-bg",
-    "@list-group-active-border": "@list-group-active-bg",
-    "@list-group-active-text-color": "lighten(@list-group-active-bg, 40%)",
-    "@list-group-disabled-color": "@gray-light",
-    "@list-group-disabled-bg": "@gray-lighter",
-    "@list-group-disabled-text-color": "@list-group-disabled-color",
-    "@list-group-link-color": "#555",
-    "@list-group-link-hover-color": "@list-group-link-color",
-    "@list-group-link-heading-color": "#333",
-    "@panel-bg": "#fff",
-    "@panel-body-padding": "15px",
-    "@panel-heading-padding": "10px 15px",
-    "@panel-footer-padding": "@panel-heading-padding",
-    "@panel-border-radius": "@border-radius-base",
-    "@panel-inner-border": "#ddd",
-    "@panel-footer-bg": "#f5f5f5",
-    "@panel-default-text": "@gray-dark",
-    "@panel-default-border": "#ddd",
-    "@panel-default-heading-bg": "#f5f5f5",
-    "@panel-primary-text": "#fff",
-    "@panel-primary-border": "@brand-primary",
-    "@panel-primary-heading-bg": "@brand-primary",
-    "@panel-success-text": "@state-success-text",
-    "@panel-success-border": "@state-success-border",
-    "@panel-success-heading-bg": "@state-success-bg",
-    "@panel-info-text": "@state-info-text",
-    "@panel-info-border": "@state-info-border",
-    "@panel-info-heading-bg": "@state-info-bg",
-    "@panel-warning-text": "@state-warning-text",
-    "@panel-warning-border": "@state-warning-border",
-    "@panel-warning-heading-bg": "@state-warning-bg",
-    "@panel-danger-text": "@state-danger-text",
-    "@panel-danger-border": "@state-danger-border",
-    "@panel-danger-heading-bg": "@state-danger-bg",
-    "@thumbnail-padding": "4px",
-    "@thumbnail-bg": "@body-bg",
-    "@thumbnail-border": "#ddd",
-    "@thumbnail-border-radius": "@border-radius-base",
-    "@thumbnail-caption-color": "@text-color",
-    "@thumbnail-caption-padding": "9px",
-    "@well-bg": "#f5f5f5",
-    "@well-border": "darken(@well-bg, 7%)",
-    "@badge-color": "#fff",
-    "@badge-link-hover-color": "#fff",
-    "@badge-bg": "@gray-light",
-    "@badge-active-color": "@link-color",
-    "@badge-active-bg": "#fff",
-    "@badge-font-weight": "bold",
-    "@badge-line-height": "1",
-    "@badge-border-radius": "10px",
-    "@breadcrumb-padding-vertical": "8px",
-    "@breadcrumb-padding-horizontal": "15px",
-    "@breadcrumb-bg": "#f5f5f5",
-    "@breadcrumb-color": "#ccc",
-    "@breadcrumb-active-color": "@gray-light",
-    "@breadcrumb-separator": "\"/\"",
-    "@carousel-text-shadow": "0 1px 2px rgba(0,0,0,.6)",
-    "@carousel-control-color": "#fff",
-    "@carousel-control-width": "15%",
-    "@carousel-control-opacity": ".5",
-    "@carousel-control-font-size": "20px",
-    "@carousel-indicator-active-bg": "#fff",
-    "@carousel-indicator-border-color": "#fff",
-    "@carousel-caption-color": "#fff",
-    "@close-font-weight": "bold",
-    "@close-color": "#000",
-    "@close-text-shadow": "0 1px 0 #fff",
-    "@code-color": "#c7254e",
-    "@code-bg": "#f9f2f4",
-    "@kbd-color": "#fff",
-    "@kbd-bg": "#333",
-    "@pre-bg": "#f5f5f5",
-    "@pre-color": "@gray-dark",
-    "@pre-border-color": "#ccc",
-    "@pre-scrollable-max-height": "340px",
-    "@component-offset-horizontal": "180px",
-    "@text-muted": "@gray-light",
-    "@abbr-border-color": "@gray-light",
-    "@headings-small-color": "@gray-light",
-    "@blockquote-small-color": "@gray-light",
-    "@blockquote-font-size": "(@font-size-base * 1.25)",
-    "@blockquote-border-color": "@gray-lighter",
-    "@page-header-border-color": "@gray-lighter",
-    "@dl-horizontal-offset": "@component-offset-horizontal",
-    "@hr-border": "@gray-lighter"
-  },
-  "css": [
-    "print.less",
-    "type.less",
-    "code.less",
-    "grid.less",
-    "tables.less",
-    "forms.less",
-    "buttons.less",
-    "responsive-utilities.less",
-    "glyphicons.less",
-    "button-groups.less",
-    "input-groups.less",
-    "navs.less",
-    "navbar.less",
-    "breadcrumbs.less",
-    "pagination.less",
-    "pager.less",
-    "labels.less",
-    "badges.less",
-    "jumbotron.less",
-    "thumbnails.less",
-    "alerts.less",
-    "progress-bars.less",
-    "media.less",
-    "list-group.less",
-    "panels.less",
-    "responsive-embed.less",
-    "wells.less",
-    "close.less",
-    "component-animations.less",
-    "dropdowns.less",
-    "tooltip.less",
-    "popovers.less",
-    "modals.less",
-    "carousel.less"
-  ],
-  "js": [
-    "alert.js",
-    "button.js",
-    "carousel.js",
-    "dropdown.js",
-    "modal.js",
-    "tooltip.js",
-    "popover.js",
-    "tab.js",
-    "affix.js",
-    "collapse.js",
-    "scrollspy.js",
-    "transition.js"
-  ],
-  "customizerUrl": "http://getbootstrap.com/customize/?id=89be4167cfb1c31170fe"
-}
\ No newline at end of file
diff --git a/gobblin-admin/src/main/resources/static/css/bootstrap.min.css b/gobblin-admin/src/main/resources/static/css/bootstrap.min.css
deleted file mode 100755
index e6390ff..0000000
--- a/gobblin-admin/src/main/resources/static/css/bootstrap.min.css
+++ /dev/null
@@ -1,14 +0,0 @@
-/*!
- * Bootstrap v3.3.5 (http://getbootstrap.com)
- * Copyright 2011-2015 Twitter, Inc.
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
- */
-
-/*!
- * Generated using the Bootstrap Customizer (http://getbootstrap.com/customize/?id=89be4167cfb1c31170fe)
- * Config saved to config.json and https://gist.github.com/89be4167cfb1c31170fe
- *//*!
- * Bootstrap v3.3.5 (http://getbootstrap.com)
- * Copyright 2011-2015 Twitter, Inc.
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
- *//*! normalize.css v3.0.3 | MIT License | github.com/necolas/normalize.css */html{font-family:sans-serif;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}body{margin:0}article,aside,details,figcaption,figure,footer,header,hgroup,main,menu,nav,section,summary{display:block}audio,canvas,progress,video{display:inline-block;vertical-align:baseline}audio:not([controls]){display:none;height:0}[hidden],template{display:none}a{background-color:transparent}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:bold}dfn{font-style:italic}h1{font-size:2em;margin:0.67em 0}mark{background:#ff0;color:#000}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-0.5em}sub{bottom:-0.25em}img{border:0}svg:not(:root){overflow:hidden}figure{margin:1em 40px}hr{-webkit-box-sizing:content-box;-moz-box-sizing:content-box;box-sizing:content-box;height:0}pre{overflow:auto}code,kbd,pre,samp{font-family:monospace, monospace;font-size:1em}button,input,optgroup,select,textarea{color:inherit;font:inherit;margin:0}button{overflow:visible}button,select{text-transform:none}button,html input[type="button"],input[type="reset"],input[type="submit"]{-webkit-appearance:button;cursor:pointer}button[disabled],html input[disabled]{cursor:default}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}input{line-height:normal}input[type="checkbox"],input[type="radio"]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;padding:0}input[type="number"]::-webkit-inner-spin-button,input[type="number"]::-webkit-outer-spin-button{height:auto}input[type="search"]{-webkit-appearance:textfield;-webkit-box-sizing:content-box;-moz-box-sizing:content-box;box-sizing:content-box}input[type="search"]::-webkit-search-cancel-button,input[type="search"]::-webkit-search-decoration{-webkit-appearance:none}fieldset{border:1px solid #c0c0c0;margin:0 2px;padding:0.35em 0.625em 0.75em}legend{border:0;padding:0}textarea{overflow:auto}optgroup{font-weight:bold}table{border-collapse:collapse;border-spacing:0}td,th{padding:0}/*! 
Source: https://github.com/h5bp/html5-boilerplate/blob/master/src/css/main.css */@media print{*,*:before,*:after{background:transparent !important;color:#000 !important;-webkit-box-shadow:none !important;box-shadow:none !important;text-shadow:none !important}a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}a[href^="#"]:after,a[href^="javascript:"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}p,h2,h3{orphans:3;widows:3}h2,h3{page-break-after:avoid}.navbar{display:none}.btn>.caret,.dropup>.btn>.caret{border-top-color:#000 !important}.label{border:1px solid #000}.table{border-collapse:collapse !important}.table td,.table th{background-color:#fff !important}.table-bordered th,.table-bordered td{border:1px solid #ddd !important}}@font-face{font-family:'Glyphicons Halflings';src:url('../fonts/glyphicons-halflings-regular.eot');src:url('../fonts/glyphicons-halflings-regular.eot?#iefix') format('embedded-opentype'),url('../fonts/glyphicons-halflings-regular.woff2') format('woff2'),url('../fonts/glyphicons-halflings-regular.woff') format('woff'),url('../fonts/glyphicons-halflings-regular.ttf') format('truetype'),url('../fonts/glyphicons-halflings-regular.svg#glyphicons_halflingsregular') format('svg')}.glyphicon{position:relative;top:1px;display:inline-block;font-family:'Glyphicons Halflings';font-style:normal;font-weight:normal;line-height:1;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.glyphicon-asterisk:before{content:"\2a"}.glyphicon-plus:before{content:"\2b"}.glyphicon-euro:before,.glyphicon-eur:before{content:"\20ac"}.glyphicon-minus:before{content:"\2212"}.glyphicon-cloud:before{content:"\2601"}.glyphicon-envelope:before{content:"\2709"}.glyphicon-pencil:before{content:"\270f"}.glyphicon-glass:before{content:"\e001"}.glyphicon-music:before{content:"\e002"}.glyphicon-search:before{content:"\e003"}.glyphicon-heart:before{content:"\e005"}.glyphicon-star:before{content:"\e006"}.glyphicon-star-empty:before{content:"\e007"}.glyphicon-user:before{content:"\e008"}.glyphicon-film:before{content:"\e009"}.glyphicon-th-large:before{content:"\e010"}.glyphicon-th:before{content:"\e011"}.glyphicon-th-list:before{content:"\e012"}.glyphicon-ok:before{content:"\e013"}.glyphicon-remove:before{content:"\e014"}.glyphicon-zoom-in:before{content:"\e015"}.glyphicon-zoom-out:before{content:"\e016"}.glyphicon-off:before{content:"\e017"}.glyphicon-signal:before{content:"\e018"}.glyphicon-cog:before{content:"\e019"}.glyphicon-trash:before{content:"\e020"}.glyphicon-home:before{content:"\e021"}.glyphicon-file:before{content:"\e022"}.glyphicon-time:before{content:"\e023"}.glyphicon-road:before{content:"\e024"}.glyphicon-download-alt:before{content:"\e025"}.glyphicon-download:before{content:"\e026"}.glyphicon-upload:before{content:"\e027"}.glyphicon-inbox:before{content:"\e028"}.glyphicon-play-circle:before{content:"\e029"}.glyphicon-repeat:before{content:"\e030"}.glyphicon-refresh:before{content:"\e031"}.glyphicon-list-alt:before{content:"\e032"}.glyphicon-lock:before{content:"\e033"}.glyphicon-flag:before{content:"\e034"}.glyphicon-headphones:before{content:"\e035"}.glyphicon-volume-off:before{content:"\e036"}.glyphicon-volume-down:before{content:"\e037"}.glyphicon-volume-up:before{content:"\e038"}.glyphicon-qrcode:before{content:"\e039"}.glyphicon-barcode:before{content:"\e040"}.glyphicon-tag:be
fore{content:"\e041"}.glyphicon-tags:before{content:"\e042"}.glyphicon-book:before{content:"\e043"}.glyphicon-bookmark:before{content:"\e044"}.glyphicon-print:before{content:"\e045"}.glyphicon-camera:before{content:"\e046"}.glyphicon-font:before{content:"\e047"}.glyphicon-bold:before{content:"\e048"}.glyphicon-italic:before{content:"\e049"}.glyphicon-text-height:before{content:"\e050"}.glyphicon-text-width:before{content:"\e051"}.glyphicon-align-left:before{content:"\e052"}.glyphicon-align-center:before{content:"\e053"}.glyphicon-align-right:before{content:"\e054"}.glyphicon-align-justify:before{content:"\e055"}.glyphicon-list:before{content:"\e056"}.glyphicon-indent-left:before{content:"\e057"}.glyphicon-indent-right:before{content:"\e058"}.glyphicon-facetime-video:before{content:"\e059"}.glyphicon-picture:before{content:"\e060"}.glyphicon-map-marker:before{content:"\e062"}.glyphicon-adjust:before{content:"\e063"}.glyphicon-tint:before{content:"\e064"}.glyphicon-edit:before{content:"\e065"}.glyphicon-share:before{content:"\e066"}.glyphicon-check:before{content:"\e067"}.glyphicon-move:before{content:"\e068"}.glyphicon-step-backward:before{content:"\e069"}.glyphicon-fast-backward:before{content:"\e070"}.glyphicon-backward:before{content:"\e071"}.glyphicon-play:before{content:"\e072"}.glyphicon-pause:before{content:"\e073"}.glyphicon-stop:before{content:"\e074"}.glyphicon-forward:before{content:"\e075"}.glyphicon-fast-forward:before{content:"\e076"}.glyphicon-step-forward:before{content:"\e077"}.glyphicon-eject:before{content:"\e078"}.glyphicon-chevron-left:before{content:"\e079"}.glyphicon-chevron-right:before{content:"\e080"}.glyphicon-plus-sign:before{content:"\e081"}.glyphicon-minus-sign:before{content:"\e082"}.glyphicon-remove-sign:before{content:"\e083"}.glyphicon-ok-sign:before{content:"\e084"}.glyphicon-question-sign:before{content:"\e085"}.glyphicon-info-sign:before{content:"\e086"}.glyphicon-screenshot:before{content:"\e087"}.glyphicon-remove-circle:before{content:"\e088"}.glyphicon-ok-circle:before{content:"\e089"}.glyphicon-ban-circle:before{content:"\e090"}.glyphicon-arrow-left:before{content:"\e091"}.glyphicon-arrow-right:before{content:"\e092"}.glyphicon-arrow-up:before{content:"\e093"}.glyphicon-arrow-down:before{content:"\e094"}.glyphicon-share-alt:before{content:"\e095"}.glyphicon-resize-full:before{content:"\e096"}.glyphicon-resize-small:before{content:"\e097"}.glyphicon-exclamation-sign:before{content:"\e101"}.glyphicon-gift:before{content:"\e102"}.glyphicon-leaf:before{content:"\e103"}.glyphicon-fire:before{content:"\e104"}.glyphicon-eye-open:before{content:"\e105"}.glyphicon-eye-close:before{content:"\e106"}.glyphicon-warning-sign:before{content:"\e107"}.glyphicon-plane:before{content:"\e108"}.glyphicon-calendar:before{content:"\e109"}.glyphicon-random:before{content:"\e110"}.glyphicon-comment:before{content:"\e111"}.glyphicon-magnet:before{content:"\e112"}.glyphicon-chevron-up:before{content:"\e113"}.glyphicon-chevron-down:before{content:"\e114"}.glyphicon-retweet:before{content:"\e115"}.glyphicon-shopping-cart:before{content:"\e116"}.glyphicon-folder-close:before{content:"\e117"}.glyphicon-folder-open:before{content:"\e118"}.glyphicon-resize-vertical:before{content:"\e119"}.glyphicon-resize-horizontal:before{content:"\e120"}.glyphicon-hdd:before{content:"\e121"}.glyphicon-bullhorn:before{content:"\e122"}.glyphicon-bell:before{content:"\e123"}.glyphicon-certificate:before{content:"\e124"}.glyphicon-thumbs-up:before{content:"\e125"}.glyphicon-thumbs-down:before{content:"
\e126"}.glyphicon-hand-right:before{content:"\e127"}.glyphicon-hand-left:before{content:"\e128"}.glyphicon-hand-up:before{content:"\e129"}.glyphicon-hand-down:before{content:"\e130"}.glyphicon-circle-arrow-right:before{content:"\e131"}.glyphicon-circle-arrow-left:before{content:"\e132"}.glyphicon-circle-arrow-up:before{content:"\e133"}.glyphicon-circle-arrow-down:before{content:"\e134"}.glyphicon-globe:before{content:"\e135"}.glyphicon-wrench:before{content:"\e136"}.glyphicon-tasks:before{content:"\e137"}.glyphicon-filter:before{content:"\e138"}.glyphicon-briefcase:before{content:"\e139"}.glyphicon-fullscreen:before{content:"\e140"}.glyphicon-dashboard:before{content:"\e141"}.glyphicon-paperclip:before{content:"\e142"}.glyphicon-heart-empty:before{content:"\e143"}.glyphicon-link:before{content:"\e144"}.glyphicon-phone:before{content:"\e145"}.glyphicon-pushpin:before{content:"\e146"}.glyphicon-usd:before{content:"\e148"}.glyphicon-gbp:before{content:"\e149"}.glyphicon-sort:before{content:"\e150"}.glyphicon-sort-by-alphabet:before{content:"\e151"}.glyphicon-sort-by-alphabet-alt:before{content:"\e152"}.glyphicon-sort-by-order:before{content:"\e153"}.glyphicon-sort-by-order-alt:before{content:"\e154"}.glyphicon-sort-by-attributes:before{content:"\e155"}.glyphicon-sort-by-attributes-alt:before{content:"\e156"}.glyphicon-unchecked:before{content:"\e157"}.glyphicon-expand:before{content:"\e158"}.glyphicon-collapse-down:before{content:"\e159"}.glyphicon-collapse-up:before{content:"\e160"}.glyphicon-log-in:before{content:"\e161"}.glyphicon-flash:before{content:"\e162"}.glyphicon-log-out:before{content:"\e163"}.glyphicon-new-window:before{content:"\e164"}.glyphicon-record:before{content:"\e165"}.glyphicon-save:before{content:"\e166"}.glyphicon-open:before{content:"\e167"}.glyphicon-saved:before{content:"\e168"}.glyphicon-import:before{content:"\e169"}.glyphicon-export:before{content:"\e170"}.glyphicon-send:before{content:"\e171"}.glyphicon-floppy-disk:before{content:"\e172"}.glyphicon-floppy-saved:before{content:"\e173"}.glyphicon-floppy-remove:before{content:"\e174"}.glyphicon-floppy-save:before{content:"\e175"}.glyphicon-floppy-open:before{content:"\e176"}.glyphicon-credit-card:before{content:"\e177"}.glyphicon-transfer:before{content:"\e178"}.glyphicon-cutlery:before{content:"\e179"}.glyphicon-header:before{content:"\e180"}.glyphicon-compressed:before{content:"\e181"}.glyphicon-earphone:before{content:"\e182"}.glyphicon-phone-alt:before{content:"\e183"}.glyphicon-tower:before{content:"\e184"}.glyphicon-stats:before{content:"\e185"}.glyphicon-sd-video:before{content:"\e186"}.glyphicon-hd-video:before{content:"\e187"}.glyphicon-subtitles:before{content:"\e188"}.glyphicon-sound-stereo:before{content:"\e189"}.glyphicon-sound-dolby:before{content:"\e190"}.glyphicon-sound-5-1:before{content:"\e191"}.glyphicon-sound-6-1:before{content:"\e192"}.glyphicon-sound-7-1:before{content:"\e193"}.glyphicon-copyright-mark:before{content:"\e194"}.glyphicon-registration-mark:before{content:"\e195"}.glyphicon-cloud-download:before{content:"\e197"}.glyphicon-cloud-upload:before{content:"\e198"}.glyphicon-tree-conifer:before{content:"\e199"}.glyphicon-tree-deciduous:before{content:"\e200"}.glyphicon-cd:before{content:"\e201"}.glyphicon-save-file:before{content:"\e202"}.glyphicon-open-file:before{content:"\e203"}.glyphicon-level-up:before{content:"\e204"}.glyphicon-copy:before{content:"\e205"}.glyphicon-paste:before{content:"\e206"}.glyphicon-alert:before{content:"\e209"}.glyphicon-equalizer:before{content:"\e210"}.glyph
icon-king:before{content:"\e211"}.glyphicon-queen:before{content:"\e212"}.glyphicon-pawn:before{content:"\e213"}.glyphicon-bishop:before{content:"\e214"}.glyphicon-knight:before{content:"\e215"}.glyphicon-baby-formula:before{content:"\e216"}.glyphicon-tent:before{content:"\26fa"}.glyphicon-blackboard:before{content:"\e218"}.glyphicon-bed:before{content:"\e219"}.glyphicon-apple:before{content:"\f8ff"}.glyphicon-erase:before{content:"\e221"}.glyphicon-hourglass:before{content:"\231b"}.glyphicon-lamp:before{content:"\e223"}.glyphicon-duplicate:before{content:"\e224"}.glyphicon-piggy-bank:before{content:"\e225"}.glyphicon-scissors:before{content:"\e226"}.glyphicon-bitcoin:before{content:"\e227"}.glyphicon-btc:before{content:"\e227"}.glyphicon-xbt:before{content:"\e227"}.glyphicon-yen:before{content:"\00a5"}.glyphicon-jpy:before{content:"\00a5"}.glyphicon-ruble:before{content:"\20bd"}.glyphicon-rub:before{content:"\20bd"}.glyphicon-scale:before{content:"\e230"}.glyphicon-ice-lolly:before{content:"\e231"}.glyphicon-ice-lolly-tasted:before{content:"\e232"}.glyphicon-education:before{content:"\e233"}.glyphicon-option-horizontal:before{content:"\e234"}.glyphicon-option-vertical:before{content:"\e235"}.glyphicon-menu-hamburger:before{content:"\e236"}.glyphicon-modal-window:before{content:"\e237"}.glyphicon-oil:before{content:"\e238"}.glyphicon-grain:before{content:"\e239"}.glyphicon-sunglasses:before{content:"\e240"}.glyphicon-text-size:before{content:"\e241"}.glyphicon-text-color:before{content:"\e242"}.glyphicon-text-background:before{content:"\e243"}.glyphicon-object-align-top:before{content:"\e244"}.glyphicon-object-align-bottom:before{content:"\e245"}.glyphicon-object-align-horizontal:before{content:"\e246"}.glyphicon-object-align-left:before{content:"\e247"}.glyphicon-object-align-vertical:before{content:"\e248"}.glyphicon-object-align-right:before{content:"\e249"}.glyphicon-triangle-right:before{content:"\e250"}.glyphicon-triangle-left:before{content:"\e251"}.glyphicon-triangle-bottom:before{content:"\e252"}.glyphicon-triangle-top:before{content:"\e253"}.glyphicon-console:before{content:"\e254"}.glyphicon-superscript:before{content:"\e255"}.glyphicon-subscript:before{content:"\e256"}.glyphicon-menu-left:before{content:"\e257"}.glyphicon-menu-right:before{content:"\e258"}.glyphicon-menu-down:before{content:"\e259"}.glyphicon-menu-up:before{content:"\e260"}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}*:before,*:after{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:10px;-webkit-tap-highlight-color:rgba(0,0,0,0)}body{font-family:"Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;font-size:14px;line-height:1.42857143;color:#333;background-color:#fafafa}input,button,select,textarea{font-family:inherit;font-size:inherit;line-height:inherit}a{color:#2c3a80;text-decoration:none}a:hover,a:focus{color:#25316d;text-decoration:underline}a:focus{outline:thin dotted;outline:5px auto -webkit-focus-ring-color;outline-offset:-2px}figure{margin:0}img{vertical-align:middle}.img-responsive,.thumbnail>img,.thumbnail a>img,.carousel-inner>.item>img,.carousel-inner>.item>a>img{display:block;max-width:100%;height:auto}.img-rounded{border-radius:12px}.img-thumbnail{padding:4px;line-height:1.42857143;background-color:#fafafa;border:1px solid #ddd;border-radius:10px;-webkit-transition:all .2s ease-in-out;-o-transition:all .2s ease-in-out;transition:all .2s 
ease-in-out;display:inline-block;max-width:100%;height:auto}.img-circle{border-radius:50%}hr{margin-top:20px;margin-bottom:20px;border:0;border-top:1px solid #eee}.sr-only{position:absolute;width:1px;height:1px;margin:-1px;padding:0;overflow:hidden;clip:rect(0, 0, 0, 0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}[role="button"]{cursor:pointer}h1,h2,h3,h4,h5,h6,.h1,.h2,.h3,.h4,.h5,.h6{font-family:Montserrat,"Helvetica Neue",Helvetica,Arial,sans-serif;font-weight:500;line-height:1.1;color:inherit}h1 small,h2 small,h3 small,h4 small,h5 small,h6 small,.h1 small,.h2 small,.h3 small,.h4 small,.h5 small,.h6 small,h1 .small,h2 .small,h3 .small,h4 .small,h5 .small,h6 .small,.h1 .small,.h2 .small,.h3 .small,.h4 .small,.h5 .small,.h6 .small{font-weight:normal;line-height:1;color:#777}h1,.h1,h2,.h2,h3,.h3{margin-top:20px;margin-bottom:10px}h1 small,.h1 small,h2 small,.h2 small,h3 small,.h3 small,h1 .small,.h1 .small,h2 .small,.h2 .small,h3 .small,.h3 .small{font-size:65%}h4,.h4,h5,.h5,h6,.h6{margin-top:10px;margin-bottom:10px}h4 small,.h4 small,h5 small,.h5 small,h6 small,.h6 small,h4 .small,.h4 .small,h5 .small,.h5 .small,h6 .small,.h6 .small{font-size:75%}h1,.h1{font-size:36px}h2,.h2{font-size:30px}h3,.h3{font-size:24px}h4,.h4{font-size:18px}h5,.h5{font-size:14px}h6,.h6{font-size:12px}p{margin:0 0 10px}.lead{margin-bottom:20px;font-size:16px;font-weight:300;line-height:1.4}@media (min-width:768px){.lead{font-size:21px}}small,.small{font-size:85%}mark,.mark{background-color:#ffe9d4;padding:.2em}.text-left{text-align:left}.text-right{text-align:right}.text-center{text-align:center}.text-justify{text-align:justify}.text-nowrap{white-space:nowrap}.text-lowercase{text-transform:lowercase}.text-uppercase{text-transform:uppercase}.text-capitalize{text-transform:capitalize}.text-muted{color:#777}.text-primary{color:#ffc700}a.text-primary:hover,a.text-primary:focus{color:#cc9f00}.text-success{color:#159876}a.text-success:hover,a.text-success:focus{color:#0f6b53}.text-info{color:#2c3a80}a.text-info:hover,a.text-info:focus{color:#1f295a}.text-warning{color:#fd820a}a.text-warning:hover,a.text-warning:focus{color:#d26902}.text-danger{color:#eb172e}a.text-danger:hover,a.text-danger:focus{color:#bf1023}.bg-primary{color:#fff;background-color:#ffc700}a.bg-primary:hover,a.bg-primary:focus{background-color:#cc9f00}.bg-success{background-color:#cdf8ed}a.bg-success:hover,a.bg-success:focus{background-color:#a1f2dd}.bg-info{background-color:#e7eaf7}a.bg-info:hover,a.bg-info:focus{background-color:#c1c8ea}.bg-warning{background-color:#ffe9d4}a.bg-warning:hover,a.bg-warning:focus{background-color:#fecfa2}.bg-danger{background-color:#fbd3d7}a.bg-danger:hover,a.bg-danger:focus{background-color:#f7a4ad}.page-header{padding-bottom:9px;margin:40px 0 20px;border-bottom:1px solid #eee}ul,ol{margin-top:0;margin-bottom:10px}ul ul,ol ul,ul ol,ol ol{margin-bottom:0}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none;margin-left:-5px}.list-inline>li{display:inline-block;padding-left:5px;padding-right:5px}dl{margin-top:0;margin-bottom:20px}dt,dd{line-height:1.42857143}dt{font-weight:bold}dd{margin-left:0}@media (min-width:768px){.dl-horizontal dt{float:left;width:160px;clear:left;text-align:right;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.dl-horizontal dd{margin-left:180px}}abbr[title],abbr[data-original-title]{cursor:help;border-bottom:1px dotted 
#777}.initialism{font-size:90%;text-transform:uppercase}blockquote{padding:10px 20px;margin:0 0 20px;font-size:17.5px;border-left:5px solid #eee}blockquote p:last-child,blockquote ul:last-child,blockquote ol:last-child{margin-bottom:0}blockquote footer,blockquote small,blockquote .small{display:block;font-size:80%;line-height:1.42857143;color:#777}blockquote footer:before,blockquote small:before,blockquote .small:before{content:'\2014 \00A0'}.blockquote-reverse,blockquote.pull-right{padding-right:15px;padding-left:0;border-right:5px solid #eee;border-left:0;text-align:right}.blockquote-reverse footer:before,blockquote.pull-right footer:before,.blockquote-reverse small:before,blockquote.pull-right small:before,.blockquote-reverse .small:before,blockquote.pull-right .small:before{content:''}.blockquote-reverse footer:after,blockquote.pull-right footer:after,.blockquote-reverse small:after,blockquote.pull-right small:after,.blockquote-reverse .small:after,blockquote.pull-right .small:after{content:'\00A0 \2014'}address{margin-bottom:20px;font-style:normal;line-height:1.42857143}code,kbd,pre,samp{font-family:Menlo,Monaco,Consolas,"Courier New",monospace}code{padding:2px 4px;font-size:90%;color:#c7254e;background-color:#f9f2f4;border-radius:10px}kbd{padding:2px 4px;font-size:90%;color:#fff;background-color:#333;border-radius:8px;-webkit-box-shadow:inset 0 -1px 0 rgba(0,0,0,0.25);box-shadow:inset 0 -1px 0 rgba(0,0,0,0.25)}kbd kbd{padding:0;font-size:100%;font-weight:bold;-webkit-box-shadow:none;box-shadow:none}pre{display:block;padding:9.5px;margin:0 0 10px;font-size:13px;line-height:1.42857143;word-break:break-all;word-wrap:break-word;color:#333;background-color:#f5f5f5;border:1px solid #ccc;border-radius:10px}pre code{padding:0;font-size:inherit;color:inherit;white-space:pre-wrap;background-color:transparent;border-radius:0}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{margin-right:auto;margin-left:auto;padding-left:15px;padding-right:15px}@media (min-width:768px){.container{width:750px}}@media (min-width:992px){.container{width:970px}}@media (min-width:1200px){.container{width:1170px}}.container-fluid{margin-right:auto;margin-left:auto;padding-left:15px;padding-right:15px}.row{margin-left:-15px;margin-right:-15px}.col-xs-1, .col-sm-1, .col-md-1, .col-lg-1, .col-xs-2, .col-sm-2, .col-md-2, .col-lg-2, .col-xs-3, .col-sm-3, .col-md-3, .col-lg-3, .col-xs-4, .col-sm-4, .col-md-4, .col-lg-4, .col-xs-5, .col-sm-5, .col-md-5, .col-lg-5, .col-xs-6, .col-sm-6, .col-md-6, .col-lg-6, .col-xs-7, .col-sm-7, .col-md-7, .col-lg-7, .col-xs-8, .col-sm-8, .col-md-8, .col-lg-8, .col-xs-9, .col-sm-9, .col-md-9, .col-lg-9, .col-xs-10, .col-sm-10, .col-md-10, .col-lg-10, .col-xs-11, .col-sm-11, .col-md-11, .col-lg-11, .col-xs-12, .col-sm-12, .col-md-12, .col-lg-12{position:relative;min-height:1px;padding-left:15px;padding-right:15px}.col-xs-1, .col-xs-2, .col-xs-3, .col-xs-4, .col-xs-5, .col-xs-6, .col-xs-7, .col-xs-8, .col-xs-9, .col-xs-10, .col-xs-11, 
.col-xs-12{float:left}.col-xs-12{width:100%}.col-xs-11{width:91.66666667%}.col-xs-10{width:83.33333333%}.col-xs-9{width:75%}.col-xs-8{width:66.66666667%}.col-xs-7{width:58.33333333%}.col-xs-6{width:50%}.col-xs-5{width:41.66666667%}.col-xs-4{width:33.33333333%}.col-xs-3{width:25%}.col-xs-2{width:16.66666667%}.col-xs-1{width:8.33333333%}.col-xs-pull-12{right:100%}.col-xs-pull-11{right:91.66666667%}.col-xs-pull-10{right:83.33333333%}.col-xs-pull-9{right:75%}.col-xs-pull-8{right:66.66666667%}.col-xs-pull-7{right:58.33333333%}.col-xs-pull-6{right:50%}.col-xs-pull-5{right:41.66666667%}.col-xs-pull-4{right:33.33333333%}.col-xs-pull-3{right:25%}.col-xs-pull-2{right:16.66666667%}.col-xs-pull-1{right:8.33333333%}.col-xs-pull-0{right:auto}.col-xs-push-12{left:100%}.col-xs-push-11{left:91.66666667%}.col-xs-push-10{left:83.33333333%}.col-xs-push-9{left:75%}.col-xs-push-8{left:66.66666667%}.col-xs-push-7{left:58.33333333%}.col-xs-push-6{left:50%}.col-xs-push-5{left:41.66666667%}.col-xs-push-4{left:33.33333333%}.col-xs-push-3{left:25%}.col-xs-push-2{left:16.66666667%}.col-xs-push-1{left:8.33333333%}.col-xs-push-0{left:auto}.col-xs-offset-12{margin-left:100%}.col-xs-offset-11{margin-left:91.66666667%}.col-xs-offset-10{margin-left:83.33333333%}.col-xs-offset-9{margin-left:75%}.col-xs-offset-8{margin-left:66.66666667%}.col-xs-offset-7{margin-left:58.33333333%}.col-xs-offset-6{margin-left:50%}.col-xs-offset-5{margin-left:41.66666667%}.col-xs-offset-4{margin-left:33.33333333%}.col-xs-offset-3{margin-left:25%}.col-xs-offset-2{margin-left:16.66666667%}.col-xs-offset-1{margin-left:8.33333333%}.col-xs-offset-0{margin-left:0}@media (min-width:768px){.col-sm-1, .col-sm-2, .col-sm-3, .col-sm-4, .col-sm-5, .col-sm-6, .col-sm-7, .col-sm-8, .col-sm-9, .col-sm-10, .col-sm-11, .col-sm-12{float:left}.col-sm-12{width:100%}.col-sm-11{width:91.66666667%}.col-sm-10{width:83.33333333%}.col-sm-9{width:75%}.col-sm-8{width:66.66666667%}.col-sm-7{width:58.33333333%}.col-sm-6{width:50%}.col-sm-5{width:41.66666667%}.col-sm-4{width:33.33333333%}.col-sm-3{width:25%}.col-sm-2{width:16.66666667%}.col-sm-1{width:8.33333333%}.col-sm-pull-12{right:100%}.col-sm-pull-11{right:91.66666667%}.col-sm-pull-10{right:83.33333333%}.col-sm-pull-9{right:75%}.col-sm-pull-8{right:66.66666667%}.col-sm-pull-7{right:58.33333333%}.col-sm-pull-6{right:50%}.col-sm-pull-5{right:41.66666667%}.col-sm-pull-4{right:33.33333333%}.col-sm-pull-3{right:25%}.col-sm-pull-2{right:16.66666667%}.col-sm-pull-1{right:8.33333333%}.col-sm-pull-0{right:auto}.col-sm-push-12{left:100%}.col-sm-push-11{left:91.66666667%}.col-sm-push-10{left:83.33333333%}.col-sm-push-9{left:75%}.col-sm-push-8{left:66.66666667%}.col-sm-push-7{left:58.33333333%}.col-sm-push-6{left:50%}.col-sm-push-5{left:41.66666667%}.col-sm-push-4{left:33.33333333%}.col-sm-push-3{left:25%}.col-sm-push-2{left:16.66666667%}.col-sm-push-1{left:8.33333333%}.col-sm-push-0{left:auto}.col-sm-offset-12{margin-left:100%}.col-sm-offset-11{margin-left:91.66666667%}.col-sm-offset-10{margin-left:83.33333333%}.col-sm-offset-9{margin-left:75%}.col-sm-offset-8{margin-left:66.66666667%}.col-sm-offset-7{margin-left:58.33333333%}.col-sm-offset-6{margin-left:50%}.col-sm-offset-5{margin-left:41.66666667%}.col-sm-offset-4{margin-left:33.33333333%}.col-sm-offset-3{margin-left:25%}.col-sm-offset-2{margin-left:16.66666667%}.col-sm-offset-1{margin-left:8.33333333%}.col-sm-offset-0{margin-left:0}}@media (min-width:992px){.col-md-1, .col-md-2, .col-md-3, .col-md-4, .col-md-5, .col-md-6, .col-md-7, .col-md-8, .col-md-9, .col-md-10, .col-md-11, 
.col-md-12{float:left}.col-md-12{width:100%}.col-md-11{width:91.66666667%}.col-md-10{width:83.33333333%}.col-md-9{width:75%}.col-md-8{width:66.66666667%}.col-md-7{width:58.33333333%}.col-md-6{width:50%}.col-md-5{width:41.66666667%}.col-md-4{width:33.33333333%}.col-md-3{width:25%}.col-md-2{width:16.66666667%}.col-md-1{width:8.33333333%}.col-md-pull-12{right:100%}.col-md-pull-11{right:91.66666667%}.col-md-pull-10{right:83.33333333%}.col-md-pull-9{right:75%}.col-md-pull-8{right:66.66666667%}.col-md-pull-7{right:58.33333333%}.col-md-pull-6{right:50%}.col-md-pull-5{right:41.66666667%}.col-md-pull-4{right:33.33333333%}.col-md-pull-3{right:25%}.col-md-pull-2{right:16.66666667%}.col-md-pull-1{right:8.33333333%}.col-md-pull-0{right:auto}.col-md-push-12{left:100%}.col-md-push-11{left:91.66666667%}.col-md-push-10{left:83.33333333%}.col-md-push-9{left:75%}.col-md-push-8{left:66.66666667%}.col-md-push-7{left:58.33333333%}.col-md-push-6{left:50%}.col-md-push-5{left:41.66666667%}.col-md-push-4{left:33.33333333%}.col-md-push-3{left:25%}.col-md-push-2{left:16.66666667%}.col-md-push-1{left:8.33333333%}.col-md-push-0{left:auto}.col-md-offset-12{margin-left:100%}.col-md-offset-11{margin-left:91.66666667%}.col-md-offset-10{margin-left:83.33333333%}.col-md-offset-9{margin-left:75%}.col-md-offset-8{margin-left:66.66666667%}.col-md-offset-7{margin-left:58.33333333%}.col-md-offset-6{margin-left:50%}.col-md-offset-5{margin-left:41.66666667%}.col-md-offset-4{margin-left:33.33333333%}.col-md-offset-3{margin-left:25%}.col-md-offset-2{margin-left:16.66666667%}.col-md-offset-1{margin-left:8.33333333%}.col-md-offset-0{margin-left:0}}@media (min-width:1200px){.col-lg-1, .col-lg-2, .col-lg-3, .col-lg-4, .col-lg-5, .col-lg-6, .col-lg-7, .col-lg-8, .col-lg-9, .col-lg-10, .col-lg-11, .col-lg-12{float:left}.col-lg-12{width:100%}.col-lg-11{width:91.66666667%}.col-lg-10{width:83.33333333%}.col-lg-9{width:75%}.col-lg-8{width:66.66666667%}.col-lg-7{width:58.33333333%}.col-lg-6{width:50%}.col-lg-5{width:41.66666667%}.col-lg-4{width:33.33333333%}.col-lg-3{width:25%}.col-lg-2{width:16.66666667%}.col-lg-1{width:8.33333333%}.col-lg-pull-12{right:100%}.col-lg-pull-11{right:91.66666667%}.col-lg-pull-10{right:83.33333333%}.col-lg-pull-9{right:75%}.col-lg-pull-8{right:66.66666667%}.col-lg-pull-7{right:58.33333333%}.col-lg-pull-6{right:50%}.col-lg-pull-5{right:41.66666667%}.col-lg-pull-4{right:33.33333333%}.col-lg-pull-3{right:25%}.col-lg-pull-2{right:16.66666667%}.col-lg-pull-1{right:8.33333333%}.col-lg-pull-0{right:auto}.col-lg-push-12{left:100%}.col-lg-push-11{left:91.66666667%}.col-lg-push-10{left:83.33333333%}.col-lg-push-9{left:75%}.col-lg-push-8{left:66.66666667%}.col-lg-push-7{left:58.33333333%}.col-lg-push-6{left:50%}.col-lg-push-5{left:41.66666667%}.col-lg-push-4{left:33.33333333%}.col-lg-push-3{left:25%}.col-lg-push-2{left:16.66666667%}.col-lg-push-1{left:8.33333333%}.col-lg-push-0{left:auto}.col-lg-offset-12{margin-left:100%}.col-lg-offset-11{margin-left:91.66666667%}.col-lg-offset-10{margin-left:83.33333333%}.col-lg-offset-9{margin-left:75%}.col-lg-offset-8{margin-left:66.66666667%}.col-lg-offset-7{margin-left:58.33333333%}.col-lg-offset-6{margin-left:50%}.col-lg-offset-5{margin-left:41.66666667%}.col-lg-offset-4{margin-left:33.33333333%}.col-lg-offset-3{margin-left:25%}.col-lg-offset-2{margin-left:16.66666667%}.col-lg-offset-1{margin-left:8.33333333%}.col-lg-offset-0{margin-left:0}}table{background-color:transparent}caption{padding-top:8px;padding-bottom:8px;color:#777;text-align:left}th{text-align:left}.table{width:100%;max-wi
dth:100%;margin-bottom:20px}.table>thead>tr>th,.table>tbody>tr>th,.table>tfoot>tr>th,.table>thead>tr>td,.table>tbody>tr>td,.table>tfoot>tr>td{padding:8px;line-height:1.42857143;vertical-align:top;border-top:1px solid #ddd}.table>thead>tr>th{vertical-align:bottom;border-bottom:2px solid #ddd}.table>caption+thead>tr:first-child>th,.table>colgroup+thead>tr:first-child>th,.table>thead:first-child>tr:first-child>th,.table>caption+thead>tr:first-child>td,.table>colgroup+thead>tr:first-child>td,.table>thead:first-child>tr:first-child>td{border-top:0}.table>tbody+tbody{border-top:2px solid #ddd}.table .table{background-color:#fafafa}.table-condensed>thead>tr>th,.table-condensed>tbody>tr>th,.table-condensed>tfoot>tr>th,.table-condensed>thead>tr>td,.table-condensed>tbody>tr>td,.table-condensed>tfoot>tr>td{padding:5px}.table-bordered{border:1px solid #ddd}.table-bordered>thead>tr>th,.table-bordered>tbody>tr>th,.table-bordered>tfoot>tr>th,.table-bordered>thead>tr>td,.table-bordered>tbody>tr>td,.table-bordered>tfoot>tr>td{border:1px solid #ddd}.table-bordered>thead>tr>th,.table-bordered>thead>tr>td{border-bottom-width:2px}.table-striped>tbody>tr:nth-of-type(odd){background-color:#f9f9f9}.table-hover>tbody>tr:hover{background-color:#f5f5f5}table col[class*="col-"]{position:static;float:none;display:table-column}table td[class*="col-"],table th[class*="col-"]{position:static;float:none;display:table-cell}.table>thead>tr>td.active,.table>tbody>tr>td.active,.table>tfoot>tr>td.active,.table>thead>tr>th.active,.table>tbody>tr>th.active,.table>tfoot>tr>th.active,.table>thead>tr.active>td,.table>tbody>tr.active>td,.table>tfoot>tr.active>td,.table>thead>tr.active>th,.table>tbody>tr.active>th,.table>tfoot>tr.active>th{background-color:#f5f5f5}.table-hover>tbody>tr>td.active:hover,.table-hover>tbody>tr>th.active:hover,.table-hover>tbody>tr.active:hover>td,.table-hover>tbody>tr:hover>.active,.table-hover>tbody>tr.active:hover>th{background-color:#e8e8e8}.table>thead>tr>td.success,.table>tbody>tr>td.success,.table>tfoot>tr>td.success,.table>thead>tr>th.success,.table>tbody>tr>th.success,.table>tfoot>tr>th.success,.table>thead>tr.success>td,.table>tbody>tr.success>td,.table>tfoot>tr.success>td,.table>thead>tr.success>th,.table>tbody>tr.success>th,.table>tfoot>tr.success>th{background-color:#cdf8ed}.table-hover>tbody>tr>td.success:hover,.table-hover>tbody>tr>th.success:hover,.table-hover>tbody>tr.success:hover>td,.table-hover>tbody>tr:hover>.success,.table-hover>tbody>tr.success:hover>th{background-color:#b7f5e5}.table>thead>tr>td.info,.table>tbody>tr>td.info,.table>tfoot>tr>td.info,.table>thead>tr>th.info,.table>tbody>tr>th.info,.table>tfoot>tr>th.info,.table>thead>tr.info>td,.table>tbody>tr.info>td,.table>tfoot>tr.info>td,.table>thead>tr.info>th,.table>tbody>tr.info>th,.table>tfoot>tr.info>th{background-color:#e7eaf7}.table-hover>tbody>tr>td.info:hover,.table-hover>tbody>tr>th.info:hover,.table-hover>tbody>tr.info:hover>td,.table-hover>tbody>tr:hover>.info,.table-hover>tbody>tr.info:hover>th{background-color:#d4d9f0}.table>thead>tr>td.warning,.table>tbody>tr>td.warning,.table>tfoot>tr>td.warning,.table>thead>tr>th.warning,.table>tbody>tr>th.warning,.table>tfoot>tr>th.warning,.table>thead>tr.warning>td,.table>tbody>tr.warning>td,.table>tfoot>tr.warning>td,.table>thead>tr.warning>th,.table>tbody>tr.warning>th,.table>tfoot>tr.warning>th{background-color:#ffe9d4}.table-hover>tbody>tr>td.warning:hover,.table-hover>tbody>tr>th.warning:hover,.table-hover>tbody>tr.warning:hover>td,.table-hover>tbody>tr:hover>.warning,.table
-hover>tbody>tr.warning:hover>th{background-color:#fedcbb}.table>thead>tr>td.danger,.table>tbody>tr>td.danger,.table>tfoot>tr>td.danger,.table>thead>tr>th.danger,.table>tbody>tr>th.danger,.table>tfoot>tr>th.danger,.table>thead>tr.danger>td,.table>tbody>tr.danger>td,.table>tfoot>tr.danger>td,.table>thead>tr.danger>th,.table>tbody>tr.danger>th,.table>tfoot>tr.danger>th{background-color:#fbd3d7}.table-hover>tbody>tr>td.danger:hover,.table-hover>tbody>tr>th.danger:hover,.table-hover>tbody>tr.danger:hover>td,.table-hover>tbody>tr:hover>.danger,.table-hover>tbody>tr.danger:hover>th{background-color:#f9bbc2}.table-responsive{overflow-x:auto;min-height:0.01%}@media screen and (max-width:767px){.table-responsive{width:100%;margin-bottom:15px;overflow-y:hidden;-ms-overflow-style:-ms-autohiding-scrollbar;border:1px solid #ddd}.table-responsive>.table{margin-bottom:0}.table-responsive>.table>thead>tr>th,.table-responsive>.table>tbody>tr>th,.table-responsive>.table>tfoot>tr>th,.table-responsive>.table>thead>tr>td,.table-responsive>.table>tbody>tr>td,.table-responsive>.table>tfoot>tr>td{white-space:nowrap}.table-responsive>.table-bordered{border:0}.table-responsive>.table-bordered>thead>tr>th:first-child,.table-responsive>.table-bordered>tbody>tr>th:first-child,.table-responsive>.table-bordered>tfoot>tr>th:first-child,.table-responsive>.table-bordered>thead>tr>td:first-child,.table-responsive>.table-bordered>tbody>tr>td:first-child,.table-responsive>.table-bordered>tfoot>tr>td:first-child{border-left:0}.table-responsive>.table-bordered>thead>tr>th:last-child,.table-responsive>.table-bordered>tbody>tr>th:last-child,.table-responsive>.table-bordered>tfoot>tr>th:last-child,.table-responsive>.table-bordered>thead>tr>td:last-child,.table-responsive>.table-bordered>tbody>tr>td:last-child,.table-responsive>.table-bordered>tfoot>tr>td:last-child{border-right:0}.table-responsive>.table-bordered>tbody>tr:last-child>th,.table-responsive>.table-bordered>tfoot>tr:last-child>th,.table-responsive>.table-bordered>tbody>tr:last-child>td,.table-responsive>.table-bordered>tfoot>tr:last-child>td{border-bottom:0}}fieldset{padding:0;margin:0;border:0;min-width:0}legend{display:block;width:100%;padding:0;margin-bottom:20px;font-size:21px;line-height:inherit;color:#333;border:0;border-bottom:1px solid #e5e5e5}label{display:inline-block;max-width:100%;margin-bottom:5px;font-weight:bold}input[type="search"]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type="radio"],input[type="checkbox"]{margin:4px 0 0;margin-top:1px \9;line-height:normal}input[type="file"]{display:block}input[type="range"]{display:block;width:100%}select[multiple],select[size]{height:auto}input[type="file"]:focus,input[type="radio"]:focus,input[type="checkbox"]:focus{outline:thin dotted;outline:5px auto -webkit-focus-ring-color;outline-offset:-2px}output{display:block;padding-top:7px;font-size:14px;line-height:1.42857143;color:#555}.form-control{display:block;width:100%;height:34px;padding:6px 12px;font-size:14px;line-height:1.42857143;color:#555;background-color:#fff;background-image:none;border:1px solid #ccc;border-radius:10px;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);-webkit-transition:border-color ease-in-out .15s, -webkit-box-shadow ease-in-out .15s;-o-transition:border-color ease-in-out .15s, box-shadow ease-in-out .15s;transition:border-color ease-in-out .15s, box-shadow ease-in-out 
.15s}.form-control:focus{border-color:#66afe9;outline:0;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px rgba(102, 175, 233, 0.6);box-shadow:inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px rgba(102, 175, 233, 0.6)}.form-control::-moz-placeholder{color:#999;opacity:1}.form-control:-ms-input-placeholder{color:#999}.form-control::-webkit-input-placeholder{color:#999}.form-control[disabled],.form-control[readonly],fieldset[disabled] .form-control{background-color:#eee;opacity:1}.form-control[disabled],fieldset[disabled] .form-control{cursor:not-allowed}textarea.form-control{height:auto}input[type="search"]{-webkit-appearance:none}@media screen and (-webkit-min-device-pixel-ratio:0){input[type="date"].form-control,input[type="time"].form-control,input[type="datetime-local"].form-control,input[type="month"].form-control{line-height:34px}input[type="date"].input-sm,input[type="time"].input-sm,input[type="datetime-local"].input-sm,input[type="month"].input-sm,.input-group-sm input[type="date"],.input-group-sm input[type="time"],.input-group-sm input[type="datetime-local"],.input-group-sm input[type="month"]{line-height:30px}input[type="date"].input-lg,input[type="time"].input-lg,input[type="datetime-local"].input-lg,input[type="month"].input-lg,.input-group-lg input[type="date"],.input-group-lg input[type="time"],.input-group-lg input[type="datetime-local"],.input-group-lg input[type="month"]{line-height:46px}}.form-group{margin-bottom:15px}.radio,.checkbox{position:relative;display:block;margin-top:10px;margin-bottom:10px}.radio label,.checkbox label{min-height:20px;padding-left:20px;margin-bottom:0;font-weight:normal;cursor:pointer}.radio input[type="radio"],.radio-inline input[type="radio"],.checkbox input[type="checkbox"],.checkbox-inline input[type="checkbox"]{position:absolute;margin-left:-20px;margin-top:4px \9}.radio+.radio,.checkbox+.checkbox{margin-top:-5px}.radio-inline,.checkbox-inline{position:relative;display:inline-block;padding-left:20px;margin-bottom:0;vertical-align:middle;font-weight:normal;cursor:pointer}.radio-inline+.radio-inline,.checkbox-inline+.checkbox-inline{margin-top:0;margin-left:10px}input[type="radio"][disabled],input[type="checkbox"][disabled],input[type="radio"].disabled,input[type="checkbox"].disabled,fieldset[disabled] input[type="radio"],fieldset[disabled] input[type="checkbox"]{cursor:not-allowed}.radio-inline.disabled,.checkbox-inline.disabled,fieldset[disabled] .radio-inline,fieldset[disabled] .checkbox-inline{cursor:not-allowed}.radio.disabled label,.checkbox.disabled label,fieldset[disabled] .radio label,fieldset[disabled] .checkbox label{cursor:not-allowed}.form-control-static{padding-top:7px;padding-bottom:7px;margin-bottom:0;min-height:34px}.form-control-static.input-lg,.form-control-static.input-sm{padding-left:0;padding-right:0}.input-sm{height:30px;padding:5px 10px;font-size:12px;line-height:1.5;border-radius:8px}select.input-sm{height:30px;line-height:30px}textarea.input-sm,select[multiple].input-sm{height:auto}.form-group-sm .form-control{height:30px;padding:5px 10px;font-size:12px;line-height:1.5;border-radius:8px}.form-group-sm select.form-control{height:30px;line-height:30px}.form-group-sm textarea.form-control,.form-group-sm select[multiple].form-control{height:auto}.form-group-sm .form-control-static{height:30px;min-height:32px;padding:6px 10px;font-size:12px;line-height:1.5}.input-lg{height:46px;padding:10px 
16px;font-size:18px;line-height:1.3333333;border-radius:12px}select.input-lg{height:46px;line-height:46px}textarea.input-lg,select[multiple].input-lg{height:auto}.form-group-lg .form-control{height:46px;padding:10px 16px;font-size:18px;line-height:1.3333333;border-radius:12px}.form-group-lg select.form-control{height:46px;line-height:46px}.form-group-lg textarea.form-control,.form-group-lg select[multiple].form-control{height:auto}.form-group-lg .form-control-static{height:46px;min-height:38px;padding:11px 16px;font-size:18px;line-height:1.3333333}.has-feedback{position:relative}.has-feedback .form-control{padding-right:42.5px}.form-control-feedback{position:absolute;top:0;right:0;z-index:2;display:block;width:34px;height:34px;line-height:34px;text-align:center;pointer-events:none}.input-lg+.form-control-feedback,.input-group-lg+.form-control-feedback,.form-group-lg .form-control+.form-control-feedback{width:46px;height:46px;line-height:46px}.input-sm+.form-control-feedback,.input-group-sm+.form-control-feedback,.form-group-sm .form-control+.form-control-feedback{width:30px;height:30px;line-height:30px}.has-success .help-block,.has-success .control-label,.has-success .radio,.has-success .checkbox,.has-success .radio-inline,.has-success .checkbox-inline,.has-success.radio label,.has-success.checkbox label,.has-success.radio-inline label,.has-success.checkbox-inline label{color:#159876}.has-success .form-control{border-color:#159876;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075)}.has-success .form-control:focus{border-color:#0f6b53;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #31e2b4;box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #31e2b4}.has-success .input-group-addon{color:#159876;border-color:#159876;background-color:#cdf8ed}.has-success .form-control-feedback{color:#159876}.has-warning .help-block,.has-warning .control-label,.has-warning .radio,.has-warning .checkbox,.has-warning .radio-inline,.has-warning .checkbox-inline,.has-warning.radio label,.has-warning.checkbox label,.has-warning.radio-inline label,.has-warning.checkbox-inline label{color:#fd820a}.has-warning .form-control{border-color:#fd820a;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075)}.has-warning .form-control:focus{border-color:#d26902;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #feb66f;box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #feb66f}.has-warning .input-group-addon{color:#fd820a;border-color:#fd820a;background-color:#ffe9d4}.has-warning .form-control-feedback{color:#fd820a}.has-error .help-block,.has-error .control-label,.has-error .radio,.has-error .checkbox,.has-error .radio-inline,.has-error .checkbox-inline,.has-error.radio label,.has-error.checkbox label,.has-error.radio-inline label,.has-error.checkbox-inline label{color:#eb172e}.has-error .form-control{border-color:#eb172e;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075);box-shadow:inset 0 1px 1px rgba(0,0,0,0.075)}.has-error .form-control:focus{border-color:#bf1023;-webkit-box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #f37583;box-shadow:inset 0 1px 1px rgba(0,0,0,0.075),0 0 6px #f37583}.has-error .input-group-addon{color:#eb172e;border-color:#eb172e;background-color:#fbd3d7}.has-error .form-control-feedback{color:#eb172e}.has-feedback label~.form-control-feedback{top:25px}.has-feedback 
label.sr-only~.form-control-feedback{top:0}.help-block{display:block;margin-top:5px;margin-bottom:10px;color:#737373}@media (min-width:768px){.form-inline .form-group{display:inline-block;margin-bottom:0;vertical-align:middle}.form-inline .form-control{display:inline-block;width:auto;vertical-align:middle}.form-inline .form-control-static{display:inline-block}.form-inline .input-group{display:inline-table;vertical-align:middle}.form-inline .input-group .input-group-addon,.form-inline .input-group .input-group-btn,.form-inline .input-group .form-control{width:auto}.form-inline .input-group>.form-control{width:100%}.form-inline .control-label{margin-bottom:0;vertical-align:middle}.form-inline .radio,.form-inline .checkbox{display:inline-block;margin-top:0;margin-bottom:0;vertical-align:middle}.form-inline .radio label,.form-inline .checkbox label{padding-left:0}.form-inline .radio input[type="radio"],.form-inline .checkbox input[type="checkbox"]{position:relative;margin-left:0}.form-inline .has-feedback .form-control-feedback{top:0}}.form-horizontal .radio,.form-horizontal .checkbox,.form-horizontal .radio-inline,.form-horizontal .checkbox-inline{margin-top:0;margin-bottom:0;padding-top:7px}.form-horizontal .radio,.form-horizontal .checkbox{min-height:27px}.form-horizontal .form-group{margin-left:-15px;margin-right:-15px}@media (min-width:768px){.form-horizontal .control-label{text-align:right;margin-bottom:0;padding-top:7px}}.form-horizontal .has-feedback .form-control-feedback{right:15px}@media (min-width:768px){.form-horizontal .form-group-lg .control-label{padding-top:14.333333px;font-size:18px}}@media (min-width:768px){.form-horizontal .form-group-sm .control-label{padding-top:6px;font-size:12px}}.btn{display:inline-block;margin-bottom:0;font-weight:normal;text-align:center;vertical-align:middle;-ms-touch-action:manipulation;touch-action:manipulation;cursor:pointer;background-image:none;border:1px solid transparent;white-space:nowrap;padding:6px 12px;font-size:14px;line-height:1.42857143;border-radius:10px;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.btn:focus,.btn:active:focus,.btn.active:focus,.btn.focus,.btn:active.focus,.btn.active.focus{outline:thin dotted;outline:5px auto -webkit-focus-ring-color;outline-offset:-2px}.btn:hover,.btn:focus,.btn.focus{color:#333;text-decoration:none}.btn:active,.btn.active{outline:0;background-image:none;-webkit-box-shadow:inset 0 3px 5px rgba(0,0,0,0.125);box-shadow:inset 0 3px 5px rgba(0,0,0,0.125)}.btn.disabled,.btn[disabled],fieldset[disabled] .btn{cursor:not-allowed;opacity:.65;filter:alpha(opacity=65);-webkit-box-shadow:none;box-shadow:none}a.btn.disabled,fieldset[disabled] 
a.btn{pointer-events:none}.btn-default{color:#333;background-color:#fff;border-color:#ccc}.btn-default:focus,.btn-default.focus{color:#333;background-color:#e6e6e6;border-color:#8c8c8c}.btn-default:hover{color:#333;background-color:#e6e6e6;border-color:#adadad}.btn-default:active,.btn-default.active,.open>.dropdown-toggle.btn-default{color:#333;background-color:#e6e6e6;border-color:#adadad}.btn-default:active:hover,.btn-default.active:hover,.open>.dropdown-toggle.btn-default:hover,.btn-default:active:focus,.btn-default.active:focus,.open>.dropdown-toggle.btn-default:focus,.btn-default:active.focus,.btn-default.active.focus,.open>.dropdown-toggle.btn-default.focus{color:#333;background-color:#d4d4d4;border-color:#8c8c8c}.btn-default:active,.btn-default.active,.open>.dropdown-toggle.btn-default{background-image:none}.btn-default.disabled,.btn-default[disabled],fieldset[disabled] .btn-default,.btn-default.disabled:hover,.btn-default[disabled]:hover,fieldset[disabled] .btn-default:hover,.btn-default.disabled:focus,.btn-default[disabled]:focus,fieldset[disabled] .btn-default:focus,.btn-default.disabled.focus,.btn-default[disabled].focus,fieldset[disabled] .btn-default.focus,.btn-default.disabled:active,.btn-default[disabled]:active,fieldset[disabled] .btn-default:active,.btn-default.disabled.active,.btn-default[disabled].active,fieldset[disabled] .btn-default.active{background-color:#fff;border-color:#ccc}.btn-default .badge{color:#fff;background-color:#333}.btn-primary{color:#fff;background-color:#ffc700;border-color:#f0bb00}.btn-primary:focus,.btn-primary.focus{color:#fff;background-color:#cc9f00;border-color:#705800}.btn-primary:hover{color:#fff;background-color:#cc9f00;border-color:#b38b00}.btn-primary:active,.btn-primary.active,.open>.dropdown-toggle.btn-primary{color:#fff;background-color:#cc9f00;border-color:#b38b00}.btn-primary:active:hover,.btn-primary.active:hover,.open>.dropdown-toggle.btn-primary:hover,.btn-primary:active:focus,.btn-primary.active:focus,.open>.dropdown-toggle.btn-primary:focus,.btn-primary:active.focus,.btn-primary.active.focus,.open>.dropdown-toggle.btn-primary.focus{color:#fff;background-color:#a88300;border-color:#705800}.btn-primary:active,.btn-primary.active,.open>.dropdown-toggle.btn-primary{background-image:none}.btn-primary.disabled,.btn-primary[disabled],fieldset[disabled] .btn-primary,.btn-primary.disabled:hover,.btn-primary[disabled]:hover,fieldset[disabled] .btn-primary:hover,.btn-primary.disabled:focus,.btn-primary[disabled]:focus,fieldset[disabled] .btn-primary:focus,.btn-primary.disabled.focus,.btn-primary[disabled].focus,fieldset[disabled] .btn-primary.focus,.btn-primary.disabled:active,.btn-primary[disabled]:active,fieldset[disabled] .btn-primary:active,.btn-primary.disabled.active,.btn-primary[disabled].active,fieldset[disabled] .btn-primary.active{background-color:#ffc700;border-color:#f0bb00}.btn-primary 
.badge{color:#ffc700;background-color:#fff}.btn-success{color:#fff;background-color:#159876;border-color:#128265}.btn-success:focus,.btn-success.focus{color:#fff;background-color:#0f6b53;border-color:#02120e}.btn-success:hover{color:#fff;background-color:#0f6b53;border-color:#0a4c3b}.btn-success:active,.btn-success.active,.open>.dropdown-toggle.btn-success{color:#fff;background-color:#0f6b53;border-color:#0a4c3b}.btn-success:active:hover,.btn-success.active:hover,.open>.dropdown-toggle.btn-success:hover,.btn-success:active:focus,.btn-success.active:focus,.open>.dropdown-toggle.btn-success:focus,.btn-success:active.focus,.btn-success.active.focus,.open>.dropdown-toggle.btn-success.focus{color:#fff;background-color:#0a4c3b;border-color:#02120e}.btn-success:active,.btn-success.active,.open>.dropdown-toggle.btn-success{background-image:none}.btn-success.disabled,.btn-success[disabled],fieldset[disabled] .btn-success,.btn-success.disabled:hover,.btn-success[disabled]:hover,fieldset[disabled] .btn-success:hover,.btn-success.disabled:focus,.btn-success[disabled]:focus,fieldset[disabled] .btn-success:focus,.btn-success.disabled.focus,.btn-success[disabled].focus,fieldset[disabled] .btn-success.focus,.btn-success.disabled:active,.btn-success[disabled]:active,fieldset[disabled] .btn-success:active,.btn-success.disabled.active,.btn-success[disabled].active,fieldset[disabled] .btn-success.active{background-color:#159876;border-color:#128265}.btn-success .badge{color:#159876;background-color:#fff}.btn-info{color:#fff;background-color:#2c3a80;border-color:#25316d}.btn-info:focus,.btn-info.focus{color:#fff;background-color:#1f295a;border-color:#05060e}.btn-info:hover{color:#fff;background-color:#1f295a;border-color:#161d3f}.btn-info:active,.btn-info.active,.open>.dropdown-toggle.btn-info{color:#fff;background-color:#1f295a;border-color:#161d3f}.btn-info:active:hover,.btn-info.active:hover,.open>.dropdown-toggle.btn-info:hover,.btn-info:active:focus,.btn-info.active:focus,.open>.dropdown-toggle.btn-info:focus,.btn-info:active.focus,.btn-info.active.focus,.open>.dropdown-toggle.btn-info.focus{color:#fff;background-color:#161d3f;border-color:#05060e}.btn-info:active,.btn-info.active,.open>.dropdown-toggle.btn-info{background-image:none}.btn-info.disabled,.btn-info[disabled],fieldset[disabled] .btn-info,.btn-info.disabled:hover,.btn-info[disabled]:hover,fieldset[disabled] .btn-info:hover,.btn-info.disabled:focus,.btn-info[disabled]:focus,fieldset[disabled] .btn-info:focus,.btn-info.disabled.focus,.btn-info[disabled].focus,fieldset[disabled] .btn-info.focus,.btn-info.disabled:active,.btn-info[disabled]:active,fieldset[disabled] .btn-info:active,.btn-info.disabled.active,.btn-info[disabled].active,fieldset[disabled] .btn-info.active{background-color:#2c3a80;border-color:#25316d}.btn-info 
.badge{color:#2c3a80;background-color:#fff}.btn-warning{color:#fff;background-color:#fd820a;border-color:#ec7502}.btn-warning:focus,.btn-warning.focus{color:#fff;background-color:#d26902;border-color:#6d3601}.btn-warning:hover{color:#fff;background-color:#d26902;border-color:#af5701}.btn-warning:active,.btn-warning.active,.open>.dropdown-toggle.btn-warning{color:#fff;background-color:#d26902;border-color:#af5701}.btn-warning:active:hover,.btn-warning.active:hover,.open>.dropdown-toggle.btn-warning:hover,.btn-warning:active:focus,.btn-warning.active:focus,.open>.dropdown-toggle.btn-warning:focus,.btn-warning:active.focus,.btn-warning.active.focus,.open>.dropdown-toggle.btn-warning.focus{color:#fff;background-color:#af5701;border-color:#6d3601}.btn-warning:active,.btn-warning.active,.open>.dropdown-toggle.btn-warning{background-image:none}.btn-warning.disabled,.btn-warning[disabled],fieldset[disabled] .btn-warning,.btn-warning.disabled:hover,.btn-warning[disabled]:hover,fieldset[disabled] .btn-warning:hover,.btn-warning.disabled:focus,.btn-warning[disabled]:focus,fieldset[disabled] .btn-warning:focus,.btn-warning.disabled.focus,.btn-warning[disabled].focus,fieldset[disabled] .btn-warning.focus,.btn-warning.disabled:active,.btn-warning[disabled]:active,fieldset[disabled] .btn-warning:active,.btn-warning.disabled.active,.btn-warning[disabled].active,fieldset[disabled] .btn-warning.active{background-color:#fd820a;border-color:#ec7502}.btn-warning .badge{color:#fd820a;background-color:#fff}.btn-danger{color:#fff;background-color:#eb172e;border-color:#d61228}.btn-danger:focus,.btn-danger.focus{color:#fff;background-color:#bf1023;border-color:#610812}.btn-danger:hover{color:#fff;background-color:#bf1023;border-color:#9e0e1d}.btn-danger:active,.btn-danger.active,.open>.dropdown-toggle.btn-danger{color:#fff;background-color:#bf1023;border-color:#9e0e1d}.btn-danger:active:hover,.btn-danger.active:hover,.open>.dropdown-toggle.btn-danger:hover,.btn-danger:active:focus,.btn-danger.active:focus,.open>.dropdown-toggle.btn-danger:focus,.btn-danger:active.focus,.btn-danger.active.focus,.open>.dropdown-toggle.btn-danger.focus{color:#fff;background-color:#9e0e1d;border-color:#610812}.btn-danger:active,.btn-danger.active,.open>.dropdown-toggle.btn-danger{background-image:none}.btn-danger.disabled,.btn-danger[disabled],fieldset[disabled] .btn-danger,.btn-danger.disabled:hover,.btn-danger[disabled]:hover,fieldset[disabled] .btn-danger:hover,.btn-danger.disabled:focus,.btn-danger[disabled]:focus,fieldset[disabled] .btn-danger:focus,.btn-danger.disabled.focus,.btn-danger[disabled].focus,fieldset[disabled] .btn-danger.focus,.btn-danger.disabled:active,.btn-danger[disabled]:active,fieldset[disabled] .btn-danger:active,.btn-danger.disabled.active,.btn-danger[disabled].active,fieldset[disabled] .btn-danger.active{background-color:#eb172e;border-color:#d61228}.btn-danger .badge{color:#eb172e;background-color:#fff}.btn-link{color:#2c3a80;font-weight:normal;border-radius:0}.btn-link,.btn-link:active,.btn-link.active,.btn-link[disabled],fieldset[disabled] .btn-link{background-color:transparent;-webkit-box-shadow:none;box-shadow:none}.btn-link,.btn-link:hover,.btn-link:focus,.btn-link:active{border-color:transparent}.btn-link:hover,.btn-link:focus{color:#25316d;text-decoration:underline;background-color:transparent}.btn-link[disabled]:hover,fieldset[disabled] .btn-link:hover,.btn-link[disabled]:focus,fieldset[disabled] .btn-link:focus{color:#777;text-decoration:none}.btn-lg,.btn-group-lg>.btn{padding:10px 
16px;font-size:18px;line-height:1.3333333;border-radius:12px}.btn-sm,.btn-group-sm>.btn{padding:5px 10px;font-size:12px;line-height:1.5;border-radius:8px}.btn-xs,.btn-group-xs>.btn{padding:1px 5px;font-size:12px;line-height:1.5;border-radius:8px}.btn-block{display:block;width:100%}.btn-block+.btn-block{margin-top:5px}input[type="submit"].btn-block,input[type="reset"].btn-block,input[type="button"].btn-block{width:100%}.fade{opacity:0;-webkit-transition:opacity .15s linear;-o-transition:opacity .15s linear;transition:opacity .15s linear}.fade.in{opacity:1}.collapse{display:none}.collapse.in{display:block}tr.collapse.in{display:table-row}tbody.collapse.in{display:table-row-group}.collapsing{position:relative;height:0;overflow:hidden;-webkit-transition-property:height, visibility;-o-transition-property:height, visibility;transition-property:height, visibility;-webkit-transition-duration:.35s;-o-transition-duration:.35s;transition-duration:.35s;-webkit-transition-timing-function:ease;-o-transition-timing-function:ease;transition-timing-function:ease}.caret{display:inline-block;width:0;height:0;margin-left:2px;vertical-align:middle;border-top:4px dashed;border-top:4px solid \9;border-right:4px solid transparent;border-left:4px solid transparent}.dropup,.dropdown{position:relative}.dropdown-toggle:focus{outline:0}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:160px;padding:5px 0;margin:2px 0 0;list-style:none;font-size:14px;text-align:left;background-color:#fff;border:1px solid #ccc;border:1px solid rgba(0,0,0,0.15);border-radius:10px;-webkit-box-shadow:0 6px 12px rgba(0,0,0,0.175);box-shadow:0 6px 12px rgba(0,0,0,0.175);-webkit-background-clip:padding-box;background-clip:padding-box}.dropdown-menu.pull-right{right:0;left:auto}.dropdown-menu .divider{height:1px;margin:9px 0;overflow:hidden;background-color:#e5e5e5}.dropdown-menu>li>a{display:block;padding:3px 20px;clear:both;font-weight:normal;line-height:1.42857143;color:#333;white-space:nowrap}.dropdown-menu>li>a:hover,.dropdown-menu>li>a:focus{text-decoration:none;color:#262626;background-color:#f5f5f5}.dropdown-menu>.active>a,.dropdown-menu>.active>a:hover,.dropdown-menu>.active>a:focus{color:#fff;text-decoration:none;outline:0;background-color:#ffc700}.dropdown-menu>.disabled>a,.dropdown-menu>.disabled>a:hover,.dropdown-menu>.disabled>a:focus{color:#777}.dropdown-menu>.disabled>a:hover,.dropdown-menu>.disabled>a:focus{text-decoration:none;background-color:transparent;background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);cursor:not-allowed}.open>.dropdown-menu{display:block}.open>a{outline:0}.dropdown-menu-right{left:auto;right:0}.dropdown-menu-left{left:0;right:auto}.dropdown-header{display:block;padding:3px 20px;font-size:12px;line-height:1.42857143;color:#777;white-space:nowrap}.dropdown-backdrop{position:fixed;left:0;right:0;bottom:0;top:0;z-index:990}.pull-right>.dropdown-menu{right:0;left:auto}.dropup .caret,.navbar-fixed-bottom .dropdown .caret{border-top:0;border-bottom:4px dashed;border-bottom:4px solid \9;content:""}.dropup .dropdown-menu,.navbar-fixed-bottom .dropdown .dropdown-menu{top:auto;bottom:100%;margin-bottom:2px}@media (min-width:768px){.navbar-right .dropdown-menu{left:auto;right:0}.navbar-right 
.dropdown-menu-left{left:0;right:auto}}.btn-group,.btn-group-vertical{position:relative;display:inline-block;vertical-align:middle}.btn-group>.btn,.btn-group-vertical>.btn{position:relative;float:left}.btn-group>.btn:hover,.btn-group-vertical>.btn:hover,.btn-group>.btn:focus,.btn-group-vertical>.btn:focus,.btn-group>.btn:active,.btn-group-vertical>.btn:active,.btn-group>.btn.active,.btn-group-vertical>.btn.active{z-index:2}.btn-group .btn+.btn,.btn-group .btn+.btn-group,.btn-group .btn-group+.btn,.btn-group .btn-group+.btn-group{margin-left:-1px}.btn-toolbar{margin-left:-5px}.btn-toolbar .btn,.btn-toolbar .btn-group,.btn-toolbar .input-group{float:left}.btn-toolbar>.btn,.btn-toolbar>.btn-group,.btn-toolbar>.input-group{margin-left:5px}.btn-group>.btn:not(:first-child):not(:last-child):not(.dropdown-toggle){border-radius:0}.btn-group>.btn:first-child{margin-left:0}.btn-group>.btn:first-child:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-top-right-radius:0}.btn-group>.btn:last-child:not(:first-child),.btn-group>.dropdown-toggle:not(:first-child){border-bottom-left-radius:0;border-top-left-radius:0}.btn-group>.btn-group{float:left}.btn-group>.btn-group:not(:first-child):not(:last-child)>.btn{border-radius:0}.btn-group>.btn-group:first-child:not(:last-child)>.btn:last-child,.btn-group>.btn-group:first-child:not(:last-child)>.dropdown-toggle{border-bottom-right-radius:0;border-top-right-radius:0}.btn-group>.btn-group:last-child:not(:first-child)>.btn:first-child{border-bottom-left-radius:0;border-top-left-radius:0}.btn-group .dropdown-toggle:active,.btn-group.open .dropdown-toggle{outline:0}.btn-group>.btn+.dropdown-toggle{padding-left:8px;padding-right:8px}.btn-group>.btn-lg+.dropdown-toggle{padding-left:12px;padding-right:12px}.btn-group.open .dropdown-toggle{-webkit-box-shadow:inset 0 3px 5px rgba(0,0,0,0.125);box-shadow:inset 0 3px 5px rgba(0,0,0,0.125)}.btn-group.open .dropdown-toggle.btn-link{-webkit-box-shadow:none;box-shadow:none}.btn .caret{margin-left:0}.btn-lg .caret{border-width:5px 5px 0;border-bottom-width:0}.dropup .btn-lg .caret{border-width:0 5px 5px}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group,.btn-group-vertical>.btn-group>.btn{display:block;float:none;width:100%;max-width:100%}.btn-group-vertical>.btn-group>.btn{float:none}.btn-group-vertical>.btn+.btn,.btn-group-vertical>.btn+.btn-group,.btn-group-vertical>.btn-group+.btn,.btn-group-vertical>.btn-group+.btn-group{margin-top:-1px;margin-left:0}.btn-group-vertical>.btn:not(:first-child):not(:last-child){border-radius:0}.btn-group-vertical>.btn:first-child:not(:last-child){border-top-right-radius:10px;border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn:last-child:not(:first-child){border-bottom-left-radius:10px;border-top-right-radius:0;border-top-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child):not(:last-child)>.btn{border-radius:0}.btn-group-vertical>.btn-group:first-child:not(:last-child)>.btn:last-child,.btn-group-vertical>.btn-group:first-child:not(:last-child)>.dropdown-toggle{border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:last-child:not(:first-child)>.btn:first-child{border-top-right-radius:0;border-top-left-radius:0}.btn-group-justified{display:table;width:100%;table-layout:fixed;border-collapse:separate}.btn-group-justified>.btn,.btn-group-justified>.btn-group{float:none;display:table-cell;width:1%}.btn-group-justified>.btn-group .btn{width:100%}.btn-group-justified>.btn-group 
.dropdown-menu{left:auto}[data-toggle="buttons"]>.btn input[type="radio"],[data-toggle="buttons"]>.btn-group>.btn input[type="radio"],[data-toggle="buttons"]>.btn input[type="checkbox"],[data-toggle="buttons"]>.btn-group>.btn input[type="checkbox"]{position:absolute;clip:rect(0, 0, 0, 0);pointer-events:none}.input-group{position:relative;display:table;border-collapse:separate}.input-group[class*="col-"]{float:none;padding-left:0;padding-right:0}.input-group .form-control{position:relative;z-index:2;float:left;width:100%;margin-bottom:0}.input-group-lg>.form-control,.input-group-lg>.input-group-addon,.input-group-lg>.input-group-btn>.btn{height:46px;padding:10px 16px;font-size:18px;line-height:1.3333333;border-radius:12px}select.input-group-lg>.form-control,select.input-group-lg>.input-group-addon,select.input-group-lg>.input-group-btn>.btn{height:46px;line-height:46px}textarea.input-group-lg>.form-control,textarea.input-group-lg>.input-group-addon,textarea.input-group-lg>.input-group-btn>.btn,select[multiple].input-group-lg>.form-control,select[multiple].input-group-lg>.input-group-addon,select[multiple].input-group-lg>.input-group-btn>.btn{height:auto}.input-group-sm>.form-control,.input-group-sm>.input-group-addon,.input-group-sm>.input-group-btn>.btn{height:30px;padding:5px 10px;font-size:12px;line-height:1.5;border-radius:8px}select.input-group-sm>.form-control,select.input-group-sm>.input-group-addon,select.input-group-sm>.input-group-btn>.btn{height:30px;line-height:30px}textarea.input-group-sm>.form-control,textarea.input-group-sm>.input-group-addon,textarea.input-group-sm>.input-group-btn>.btn,select[multiple].input-group-sm>.form-control,select[multiple].input-group-sm>.input-group-addon,select[multiple].input-group-sm>.input-group-btn>.btn{height:auto}.input-group-addon,.input-group-btn,.input-group .form-control{display:table-cell}.input-group-addon:not(:first-child):not(:last-child),.input-group-btn:not(:first-child):not(:last-child),.input-group .form-control:not(:first-child):not(:last-child){border-radius:0}.input-group-addon,.input-group-btn{width:1%;white-space:nowrap;vertical-align:middle}.input-group-addon{padding:6px 12px;font-size:14px;font-weight:normal;line-height:1;color:#555;text-align:center;background-color:#eee;border:1px solid #ccc;border-radius:10px}.input-group-addon.input-sm{padding:5px 10px;font-size:12px;border-radius:8px}.input-group-addon.input-lg{padding:10px 16px;font-size:18px;border-radius:12px}.input-group-addon input[type="radio"],.input-group-addon input[type="checkbox"]{margin-top:0}.input-group .form-control:first-child,.input-group-addon:first-child,.input-group-btn:first-child>.btn,.input-group-btn:first-child>.btn-group>.btn,.input-group-btn:first-child>.dropdown-toggle,.input-group-btn:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group-btn:last-child>.btn-group:not(:last-child)>.btn{border-bottom-right-radius:0;border-top-right-radius:0}.input-group-addon:first-child{border-right:0}.input-group 
.form-control:last-child,.input-group-addon:last-child,.input-group-btn:last-child>.btn,.input-group-btn:last-child>.btn-group>.btn,.input-group-btn:last-child>.dropdown-toggle,.input-group-btn:first-child>.btn:not(:first-child),.input-group-btn:first-child>.btn-group:not(:first-child)>.btn{border-bottom-left-radius:0;border-top-left-radius:0}.input-group-addon:last-child{border-left:0}.input-group-btn{position:relative;font-size:0;white-space:nowrap}.input-group-btn>.btn{position:relative}.input-group-btn>.btn+.btn{margin-left:-1px}.input-group-btn>.btn:hover,.input-group-btn>.btn:focus,.input-group-btn>.btn:active{z-index:2}.input-group-btn:first-child>.btn,.input-group-btn:first-child>.btn-group{margin-right:-1px}.input-group-btn:last-child>.btn,.input-group-btn:last-child>.btn-group{z-index:2;margin-left:-1px}.nav{margin-bottom:0;padding-left:0;list-style:none}.nav>li{position:relative;display:block}.nav>li>a{position:relative;display:block;padding:10px 15px}.nav>li>a:hover,.nav>li>a:focus{text-decoration:none;background-color:#eee}.nav>li.disabled>a{color:#777}.nav>li.disabled>a:hover,.nav>li.disabled>a:focus{color:#777;text-decoration:none;background-color:transparent;cursor:not-allowed}.nav .open>a,.nav .open>a:hover,.nav .open>a:focus{background-color:#eee;border-color:#2c3a80}.nav .nav-divider{height:1px;margin:9px 0;overflow:hidden;background-color:#e5e5e5}.nav>li>a>img{max-width:none}.nav-tabs{border-bottom:1px solid #ddd}.nav-tabs>li{float:left;margin-bottom:-1px}.nav-tabs>li>a{margin-right:2px;line-height:1.42857143;border:1px solid transparent;border-radius:10px 10px 0 0}.nav-tabs>li>a:hover{border-color:#eee #eee #ddd}.nav-tabs>li.active>a,.nav-tabs>li.active>a:hover,.nav-tabs>li.active>a:focus{color:#555;background-color:#fafafa;border:1px solid #ddd;border-bottom-color:transparent;cursor:default}.nav-tabs.nav-justified{width:100%;border-bottom:0}.nav-tabs.nav-justified>li{float:none}.nav-tabs.nav-justified>li>a{text-align:center;margin-bottom:5px}.nav-tabs.nav-justified>.dropdown .dropdown-menu{top:auto;left:auto}@media (min-width:768px){.nav-tabs.nav-justified>li{display:table-cell;width:1%}.nav-tabs.nav-justified>li>a{margin-bottom:0}}.nav-tabs.nav-justified>li>a{margin-right:0;border-radius:10px}.nav-tabs.nav-justified>.active>a,.nav-tabs.nav-justified>.active>a:hover,.nav-tabs.nav-justified>.active>a:focus{border:1px solid #ddd}@media (min-width:768px){.nav-tabs.nav-justified>li>a{border-bottom:1px solid #ddd;border-radius:10px 10px 0 0}.nav-tabs.nav-justified>.active>a,.nav-tabs.nav-justified>.active>a:hover,.nav-tabs.nav-justified>.active>a:focus{border-bottom-color:#fafafa}}.nav-pills>li{float:left}.nav-pills>li>a{border-radius:10px}.nav-pills>li+li{margin-left:2px}.nav-pills>li.active>a,.nav-pills>li.active>a:hover,.nav-pills>li.active>a:focus{color:#fff;background-color:#ffc700}.nav-stacked>li{float:none}.nav-stacked>li+li{margin-top:2px;margin-left:0}.nav-justified{width:100%}.nav-justified>li{float:none}.nav-justified>li>a{text-align:center;margin-bottom:5px}.nav-justified>.dropdown .dropdown-menu{top:auto;left:auto}@media (min-width:768px){.nav-justified>li{display:table-cell;width:1%}.nav-justified>li>a{margin-bottom:0}}.nav-tabs-justified{border-bottom:0}.nav-tabs-justified>li>a{margin-right:0;border-radius:10px}.nav-tabs-justified>.active>a,.nav-tabs-justified>.active>a:hover,.nav-tabs-justified>.active>a:focus{border:1px solid #ddd}@media (min-width:768px){.nav-tabs-justified>li>a{border-bottom:1px solid #ddd;border-radius:10px 10px 0 
0}.nav-tabs-justified>.active>a,.nav-tabs-justified>.active>a:hover,.nav-tabs-justified>.active>a:focus{border-bottom-color:#fafafa}}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-right-radius:0;border-top-left-radius:0}.navbar{position:relative;min-height:60px;margin-bottom:0;border:1px solid transparent}@media (min-width:768px){.navbar{border-radius:0}}@media (min-width:768px){.navbar-header{float:left}}.navbar-collapse{overflow-x:visible;padding-right:15px;padding-left:15px;border-top:1px solid transparent;-webkit-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1);box-shadow:inset 0 1px 0 rgba(255,255,255,0.1);-webkit-overflow-scrolling:touch}.navbar-collapse.in{overflow-y:auto}@media (min-width:768px){.navbar-collapse{width:auto;border-top:0;-webkit-box-shadow:none;box-shadow:none}.navbar-collapse.collapse{display:block !important;height:auto !important;padding-bottom:0;overflow:visible !important}.navbar-collapse.in{overflow-y:visible}.navbar-fixed-top .navbar-collapse,.navbar-static-top .navbar-collapse,.navbar-fixed-bottom .navbar-collapse{padding-left:0;padding-right:0}}.navbar-fixed-top .navbar-collapse,.navbar-fixed-bottom .navbar-collapse{max-height:340px}@media (max-device-width:480px) and (orientation:landscape){.navbar-fixed-top .navbar-collapse,.navbar-fixed-bottom .navbar-collapse{max-height:200px}}.container>.navbar-header,.container-fluid>.navbar-header,.container>.navbar-collapse,.container-fluid>.navbar-collapse{margin-right:-15px;margin-left:-15px}@media (min-width:768px){.container>.navbar-header,.container-fluid>.navbar-header,.container>.navbar-collapse,.container-fluid>.navbar-collapse{margin-right:0;margin-left:0}}.navbar-static-top{z-index:1000;border-width:0 0 1px}@media (min-width:768px){.navbar-static-top{border-radius:0}}.navbar-fixed-top,.navbar-fixed-bottom{position:fixed;right:0;left:0;z-index:1030}@media (min-width:768px){.navbar-fixed-top,.navbar-fixed-bottom{border-radius:0}}.navbar-fixed-top{top:0;border-width:0 0 1px}.navbar-fixed-bottom{bottom:0;margin-bottom:0;border-width:1px 0 0}.navbar-brand{float:left;padding:20px 15px;font-size:18px;line-height:20px;height:60px}.navbar-brand:hover,.navbar-brand:focus{text-decoration:none}.navbar-brand>img{display:block}@media (min-width:768px){.navbar>.container .navbar-brand,.navbar>.container-fluid .navbar-brand{margin-left:-15px}}.navbar-toggle{position:relative;float:right;margin-right:15px;padding:9px 10px;margin-top:13px;margin-bottom:13px;background-color:transparent;background-image:none;border:1px solid transparent;border-radius:10px}.navbar-toggle:focus{outline:0}.navbar-toggle .icon-bar{display:block;width:22px;height:2px;border-radius:1px}.navbar-toggle .icon-bar+.icon-bar{margin-top:4px}@media (min-width:768px){.navbar-toggle{display:none}}.navbar-nav{margin:10px -15px}.navbar-nav>li>a{padding-top:10px;padding-bottom:10px;line-height:20px}@media (max-width:767px){.navbar-nav .open .dropdown-menu{position:static;float:none;width:auto;margin-top:0;background-color:transparent;border:0;-webkit-box-shadow:none;box-shadow:none}.navbar-nav .open .dropdown-menu>li>a,.navbar-nav .open .dropdown-menu .dropdown-header{padding:5px 15px 5px 25px}.navbar-nav .open .dropdown-menu>li>a{line-height:20px}.navbar-nav .open .dropdown-menu>li>a:hover,.navbar-nav .open .dropdown-menu>li>a:focus{background-image:none}}@media 
(min-width:768px){.navbar-nav{float:left;margin:0}.navbar-nav>li{float:left}.navbar-nav>li>a{padding-top:20px;padding-bottom:20px}}.navbar-form{margin-left:-15px;margin-right:-15px;padding:10px 15px;border-top:1px solid transparent;border-bottom:1px solid transparent;-webkit-box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.1);box-shadow:inset 0 1px 0 rgba(255,255,255,0.1),0 1px 0 rgba(255,255,255,0.1);margin-top:13px;margin-bottom:13px}@media (min-width:768px){.navbar-form .form-group{display:inline-block;margin-bottom:0;vertical-align:middle}.navbar-form .form-control{display:inline-block;width:auto;vertical-align:middle}.navbar-form .form-control-static{display:inline-block}.navbar-form .input-group{display:inline-table;vertical-align:middle}.navbar-form .input-group .input-group-addon,.navbar-form .input-group .input-group-btn,.navbar-form .input-group .form-control{width:auto}.navbar-form .input-group>.form-control{width:100%}.navbar-form .control-label{margin-bottom:0;vertical-align:middle}.navbar-form .radio,.navbar-form .checkbox{display:inline-block;margin-top:0;margin-bottom:0;vertical-align:middle}.navbar-form .radio label,.navbar-form .checkbox label{padding-left:0}.navbar-form .radio input[type="radio"],.navbar-form .checkbox input[type="checkbox"]{position:relative;margin-left:0}.navbar-form .has-feedback .form-control-feedback{top:0}}@media (max-width:767px){.navbar-form .form-group{margin-bottom:5px}.navbar-form .form-group:last-child{margin-bottom:0}}@media (min-width:768px){.navbar-form{width:auto;border:0;margin-left:0;margin-right:0;padding-top:0;padding-bottom:0;-webkit-box-shadow:none;box-shadow:none}}.navbar-nav>li>.dropdown-menu{margin-top:0;border-top-right-radius:0;border-top-left-radius:0}.navbar-fixed-bottom .navbar-nav>li>.dropdown-menu{margin-bottom:0;border-top-right-radius:0;border-top-left-radius:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.navbar-btn{margin-top:13px;margin-bottom:13px}.navbar-btn.btn-sm{margin-top:15px;margin-bottom:15px}.navbar-btn.btn-xs{margin-top:19px;margin-bottom:19px}.navbar-text{margin-top:20px;margin-bottom:20px}@media (min-width:768px){.navbar-text{float:left;margin-left:15px;margin-right:15px}}@media (min-width:768px){.navbar-left{float:left !important}.navbar-right{float:right !important;margin-right:-15px}.navbar-right~.navbar-right{margin-right:0}}.navbar-default{background-color:#1c1d36;border-color:0}.navbar-default .navbar-brand{color:#ffc700}.navbar-default .navbar-brand:hover,.navbar-default .navbar-brand:focus{color:#cc9f00;background-color:transparent}.navbar-default .navbar-text{color:#eee}.navbar-default .navbar-nav>li>a{color:#eee}.navbar-default .navbar-nav>li>a:hover,.navbar-default .navbar-nav>li>a:focus{color:#fff;background-color:#ffd84d}.navbar-default .navbar-nav>.active>a,.navbar-default .navbar-nav>.active>a:hover,.navbar-default .navbar-nav>.active>a:focus{color:#fff;background-color:#ffc700}.navbar-default .navbar-nav>.disabled>a,.navbar-default .navbar-nav>.disabled>a:hover,.navbar-default .navbar-nav>.disabled>a:focus{color:#ccc;background-color:transparent}.navbar-default .navbar-toggle{border-color:#ddd}.navbar-default .navbar-toggle:hover,.navbar-default .navbar-toggle:focus{background-color:#ddd}.navbar-default .navbar-toggle .icon-bar{background-color:#888}.navbar-default .navbar-collapse,.navbar-default .navbar-form{border-color:0}.navbar-default .navbar-nav>.open>a,.navbar-default .navbar-nav>.open>a:hover,.navbar-default 
.navbar-nav>.open>a:focus{background-color:#ffc700;color:#fff}@media (max-width:767px){.navbar-default .navbar-nav .open .dropdown-menu>li>a{color:#eee}.navbar-default .navbar-nav .open .dropdown-menu>li>a:hover,.navbar-default .navbar-nav .open .dropdown-menu>li>a:focus{color:#fff;background-color:#ffd84d}.navbar-default .navbar-nav .open .dropdown-menu>.active>a,.navbar-default .navbar-nav .open .dropdown-menu>.active>a:hover,.navbar-default .navbar-nav .open .dropdown-menu>.active>a:focus{color:#fff;background-color:#ffc700}.navbar-default .navbar-nav .open .dropdown-menu>.disabled>a,.navbar-default .navbar-nav .open .dropdown-menu>.disabled>a:hover,.navbar-default .navbar-nav .open .dropdown-menu>.disabled>a:focus{color:#ccc;background-color:transparent}}.navbar-default .navbar-link{color:#eee}.navbar-default .navbar-link:hover{color:#fff}.navbar-default .btn-link{color:#eee}.navbar-default .btn-link:hover,.navbar-default .btn-link:focus{color:#fff}.navbar-default .btn-link[disabled]:hover,fieldset[disabled] .navbar-default .btn-link:hover,.navbar-default .btn-link[disabled]:focus,fieldset[disabled] .navbar-default .btn-link:focus{color:#ccc}.navbar-inverse{background-color:#222;border-color:#080808}.navbar-inverse .navbar-brand{color:#9d9d9d}.navbar-inverse .navbar-brand:hover,.navbar-inverse .navbar-brand:focus{color:#fff;background-color:transparent}.navbar-inverse .navbar-text{color:#9d9d9d}.navbar-inverse .navbar-nav>li>a{color:#9d9d9d}.navbar-inverse .navbar-nav>li>a:hover,.navbar-inverse .navbar-nav>li>a:focus{color:#fff;background-color:transparent}.navbar-inverse .navbar-nav>.active>a,.navbar-inverse .navbar-nav>.active>a:hover,.navbar-inverse .navbar-nav>.active>a:focus{color:#fff;background-color:#080808}.navbar-inverse .navbar-nav>.disabled>a,.navbar-inverse .navbar-nav>.disabled>a:hover,.navbar-inverse .navbar-nav>.disabled>a:focus{color:#444;background-color:transparent}.navbar-inverse .navbar-toggle{border-color:#333}.navbar-inverse .navbar-toggle:hover,.navbar-inverse .navbar-toggle:focus{background-color:#333}.navbar-inverse .navbar-toggle .icon-bar{background-color:#fff}.navbar-inverse .navbar-collapse,.navbar-inverse .navbar-form{border-color:#101010}.navbar-inverse .navbar-nav>.open>a,.navbar-inverse .navbar-nav>.open>a:hover,.navbar-inverse .navbar-nav>.open>a:focus{background-color:#080808;color:#fff}@media (max-width:767px){.navbar-inverse .navbar-nav .open .dropdown-menu>.dropdown-header{border-color:#080808}.navbar-inverse .navbar-nav .open .dropdown-menu .divider{background-color:#080808}.navbar-inverse .navbar-nav .open .dropdown-menu>li>a{color:#9d9d9d}.navbar-inverse .navbar-nav .open .dropdown-menu>li>a:hover,.navbar-inverse .navbar-nav .open .dropdown-menu>li>a:focus{color:#fff;background-color:transparent}.navbar-inverse .navbar-nav .open .dropdown-menu>.active>a,.navbar-inverse .navbar-nav .open .dropdown-menu>.active>a:hover,.navbar-inverse .navbar-nav .open .dropdown-menu>.active>a:focus{color:#fff;background-color:#080808}.navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a,.navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a:hover,.navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a:focus{color:#444;background-color:transparent}}.navbar-inverse .navbar-link{color:#9d9d9d}.navbar-inverse .navbar-link:hover{color:#fff}.navbar-inverse .btn-link{color:#9d9d9d}.navbar-inverse .btn-link:hover,.navbar-inverse .btn-link:focus{color:#fff}.navbar-inverse .btn-link[disabled]:hover,fieldset[disabled] .navbar-inverse 
.btn-link:hover,.navbar-inverse .btn-link[disabled]:focus,fieldset[disabled] .navbar-inverse .btn-link:focus{color:#444}.breadcrumb{padding:8px 15px;margin-bottom:20px;list-style:none;background-color:#f5f5f5;border-radius:10px}.breadcrumb>li{display:inline-block}.breadcrumb>li+li:before{content:"/\00a0";padding:0 5px;color:#ccc}.breadcrumb>.active{color:#777}.pagination{display:inline-block;padding-left:0;margin:20px 0;border-radius:10px}.pagination>li{display:inline}.pagination>li>a,.pagination>li>span{position:relative;float:left;padding:6px 12px;line-height:1.42857143;text-decoration:none;color:#2c3a80;background-color:#fff;border:1px solid #ddd;margin-left:-1px}.pagination>li:first-child>a,.pagination>li:first-child>span{margin-left:0;border-bottom-left-radius:10px;border-top-left-radius:10px}.pagination>li:last-child>a,.pagination>li:last-child>span{border-bottom-right-radius:10px;border-top-right-radius:10px}.pagination>li>a:hover,.pagination>li>span:hover,.pagination>li>a:focus,.pagination>li>span:focus{z-index:3;color:#25316d;background-color:#eee;border-color:#ddd}.pagination>.active>a,.pagination>.active>span,.pagination>.active>a:hover,.pagination>.active>span:hover,.pagination>.active>a:focus,.pagination>.active>span:focus{z-index:2;color:#fff;background-color:#ffc700;border-color:#ffc700;cursor:default}.pagination>.disabled>span,.pagination>.disabled>span:hover,.pagination>.disabled>span:focus,.pagination>.disabled>a,.pagination>.disabled>a:hover,.pagination>.disabled>a:focus{color:#777;background-color:#fff;border-color:#ddd;cursor:not-allowed}.pagination-lg>li>a,.pagination-lg>li>span{padding:10px 16px;font-size:18px;line-height:1.3333333}.pagination-lg>li:first-child>a,.pagination-lg>li:first-child>span{border-bottom-left-radius:12px;border-top-left-radius:12px}.pagination-lg>li:last-child>a,.pagination-lg>li:last-child>span{border-bottom-right-radius:12px;border-top-right-radius:12px}.pagination-sm>li>a,.pagination-sm>li>span{padding:5px 10px;font-size:12px;line-height:1.5}.pagination-sm>li:first-child>a,.pagination-sm>li:first-child>span{border-bottom-left-radius:8px;border-top-left-radius:8px}.pagination-sm>li:last-child>a,.pagination-sm>li:last-child>span{border-bottom-right-radius:8px;border-top-right-radius:8px}.pager{padding-left:0;margin:20px 0;list-style:none;text-align:center}.pager li{display:inline}.pager li>a,.pager li>span{display:inline-block;padding:5px 14px;background-color:#fff;border:1px solid #ddd;border-radius:15px}.pager li>a:hover,.pager li>a:focus{text-decoration:none;background-color:#eee}.pager .next>a,.pager .next>span{float:right}.pager .previous>a,.pager .previous>span{float:left}.pager .disabled>a,.pager .disabled>a:hover,.pager .disabled>a:focus,.pager .disabled>span{color:#777;background-color:#fff;cursor:not-allowed}.label{display:inline;padding:.2em .6em .3em;font-size:75%;font-weight:bold;line-height:1;color:#fff;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25em}a.label:hover,a.label:focus{color:#fff;text-decoration:none;cursor:pointer}.label:empty{display:none}.btn 
.label{position:relative;top:-1px}.label-default{background-color:#777}.label-default[href]:hover,.label-default[href]:focus{background-color:#5e5e5e}.label-primary{background-color:#ffc700}.label-primary[href]:hover,.label-primary[href]:focus{background-color:#cc9f00}.label-success{background-color:#159876}.label-success[href]:hover,.label-success[href]:focus{background-color:#0f6b53}.label-info{background-color:#2c3a80}.label-info[href]:hover,.label-info[href]:focus{background-color:#1f295a}.label-warning{background-color:#fd820a}.label-warning[href]:hover,.label-warning[href]:focus{background-color:#d26902}.label-danger{background-color:#eb172e}.label-danger[href]:hover,.label-danger[href]:focus{background-color:#bf1023}.badge{display:inline-block;min-width:10px;padding:3px 7px;font-size:12px;font-weight:bold;color:#fff;line-height:1;vertical-align:middle;white-space:nowrap;text-align:center;background-color:#777;border-radius:10px}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.btn-xs .badge,.btn-group-xs>.btn .badge{top:0;padding:1px 5px}a.badge:hover,a.badge:focus{color:#fff;text-decoration:none;cursor:pointer}.list-group-item.active>.badge,.nav-pills>.active>a>.badge{color:#2c3a80;background-color:#fff}.list-group-item>.badge{float:right}.list-group-item>.badge+.badge{margin-right:5px}.nav-pills>li>a>.badge{margin-left:3px}.jumbotron{padding-top:30px;padding-bottom:30px;margin-bottom:30px;color:inherit;background-color:#eee}.jumbotron h1,.jumbotron .h1{color:inherit}.jumbotron p{margin-bottom:15px;font-size:21px;font-weight:200}.jumbotron>hr{border-top-color:#d5d5d5}.container .jumbotron,.container-fluid .jumbotron{border-radius:12px}.jumbotron .container{max-width:100%}@media screen and (min-width:768px){.jumbotron{padding-top:48px;padding-bottom:48px}.container .jumbotron,.container-fluid .jumbotron{padding-left:60px;padding-right:60px}.jumbotron h1,.jumbotron .h1{font-size:63px}}.thumbnail{display:block;padding:4px;margin-bottom:20px;line-height:1.42857143;background-color:#fafafa;border:1px solid #ddd;border-radius:10px;-webkit-transition:border .2s ease-in-out;-o-transition:border .2s ease-in-out;transition:border .2s ease-in-out}.thumbnail>img,.thumbnail a>img{margin-left:auto;margin-right:auto}a.thumbnail:hover,a.thumbnail:focus,a.thumbnail.active{border-color:#2c3a80}.thumbnail .caption{padding:9px;color:#333}.alert{padding:15px;margin-bottom:20px;border:1px solid transparent;border-radius:10px}.alert h4{margin-top:0;color:inherit}.alert .alert-link{font-weight:bold}.alert>p,.alert>ul{margin-bottom:0}.alert>p+p{margin-top:5px}.alert-dismissable,.alert-dismissible{padding-right:35px}.alert-dismissable .close,.alert-dismissible .close{position:relative;top:-2px;right:-21px;color:inherit}.alert-success{background-color:#cdf8ed;border-color:#b7f5db;color:#159876}.alert-success hr{border-top-color:#a1f2cf}.alert-success .alert-link{color:#0f6b53}.alert-info{background-color:#e7eaf7;border-color:#cdd8ee;color:#2c3a80}.alert-info hr{border-top-color:#bac9e7}.alert-info .alert-link{color:#1f295a}.alert-warning{background-color:#ffe9d4;border-color:#ffd8c5;color:#fd820a}.alert-warning hr{border-top-color:#fec7ac}.alert-warning .alert-link{color:#d26902}.alert-danger{background-color:#fbd3d7;border-color:#fac5d3;color:#eb172e}.alert-danger hr{border-top-color:#f8adc2}.alert-danger .alert-link{color:#bf1023}@-webkit-keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}@-o-keyframes 
progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:40px 0}to{background-position:0 0}}.progress{overflow:hidden;height:20px;margin-bottom:20px;background-color:#f5f5f5;border-radius:10px;-webkit-box-shadow:inset 0 1px 2px rgba(0,0,0,0.1);box-shadow:inset 0 1px 2px rgba(0,0,0,0.1)}.progress-bar{float:left;width:0%;height:100%;font-size:12px;line-height:20px;color:#fff;text-align:center;background-color:#ffc700;-webkit-box-shadow:inset 0 -1px 0 rgba(0,0,0,0.15);box-shadow:inset 0 -1px 0 rgba(0,0,0,0.15);-webkit-transition:width .6s ease;-o-transition:width .6s ease;transition:width .6s ease}.progress-striped .progress-bar,.progress-bar-striped{background-image:-webkit-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:-o-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);-webkit-background-size:40px 40px;background-size:40px 40px}.progress.active .progress-bar,.progress-bar.active{-webkit-animation:progress-bar-stripes 2s linear infinite;-o-animation:progress-bar-stripes 2s linear infinite;animation:progress-bar-stripes 2s linear infinite}.progress-bar-success{background-color:#159876}.progress-striped .progress-bar-success{background-image:-webkit-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:-o-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent)}.progress-bar-info{background-color:#2c3a80}.progress-striped .progress-bar-info{background-image:-webkit-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:-o-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent)}.progress-bar-warning{background-color:#fd820a}.progress-striped .progress-bar-warning{background-image:-webkit-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:-o-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, 
transparent)}.progress-bar-danger{background-color:#eb172e}.progress-striped .progress-bar-danger{background-image:-webkit-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:-o-linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent);background-image:linear-gradient(45deg, rgba(255,255,255,0.15) 25%, transparent 25%, transparent 50%, rgba(255,255,255,0.15) 50%, rgba(255,255,255,0.15) 75%, transparent 75%, transparent)}.media{margin-top:15px}.media:first-child{margin-top:0}.media,.media-body{zoom:1;overflow:hidden}.media-body{width:10000px}.media-object{display:block}.media-object.img-thumbnail{max-width:none}.media-right,.media>.pull-right{padding-left:10px}.media-left,.media>.pull-left{padding-right:10px}.media-left,.media-right,.media-body{display:table-cell;vertical-align:top}.media-middle{vertical-align:middle}.media-bottom{vertical-align:bottom}.media-heading{margin-top:0;margin-bottom:5px}.media-list{padding-left:0;list-style:none}.list-group{margin-bottom:20px;padding-left:0}.list-group-item{position:relative;display:block;padding:10px 15px;margin-bottom:-1px;background-color:#fff;border:1px solid #ddd}.list-group-item:first-child{border-top-right-radius:10px;border-top-left-radius:10px}.list-group-item:last-child{margin-bottom:0;border-bottom-right-radius:10px;border-bottom-left-radius:10px}a.list-group-item,button.list-group-item{color:#555}a.list-group-item .list-group-item-heading,button.list-group-item .list-group-item-heading{color:#333}a.list-group-item:hover,button.list-group-item:hover,a.list-group-item:focus,button.list-group-item:focus{text-decoration:none;color:#555;background-color:#f5f5f5}button.list-group-item{width:100%;text-align:left}.list-group-item.disabled,.list-group-item.disabled:hover,.list-group-item.disabled:focus{background-color:#eee;c