commit 3d61e96fa2076171f016529502f7c15dec2cb4a0
author: Praveen Adlakha <adlakha.praveen@gmail.com>  Mon Nov 21 17:14:23 2016 +0530
committer: Pallavi Rao <pallavi.rao@inmobi.com>  Mon Nov 21 17:14:23 2016 +0530
tree: 7636d7bda9fb3cbbdfe7a8417e1040a73e3fa9d7
parent: 03c4ca6fe1ebb5145ecc4f5568b8e87b818baa1a
FALCON-2186 Rest api to get location of an extension

Dev testing:
bin/falcon extension -location -extensionName hdfs-mirroring
file:/usr/local/falcon/falcon-0.11-SNAPSHOT/extensions/hdfs-mirroring

Author: Praveen Adlakha <adlakha.praveen@gmail.com>
Reviewers: @sandeepSamudrala, @pallavi-rao

Closes #301 from PraveenAdlakha/2186 and squashes the following commits:

de27a28 [Praveen Adlakha] comment's addressed
12ef6e9 [Praveen Adlakha] feature changed as per comment
ec02d96 [Praveen Adlakha] FALCON-2186 Rest api to get location of an extension
ead71da [Praveen Adlakha] comment's addressed
e256118 [Praveen Adlakha] checkstyle issues fixed
d5e27e5 [Praveen Adlakha] FALCON-2181 Support for storing metadata of non trusted recipe
f10707f [Praveen Adlakha] comments addressed
b9ee18f [Praveen Adlakha] test cases added and comments addressed
953cd4e [Praveen Adlakha] WIP
decd7e4 [Praveen Adlakha] sandeep's comment addressed
9964ffc [Praveen Adlakha] FALCON-2181 Support for storing metadata of non trusted recipe
Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management on Hadoop clusters.
- Dependencies across various data processing pipelines are not easy to establish. Gaps here typically lead to incorrect or partial processing, or to expensive reprocessing. Defining the same feed multiple times can lead to inconsistencies and issues.
- Input data may not always arrive on time, so processing must be able to kick off without waiting for all data to arrive, with late data accommodated separately.
- Feed management services such as feed retention, replication across clusters, and archival are burdensome tasks for individual pipeline owners and are better offered as a service to all customers.
- It should be easy to onboard new workflows/pipelines.
- Smoother integration with the metastore/catalog.
- Provide notifications to end customers based on the availability of feed groups (logical groups of related feeds that are likely to be used together).
You can find the documentation on the Apache Falcon website.
Before opening a pull request, please go through the Contributing to Apache Falcon wiki. It lists the steps required before creating a PR and the conventions that we follow. If you are looking for issues to pick up, you can look at starter tasks or open tasks.
You can download the release notes for previous releases from the following links.