commit:    470f34313b0bf8210f95f6969712b12e55a9e169
author:    sandeep <sandysmdl@gmail.com>  Wed Aug 24 13:08:04 2016 +0530
committer: Pallavi Rao <pallavi.rao@inmobi.com>  Wed Aug 24 13:08:04 2016 +0530
tree:      a501bc4cd24179c5859ef253a3585947ac90dd19
parent:    fc27ebb84b692a7d6e961b1a1e00821ad2a3df51
FALCON-2113 Falcon retry happens in some cases in spite of a manual kill from the user. Move the instance to a different state (ignore) and check for that state before retrying, to stop it from retrying.

Author: sandeep <sandysmdl@gmail.com>
Reviewers: @pallavi-rao, @PraveenAdlakha

Closes #265 from sandeepSamudrala/FALCON-2113 and squashes the following commits:

2a00bef [sandeep] fixing failures
285c796 [sandeep] Merge branch 'master' of https://github.com/apache/falcon into FALCON-2113
4f585a1 [sandeep] Incorporated review comments. Removed parent Id from passing into method to check for manual kill
2d0d9a3 [sandeep] FALCON-2113. Falcon retry happens in few cases inspite of a manual kill from the user. move it to a different state(ignore) and check for that state before retrying to stop it from retrying.
1bb8d3c [sandeep] Merge branch 'master' of https://github.com/apache/falcon
c065566 [sandeep] reverting last line changes made
1a4dcd2 [sandeep] rebased and resolved the conflicts from master
271318b [sandeep] FALCON-2097. Adding UT to the new method for getting next instance time with Delay.
a94d4fe [sandeep] rebasing from master
9e68a57 [sandeep] FALCON-298. Feed update with replication delay creates holes
Apache Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management on Hadoop clusters.
Dependencies across various data processing pipelines are not easy to establish. Gaps here typically lead either to incorrect or partial processing, or to expensive reprocessing. Defining the same feed repeatedly in multiple places can lead to inconsistencies and issues.
Input data may not always arrive on time, so processing must be able to kick off without waiting for all data to arrive, with late data accommodated separately.
Feed management services such as feed retention, replication across clusters, and archival are tasks that are burdensome on individual pipeline owners and are better offered as a service for all customers.
It should be easy to onboard new workflows/pipelines.
Smoother integration with the metastore/catalog.
Provide notifications to end customers based on the availability of feed groups (logical groups of related feeds that are likely to be used together).
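Falcon expresses these concerns (retention, cross-cluster replication, late-data handling, and feed groups) declaratively in a feed entity definition. The sketch below is illustrative only: the feed name, group name, cluster names, paths, and dates are hypothetical, not taken from any real deployment.

```xml
<!-- Illustrative Falcon feed entity; all names, paths, and dates are hypothetical. -->
<feed name="rawEmailFeed" description="Hourly raw email data" xmlns="uri:falcon:feed:0.1">
  <!-- Logical feed group; notifications can key off the availability of the group. -->
  <groups>churnAnalysisDataPipeline</groups>
  <frequency>hours(1)</frequency>
  <timezone>UTC</timezone>

  <!-- Accommodate data arriving up to 6 hours late. -->
  <late-arrival cut-off="hours(6)"/>

  <clusters>
    <!-- Source cluster with a 90-day retention policy. -->
    <cluster name="primaryCluster" type="source">
      <validity start="2016-01-01T00:00Z" end="2099-12-31T00:00Z"/>
      <retention limit="days(90)" action="delete"/>
    </cluster>
    <!-- Target cluster: Falcon replicates the feed here as a managed service. -->
    <cluster name="backupCluster" type="target">
      <validity start="2016-01-01T00:00Z" end="2099-12-31T00:00Z"/>
      <retention limit="days(365)" action="delete"/>
    </cluster>
  </clusters>

  <locations>
    <location type="data" path="/data/input/emails/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
  </locations>

  <ACL owner="falcon" group="users" permission="0755"/>
  <schema location="/none" provider="none"/>
</feed>
```

With a definition like this registered once, retention, replication, and late-data reprocessing are handled by Falcon rather than re-implemented by each pipeline owner.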
You can find the documentation on the Apache Falcon website.
Before opening a pull request, please go through the Contributing to Apache Falcon wiki. It lists the steps required before creating a PR and the conventions we follow. If you are looking for issues to pick up, you can look at starter tasks or open tasks.
You can download the release notes for previous releases from the following links.