commit 20eb735da21a29dd8a21744fdb8097479637a962
Author:    sonia-garudi <sgarudi@us.ibm.com>  Thu Aug 09 15:56:33 2018 +0530
Committer: pallavi-rao <pallavi.rao@inmobi.com>  Thu Aug 09 15:56:33 2018 +0530
Tree:      231e91f530e53ba4b175e8b552381840b79e1438
Parent:    39f64a08e4b1ef0fbd33f4eb1ae2d69ad408d12b
FALCON-2338 EntityGraphTest.initConfigStore fails intermittently

The error occurs only when the MetadataMappingServiceTest suite runs before the test case mentioned above. That suite creates a file in the target folder, 'target/store/PROCESS/sample-process.xml', which is not deleted after the tests. The change proposed in this pull request deletes these files, and the tests in the EntityGraphTest suite now pass. pallavi-rao could you please check this? Thanks!

Author: sonia-garudi <sgarudi@us.ibm.com>
Reviewers: @pallavi-rao

Closes #411 from sonia-garudi/FALCON-2338 and squashes the following commits:

54b15f90a [sonia-garudi] Removed unused and duplicate import statements
b35b00dcc [sonia-garudi] Edit EntityGraphTest.java to delete the config store before and after suite
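The fix described above boils down to recursively deleting the on-disk config store so files left behind by one test suite cannot leak into the next. A minimal sketch of that kind of cleanup (the class name and paths here are illustrative, not the actual EntityGraphTest code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

/**
 * Sketch of a before/after-suite cleanup: recursively delete the
 * config store directory so leftover files (e.g. a stale
 * PROCESS/sample-process.xml) cannot affect a later test suite.
 */
public class ConfigStoreCleanup {

    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return; // nothing to clean up
        }
        try (Stream<Path> paths = Files.walk(root)) {
            // Sort in reverse order so children are deleted before parents.
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a leftover config-store file from a previous suite.
        Path store = Files.createTempDirectory("store");
        Path leftover = store.resolve("PROCESS").resolve("sample-process.xml");
        Files.createDirectories(leftover.getParent());
        Files.write(leftover, "<process/>".getBytes());

        deleteRecursively(store);

        if (Files.exists(store)) {
            throw new AssertionError("config store should have been deleted");
        }
        System.out.println("cleanup ok");
    }
}
```

In a TestNG-based suite such as Falcon's, this deletion would typically run from setup/teardown hooks so each suite starts from an empty store.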
Apache Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management on Hadoop clusters.
Dependencies across various data processing pipelines are not easy to establish. Gaps here typically lead to either incorrect or partial processing, or to expensive reprocessing. Repeatedly defining a single feed multiple times can lead to inconsistencies and issues.
Input data may not always arrive on time, so it must be possible to kick off processing without waiting for all data to arrive and to accommodate late-arriving data separately.
Feed management services such as feed retention, replication across clusters, and archival are burdensome tasks for individual pipeline owners and are better offered as a shared service for all customers.
It should be easy to onboard new workflows/pipelines.
Smoother integration with the metastore/catalog.
Provide notifications to end customers based on the availability of feed groups (logical groups of related feeds that are likely to be used together).
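Services like retention and cross-cluster replication are declared once on the feed entity rather than re-implemented in every pipeline. An illustrative sketch of such a feed definition (names, dates, and paths here are hypothetical; see the Falcon entity specification for the full schema):

```xml
<!-- Hypothetical feed entity: retention and replication declared declaratively. -->
<feed name="sample-clicks-feed" description="hourly click data" xmlns="uri:falcon:feed:0.1">
    <frequency>hours(1)</frequency>
    <clusters>
        <cluster name="primary-cluster" type="source">
            <validity start="2018-01-01T00:00Z" end="2099-01-01T00:00Z"/>
            <!-- Retention handled by Falcon, not the pipeline owner. -->
            <retention limit="days(90)" action="delete"/>
        </cluster>
        <!-- A target cluster entry drives replication of the feed. -->
        <cluster name="backup-cluster" type="target">
            <validity start="2018-01-01T00:00Z" end="2099-01-01T00:00Z"/>
            <retention limit="months(12)" action="delete"/>
        </cluster>
    </clusters>
    <locations>
        <location type="data" path="/data/clicks/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
    </locations>
    <ACL owner="falcon" group="users" permission="0755"/>
    <schema location="/none" provider="none"/>
</feed>
```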
You can find the documentation on the Apache Falcon website.
Before opening a pull request, please go through the Contributing to Apache Falcon wiki. It lists the steps required before creating a PR and the conventions that we follow. If you are looking for issues to pick up, you can look at the starter tasks or open tasks.
You can download the release notes for previous releases from the following links.