FALCON-2338 EntityGraphTest.initConfigStore fails intermittently

The error occurs only when the MetadataMappingServiceTest suite runs before the test mentioned above. That suite creates a file in the target folder ('target/store/PROCESS/sample-process.xml') which is not deleted after the test.

The change proposed in this pull request deletes these files; with the change applied, the tests in the EntityGraphTest suite pass.
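The fix amounts to clearing the leftover config store directory before and after the suite runs. A minimal sketch of that kind of recursive cleanup in plain Java is below; the class name, paths, and helper are illustrative assumptions, not the actual Falcon patch.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative sketch only; not the code from the actual FALCON-2338 patch.
public class ConfigStoreCleanup {

    /** Recursively deletes a directory tree if it exists (children before parents). */
    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(root)) {
            // Reverse order so files and subdirectories are deleted before their parents.
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate the leftover store: <tmp>/store/PROCESS/sample-process.xml
        Path store = Files.createTempDirectory("store");
        Path processDir = Files.createDirectories(store.resolve("PROCESS"));
        Files.writeString(processDir.resolve("sample-process.xml"), "<process/>");

        deleteRecursively(store);
        System.out.println(Files.exists(store)); // prints "false"
    }
}
```

In a TestNG suite such a helper would typically be invoked from methods annotated with @BeforeClass and @AfterClass (or their suite-level equivalents), so each suite starts and ends with an empty store regardless of what ran before it.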

@pallavi-rao, could you please check this? Thanks!

Author: sonia-garudi <sgarudi@us.ibm.com>

Reviewers: @pallavi-rao

Closes #411 from sonia-garudi/FALCON-2338 and squashes the following commits:

54b15f90a [sonia-garudi] Removed unused and duplicate import statements
b35b00dcc [sonia-garudi] Edit EntityGraphTest.java to delete the config store before and after suite
1 file changed

Apache Falcon

Falcon is a feed processing and feed management system aimed at making it easier for end consumers to onboard their feed processing and feed management onto Hadoop clusters.

Why Apache Falcon?

  • Dependencies across various data processing pipelines are not easy to establish. Gaps here typically lead to either incorrect/partial processing or expensive reprocessing. Repeated definition of a single feed in multiple places can lead to inconsistencies and issues.

  • Input data may not always arrive on time, and it is necessary to kick off processing without waiting for all data to arrive, while accommodating late data separately.

  • Feed management services such as feed retention, replication across clusters, and archival are tasks that are burdensome on individual pipeline owners and better offered as a service for all customers.

  • It should be easy to onboard new workflows/pipelines.

  • Smoother integration with the metastore/catalog.

  • Provide notifications to end customers based on the availability of feed groups (logical groups of related feeds that are likely to be used together).

Online Documentation

You can find the documentation on the Apache Falcon website.

How to Contribute

Before opening a pull request, please go through the Contributing to Apache Falcon wiki. It lists the steps that are required before creating a PR and the conventions that we follow. If you are looking for issues to pick up, you can look at starter tasks or open tasks.

Release Notes

You can download release notes of previous releases from the following links.