KUDU-3568 Fix compaction budgeting test by setting memory hard limit

TestRowSetCompactionSkipWithBudgetingConstraints can fail if the node
running the test has a large amount of memory. The test generates
deltas a few MBs in size and multiplies that size by a preset factor
so that the result (i.e. the memory estimated to be required to
complete the rowset compaction) is very large, on the order of 200 GB
per rowset.

Even though nodes running the test generally don't have that much
physical memory, it is still possible for the test to land on a
high-memory node, where it might fail.

The patch fixes that problem by deterministically ensuring that the
compaction memory requirement is always higher than the memory hard
limit. It does so as follows:
1. Move the budgeting compaction tests out into a separate binary.
   This gives the flexibility to set the memory hard limit as per the
   tests' needs. Note that once a memory hard limit is set, it remains
   the same for all tests executed during the binary's lifecycle.
2. Set the memory hard limit to 1 GB, which is enough to handle the
   compaction requirements for
   TestRowSetCompactionProceedWithNoBudgetingConstraints. For
   TestRowSetCompactionSkipWithBudgetingConstraints, it is not enough
   because the delta memory factor is set high enough for the estimate
   to exceed 1 GB. Both tests are now expected to succeed
   deterministically.

Change-Id: I85d104e1d066507ce8e72a00cc5165cc4b85e48d
Reviewed-on: http://gerrit.cloudera.org:8080/21416
Tested-by: Alexey Serbin <alexey@apache.org>
Reviewed-by: Alexey Serbin <alexey@apache.org>
3 files changed