KUDU-3108: fix invalid memory accesses in merge iterator

CONFLICT: updated diff_scan-test to use MVCC's
WaitForApplyingTransactionsToCommit(), which was renamed in later
versions.

The 'hotmaxes_' heap and 'hot_' heap in the MergeIterator are meant to
represent the same set of iterator states currently deemed "hot" in the
optimal merge algorithm[1]. They used different ordering constraints to
allow constant-time access to the smallest last row and the smallest
next row across all hot states, respectively. However, when iterator
states were no longer deemed hot, we called pop() on both heaps,
incorrectly expecting that the two calls would remove the same
iterator state from both heaps. Because the heaps order their elements
differently, each pop() removes whichever state that heap happens to
rank first, and those need not be the same state.
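
As a standalone sketch of why that expectation fails (illustrative
types and names, not Kudu's actual MergeIterState), two min-heaps keyed
on different fields each pop whichever element they happen to rank
first, which need not be the same one:

    #include <cassert>
    #include <queue>
    #include <vector>

    // Illustrative stand-in for an iterator state: two sort keys.
    struct State {
      int next_row;  // key for the heap of "next" rows ('hot_')
      int last_row;  // key for the heap of "last" rows ('hotmaxes_')
    };

    int main() {
      // Two min-heaps over the same states, keyed on different fields.
      auto by_next = [](const State* a, const State* b) {
        return a->next_row > b->next_row;
      };
      auto by_last = [](const State* a, const State* b) {
        return a->last_row > b->last_row;
      };
      std::priority_queue<State*, std::vector<State*>,
                          decltype(by_next)> hot(by_next);
      std::priority_queue<State*, std::vector<State*>,
                          decltype(by_last)> hotmaxes(by_last);

      State s1{1, 9};
      State s2{2, 3};
      for (State* s : {&s1, &s2}) {
        hot.push(s);
        hotmaxes.push(s);
      }

      // 'hot' ranks s1 first (smallest next row); 'hotmaxes' ranks s2
      // first (smallest last row). Popping both removes different
      // states, so if s1 is then destroyed, 'hotmaxes' still holds a
      // pointer to it.
      assert(hot.top() == &s1);
      assert(hotmaxes.top() == &s2);
      hot.pop();
      hotmaxes.pop();
      return 0;
    }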

If this pop() was followed by the destruction of the sub-iterator
(e.g. because it was fully exhausted), a stale iterator state pointing
at destructed state would be left behind in the heaps:

F1030 21:16:58.411253 40800 schema.h:706] Check failed: KeyEquals(*lhs.schema()) && KeyEquals(*rhs.schema())
*** Check failure stack trace: ***
*** Aborted at 1604117818 (unix time) try "date -d @1604117818" if you are using GNU date ***
PC: @     0x7f701fcf11d7 __GI_raise
*** SIGABRT (@0x111700009efd) received by PID 40701 (TID 0x7f6ff0f47700) from PID 40701; stack trace: ***
    @     0x7f7026a70370 (unknown)
    @     0x7f701fcf11d7 __GI_raise
    @     0x7f701fcf28c8 __GI_abort
    @     0x7f70224377b9 google::logging_fail()
    @     0x7f7022438f8d google::LogMessage::Fail()
    @     0x7f702243aee3 google::LogMessage::SendToLog()
    @     0x7f7022438ae9 google::LogMessage::Flush()
    @     0x7f702243b86f google::LogMessageFatal::~LogMessageFatal()
    @     0x7f702cc99fbc kudu::Schema::Compare<>()
    @     0x7f7026167cfd kudu::MergeIterator::RefillHotHeap()
    @     0x7f7026167357 kudu::MergeIterator::AdvanceAndReheap()
    @     0x7f7026169617 kudu::MergeIterator::MaterializeOneRow()
    @     0x7f70261688e9 kudu::MergeIterator::NextBlock()
    @     0x7f702cbddd9b kudu::tablet::Tablet::Iterator::NextBlock()
    @     0x7f70317bcab3 kudu::tserver::TabletServiceImpl::HandleContinueScanRequest()
    @     0x7f70317bb857 kudu::tserver::TabletServiceImpl::HandleNewScanRequest()
    @     0x7f70317b464e kudu::tserver::TabletServiceImpl::Scan()
    @     0x7f702ddfd762 _ZZN4kudu7tserver21TabletServerServiceIfC1ERK13scoped_refptrINS_12MetricEntityEERKS2_INS_3rpc13ResultTrackerEEENKUlPKN6google8protobuf7MessageEPSE_PNS7_10RpcContextEE4_clESG_SH_SJ_
    @     0x7f702de0064d _ZNSt17_Function_handlerIFvPKN6google8protobuf7MessageEPS2_PN4kudu3rpc10RpcContextEEZNS6_7tserver21TabletServerServiceIfC1ERK13scoped_refptrINS6_12MetricEntityEERKSD_INS7_13ResultTrackerEEEUlS4_S5_S9_E4_E9_M_invokeERKSt9_Any_dataS4_S5_S9_
    @     0x7f702b4ddcc2 std::function<>::operator()()
    @     0x7f702b4dd6ed kudu::rpc::GeneratedServiceIf::Handle()
    @     0x7f702b4dfff8 kudu::rpc::ServicePool::RunThread()
    @     0x7f702b4de8c5 _ZZN4kudu3rpc11ServicePool4InitEiENKUlvE_clEv
    @     0x7f702b4e0337 _ZNSt17_Function_handlerIFvvEZN4kudu3rpc11ServicePool4InitEiEUlvE_E9_M_invokeERKSt9_Any_data
    @     0x7f7033524b9c std::function<>::operator()()
    @     0x7f70248227e0 kudu::Thread::SuperviseThread()
    @     0x7f7026a68dc5 start_thread
    @     0x7f701fdb376d __clone
Aborted

This patch removes the 'hotmaxes_' min-heap in favor of a two-heap
variant of the merge algorithm, described to me by Adar, that uses
the last row of the top iterator state in the hot heap as the upper
bound of the merge window. This doesn't keep the merge window as
small, but it is still correct and avoids this bug.
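
A rough sketch of the refill step under this scheme (illustrative
names and integer row keys, not the actual MergeIterator code): states
move from the cold heap to the hot heap while their next row falls at
or below the last row of the hot heap's current top, and only the hot
heap is ever popped, so there is no second structure to fall out of
sync:

    #include <queue>
    #include <vector>

    // Stand-in for a sub-iterator's state; rows are ints for brevity.
    struct IterState {
      int next;  // next row this sub-iterator would yield
      int last;  // last row this sub-iterator holds
    };

    struct ByNextRow {
      bool operator()(const IterState* a, const IterState* b) const {
        return a->next > b->next;  // min-heap on the next row
      }
    };

    using Heap =
        std::priority_queue<IterState*, std::vector<IterState*>,
                            ByNextRow>;

    void RefillHotHeap(Heap* hot, Heap* cold) {
      // Pull cold states in while they overlap the hot top's last row.
      while (!cold->empty() &&
             (hot->empty() || cold->top()->next <= hot->top()->last)) {
        hot->push(cold->top());
        cold->pop();
      }
    }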

I experimented with some other approaches, described below. I ran the
same test used in 1567dec086 (generic_iterators-test TestMerge and
TestMergeNonOverlapping with 10000 rows per list), averaged over five
runs:
 a: An iteration of this patch that used std::set for hotmaxes, with a
    call to find() before calling Advance() and a position-based
    erase() afterward. The position-based erase() did not rely on the
    comparator at all, which otherwise might have dereferenced
    destructed state following the call to Advance().
 b: An iteration of this patch that used std::set for hotmaxes, with a
    value-based erase() before calling Advance() (see the sketch after
    the results table below). Erasing before Advance() ensured that
    Advance() calls never interfered with our ability to compare and
    erase from hotmaxes, at the potential cost of an extra insert if
    the iterator was still hot.
 c: The original version that uses heap::pop() after calling Advance().
 d: This patch, which doesn't use hotmaxes and instead uses the last
    row of the top iterator state in the hot heap to define the upper
    bound of the merge window.

Parameters                  | a        | b        | c        | d
----------------------------+----------+----------+----------+---------
overlapping, 10 lists       | 0.059s   | 0.0744s  | 0.0472s  | 0.0478s
overlapping, 100 lists      | 0.6726s  | 0.8876s  | 0.4938s  | 0.491s
overlapping, 1000 lists     | 15.5588s | 18.87s   | 10.3554s | 10.157s
non-overlapping, 10 lists   | 0.011s   | 0.0114s  | 0.0106s  | 0.0092s
non-overlapping, 100 lists  | 0.0786s  | 0.0794s  | 0.083s   | 0.0682s
non-overlapping, 1000 lists | 0.7824s  | 0.7346s  | 0.7174s  | 0.6884s
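
For reference, approach (b) had roughly this shape (illustrative types
and names, not the Kudu code; whether to re-insert really depends on
whether the state is still hot, reduced here to an exhaustion check):

    #include <functional>
    #include <set>

    struct IterState {
      int next;  // next row this sub-iterator would yield
      int last;  // last row this sub-iterator holds
      // Toy stand-in for advancing the sub-iterator; false if exhausted.
      bool Advance() { return ++next <= last; }
    };

    struct ByLastRow {
      bool operator()(const IterState* a, const IterState* b) const {
        // Order by last row, breaking ties by address so distinct
        // states with equal last rows can coexist in the set.
        if (a->last != b->last) return a->last < b->last;
        return std::less<const IterState*>()(a, b);
      }
    };

    void PopAndAdvance(std::set<IterState*, ByLastRow>* hotmaxes,
                       IterState* state) {
      // Erase by value *before* Advance() so the comparator never
      // touches state that Advance() may destroy; re-insert afterwards
      // if the state should remain in the set.
      hotmaxes->erase(state);
      if (state->Advance()) {
        hotmaxes->insert(state);
      }
    }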

I also ran an ordered scan with `kudu perf tablet_scan` on a 65GiB
tablet hosted on a single disk with 1667 rowsets and an average rowset
height of five, averaged over six runs each (I omitted testing version b
since it was the worst-performing of the above):

Results   | a        | c        | d
----------+----------+----------+----------
real time | 1070.69s | 1166.96s | 1048.57s
stdev     | 19.48s   | 25.14s   | 19.54s

I didn't profile these runs in depth, but the measurements suggest that
the maintenance of the hotmaxes set may add overhead that isn't always
recouped by an optimally-sized merge window. I left a TODO to
experiment further with different data structures for hotmaxes
(e.g. absl::btree seems like a good candidate).

To exercise these codepaths more rigorously, I bumped fuzz-itest's
default keyspace size to 5. Some bugs (this one included) can only be
triggered when there is a mix of overlapping and non-overlapping
rowsets, which is impossible to achieve with a keyspace size of 2.

[1] https://docs.google.com/document/d/1uP0ubjM6ulnKVCRrXtwT_dqrTWjF9tlFSRk0JN2e_O0/edit#

Change-Id: I8ec1cd3fd67ec4ea92a55b5b0ce555123748824d
Reviewed-on: http://gerrit.cloudera.org:8080/16777
Tested-by: Kudu Jenkins
Reviewed-by: Alexey Serbin <aserbin@cloudera.com>
(cherry picked from commit 6f807b136dfc072b019bdb4a5f1719603096898f)
Reviewed-on: http://gerrit.cloudera.org:8080/16797
Reviewed-by: Andrew Wong <awong@cloudera.com>
3 files changed