IMPALA-6917: Limit impalad mem-limit to 12GB.

This changes the mem-limit-choosing code in start-impala-cluster to cap
each impalad at 12GB. On 68GB machines, this reduces the per-impalad
mem-limit from ~15.8GB to 12GB. On machines with less than ~51.4GB of
RAM, this has no effect.
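
For reference, a minimal standalone sketch of the new arithmetic (this is
not the patched start-impala-cluster.py code itself; the machine sizes
below are just illustrative):

  GB = 1024 * 1024 * 1024
  CLUSTER_SIZE = 3   # default minicluster size
  CAP = 12 * GB      # new per-impalad upper bound

  def per_impalad_mem_limit(total_system_memory_bytes):
      # 70% of system memory, split evenly across the minicluster...
      uncapped = int(0.7 * total_system_memory_bytes / CLUSTER_SIZE)
      # ...but never more than 12GB per impalad.
      return min(CAP, uncapped)

  # On a 68GB machine the old value (~15.8GB) is now capped at 12GB.
  print(per_impalad_mem_limit(68 * GB) / float(GB))  # -> 12.0
  # Below ~51.4GB of RAM the cap never applies, so nothing changes.
  print(per_impalad_mem_limit(48 * GB) / float(GB))  # -> 11.2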

I'm lowering this threshold because ASAN builds sometimes die at the
hands of the OOM killer on m2.4xlarge machines (68GB). My theory for why
it happens only sometimes is that memory usage during the parallel tests
varies widely depending on which tests and queries execute.

End-users don't see this code; this is only used by our minicluster
tests to test Impala.

I have run the ASAN build successfully with this change, though this
particular OOM seems to come and go.

Change-Id: I8024414c5c23bb42cce912d8f34cd0b787e0e39a
Reviewed-on: http://gerrit.cloudera.org:8080/10051
Reviewed-by: Philip Zeyliger <philip@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
diff --git a/bin/start-impala-cluster.py b/bin/start-impala-cluster.py
index bfe71d5..e870852 100755
--- a/bin/start-impala-cluster.py
+++ b/bin/start-impala-cluster.py
@@ -216,15 +216,21 @@
     # No impalad instances should be started.
     return
 
+  # Set mem_limit of each impalad to the smaller of 12GB or
+  # 1/cluster_size (typically 1/3) of 70% of system memory.
+  #
   # The default memory limit for an impalad is 80% of the total system memory. On a
   # mini-cluster with 3 impalads that means 240%. Since having an impalad be OOM killed
   # is very annoying, the mem limit will be reduced. This can be overridden using the
   # --impalad_args flag. virtual_memory().total returns the total physical memory.
   # The exact ratio to use is somewhat arbitrary. Peak memory usage during
   # tests depends on the concurrency of parallel tests as well as their ordering.
-  # At a ratio of 0.8, on 8-core, 68GB machines, ASAN builds can trigger the OOM
-  # killer, so this ratio is currently set to 0.7.
+  # On the other hand, to avoid using too much memory, we limit the
+  # memory choice here to max out at 12GB. This should be sufficient for tests.
+  #
+  # Beware that ASAN builds use more memory than regular builds.
   mem_limit = int(0.7 * psutil.virtual_memory().total / cluster_size)
+  mem_limit = min(12 * 1024 * 1024 * 1024, mem_limit)
 
   delay_list = []
   if options.catalog_init_delays != "":