| Spark Change Log |
| ---------------- |
| |
| Release 1.5.1 |
| |
| [SPARK-10692] [STREAMING] Expose failureReasons in BatchInfo for streaming UI to clear failed batches |
| zsxwing <zsxwing@gmail.com>, Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-09-23 19:52:02 -0700 |
| Commit: 4c48593, github.com/apache/spark/pull/8892 |
| |
| Update branch-1.5 for 1.5.1 release. |
| Reynold Xin <rxin@databricks.com> |
| 2015-09-23 19:46:13 -0700 |
| Commit: 1000b5d, github.com/apache/spark/pull/8890 |
| |
| [SPARK-10474] [SQL] Aggregation fails to allocate memory for pointer array (round 2) |
| Andrew Or <andrew@databricks.com> |
| 2015-09-23 19:34:31 -0700 |
| Commit: 1f47e68, github.com/apache/spark/pull/8888 |
| |
| [SPARK-10731] [SQL] Delegate to Scala's DataFrame.take implementation in Python DataFrame. |
| Reynold Xin <rxin@databricks.com> |
| 2015-09-23 16:43:21 -0700 |
| Commit: 7564c24, github.com/apache/spark/pull/8876 |
| |
| [SPARK-10403] Allow UnsafeRowSerializer to work with tungsten-sort ShuffleManager |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-09-23 11:31:01 -0700 |
| Commit: 64cc62c, github.com/apache/spark/pull/8873 |
| |
| [SPARK-9710] [TEST] Fix RPackageUtilsSuite when R is not available. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-10 10:10:40 -0700 |
| Commit: 6c6cadb, github.com/apache/spark/pull/8008 |
| |
| [SPARK-10769] [STREAMING] [TESTS] Fix o.a.s.streaming.CheckpointSuite.maintains rate controller |
| zsxwing <zsxwing@gmail.com> |
| 2015-09-23 01:29:30 -0700 |
| Commit: 4174b94, github.com/apache/spark/pull/8877 |
| |
| [SPARK-10224] [STREAMING] Fix the issue that blockIntervalTimer won't call updateCurrentBuffer when stopping |
| zsxwing <zsxwing@gmail.com> |
| 2015-09-23 01:28:02 -0700 |
| Commit: 6a616d0, github.com/apache/spark/pull/8417 |
| |
| [SPARK-10652] [SPARK-10742] [STREAMING] Set meaningful job descriptions for all streaming jobs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-09-22 22:44:09 -0700 |
| Commit: 8a23ef5, github.com/apache/spark/pull/8791 |
| |
| [SPARK-10663] Removed unnecessary invocation of DataFrame.toDF method. |
| Matt Hagen <anonz3000@gmail.com> |
| 2015-09-22 21:14:25 -0700 |
| Commit: 7f07cc6, github.com/apache/spark/pull/8875 |
| |
| [SPARK-10310] [SQL] Fixes script transformation field/line delimiters |
| Cheng Lian <lian@databricks.com> |
| 2015-09-22 19:41:57 -0700 |
| Commit: 73d0621, github.com/apache/spark/pull/8860 |
| |
| [SPARK-10640] History server fails to parse TaskCommitDenied |
| Andrew Or <andrew@databricks.com> |
| 2015-09-22 16:35:43 -0700 |
| Commit: 26187ab, github.com/apache/spark/pull/8828 |
| |
| Revert "[SPARK-10640] History server fails to parse TaskCommitDenied" |
| Andrew Or <andrew@databricks.com> |
| 2015-09-22 17:10:58 -0700 |
| Commit: 118ebd4 |
| |
| [SPARK-10640] History server fails to parse TaskCommitDenied |
| Andrew Or <andrew@databricks.com> |
| 2015-09-22 16:35:43 -0700 |
| Commit: 5ffd084, github.com/apache/spark/pull/8828 |
| |
| [SPARK-10714] [SPARK-8632] [SPARK-10685] [SQL] Refactor Python UDF handling |
| Reynold Xin <rxin@databricks.com> |
| 2015-09-22 14:11:46 -0700 |
| Commit: 3339916, github.com/apache/spark/pull/8835 |
| |
| [SPARK-10737] [SQL] When using UnsafeRows, SortMergeJoin may return wrong results |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-22 13:31:35 -0700 |
| Commit: 6b1e5c2, github.com/apache/spark/pull/8854 |
| |
| [SPARK-10672] [SQL] Do not fail when we cannot save the metadata of a data source table in a hive compatible way |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-22 13:29:39 -0700 |
| Commit: d83dcc9, github.com/apache/spark/pull/8824 |
| |
| [SPARK-10740] [SQL] handle nondeterministic expressions correctly for set operations |
| Wenchen Fan <cloud0fan@163.com> |
| 2015-09-22 12:14:15 -0700 |
| Commit: 54334d3, github.com/apache/spark/pull/8858 |
| |
| [SPARK-10593] [SQL] fix resolve output of Generate |
| Davies Liu <davies@databricks.com> |
| 2015-09-22 11:07:01 -0700 |
| Commit: c3112a9, github.com/apache/spark/pull/8755 |
| |
| [SPARK-10695] [DOCUMENTATION] [MESOS] Fixing incorrect value informati… |
| Akash Mishra <akash.mishra20@gmail.com> |
| 2015-09-22 00:14:27 -0700 |
| Commit: 646155e, github.com/apache/spark/pull/8816 |
| |
| [SQL] [MINOR] map -> foreach. |
| Reynold Xin <rxin@databricks.com> |
| 2015-09-22 00:09:29 -0700 |
| Commit: a2b0fee, github.com/apache/spark/pull/8862 |
| |
| [SPARK-8567] [SQL] Increase the timeout of o.a.s.sql.hive.HiveSparkSubmitSuite to 5 minutes. |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-22 00:07:30 -0700 |
| Commit: 03215e3, github.com/apache/spark/pull/8850 |
| |
| [SPARK-10649] [STREAMING] Prevent inheriting job group and irrelevant job description in streaming jobs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-09-22 00:06:18 -0700 |
| Commit: d0e6e53, github.com/apache/spark/pull/8856 |
| |
| [SPARK-10716] [BUILD] spark-1.5.0-bin-hadoop2.6.tgz file doesn't uncompress on OS X due to hidden file |
| Sean Owen <sowen@cloudera.com> |
| 2015-09-21 23:29:59 -0700 |
| Commit: f83b6e6, github.com/apache/spark/pull/8846 |
| |
| [SPARK-10711] [SPARKR] Do not assume spark.submit.deployMode is always set |
| Hossein <hossein@databricks.com> |
| 2015-09-21 21:09:59 -0700 |
| Commit: bb8e481, github.com/apache/spark/pull/8832 |
| |
| [SPARK-10495] [SQL] [BRANCH-1.5] Fix build. |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-21 21:05:51 -0700 |
| Commit: 86f9a35, github.com/apache/spark/pull/8861 |
| |
| [DOC] [PYSPARK] [MLLIB] Added newlines to docstrings to fix parameter formatting (1.5 backport) |
| noelsmith <mail@noelsmith.com> |
| 2015-09-21 18:27:57 -0700 |
| Commit: ed74d30, github.com/apache/spark/pull/8855 |
| |
| [SPARK-10495] [SQL] Read date values in JSON data stored by Spark 1.5.0. |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-21 18:06:45 -0700 |
| Commit: 7ab4d17, github.com/apache/spark/pull/8806 |
| |
| [SPARK-10676] [DOCS] Add documentation for SASL encryption options. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-09-21 13:15:44 -0700 |
| Commit: 6152099, github.com/apache/spark/pull/8803 |
| |
| [SPARK-10155] [SQL] Change SqlParser to object to avoid memory leak |
| zsxwing <zsxwing@gmail.com> |
| 2015-09-19 18:22:43 -0700 |
| Commit: 2591419, github.com/apache/spark/pull/8357 |
| |
| Fixed links to the API |
| Alexis Seigneurin <alexis.seigneurin@gmail.com> |
| 2015-09-19 12:01:22 +0100 |
| Commit: 9b74fec, github.com/apache/spark/pull/8838 |
| |
| [SPARK-10584] [SQL] [DOC] Documentation about the compatible Hive version is wrong. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-09-19 01:59:36 -0700 |
| Commit: aaae67d, github.com/apache/spark/pull/8776 |
| |
| [SPARK-10474] [SQL] Aggregation fails to allocate memory for pointer array |
| Andrew Or <andrew@databricks.com> |
| 2015-09-18 23:58:25 -0700 |
| Commit: 49355d0, github.com/apache/spark/pull/8827 |
| |
| [SPARK-10623] [SQL] Fixes ORC predicate push-down |
| Cheng Lian <lian@databricks.com> |
| 2015-09-18 18:42:20 -0700 |
| Commit: b3f1e65, github.com/apache/spark/pull/8799 |
| |
| [SPARK-10611] Clone Configuration for each task for NewHadoopRDD |
| Mingyu Kim <mkim@palantir.com> |
| 2015-09-18 15:40:58 -0700 |
| Commit: a6c3153, github.com/apache/spark/pull/8763 |
| |
| [SPARK-10449] [SQL] Don't merge decimal types with incompatible precision or scales |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-09-18 13:47:14 -0700 |
| Commit: 4051fff, github.com/apache/spark/pull/8634 |
| |
| [SPARK-10539] [SQL] Project should not be pushed down through Intersect or Except #8742 |
| Yijie Shen <henry.yijieshen@gmail.com>, Yin Huai <yhuai@databricks.com> |
| 2015-09-18 13:20:13 -0700 |
| Commit: 3df52cc, github.com/apache/spark/pull/8823 |
| |
| [SPARK-10540] Fixes flaky all-data-type test |
| Cheng Lian <lian@databricks.com> |
| 2015-09-18 12:19:08 -0700 |
| Commit: e1e781f, github.com/apache/spark/pull/8768 |
| |
| [SPARK-10684] [SQL] StructType.interpretedOrdering need not to be serialized |
| navis.ryu <navis@apache.org> |
| 2015-09-18 00:43:02 -0700 |
| Commit: 2c6a51e, github.com/apache/spark/pull/8808 |
| |
| docs/running-on-mesos.md: state default values in default column |
| Felix Bechstein <felix.bechstein@otto.de> |
| 2015-09-17 22:42:46 -0700 |
| Commit: f97db94, github.com/apache/spark/pull/8810 |
| |
| [SPARK-9522] [SQL] SparkSubmit process can not exit if kill application when HiveThriftServer was starting |
| linweizhong <linweizhong@huawei.com> |
| 2015-09-17 22:25:24 -0700 |
| Commit: dc5ae03, github.com/apache/spark/pull/7853 |
| |
| [SPARK-10657] Remove SCP-based Jenkins log archiving |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-09-17 11:40:24 -0700 |
| Commit: 153a23a, github.com/apache/spark/pull/8793 |
| |
| [SPARK-10639] [SQL] Need to convert UDAF's result from scala to sql type |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-17 11:14:52 -0700 |
| Commit: 464d6e7, github.com/apache/spark/pull/8788 |
| |
| [SPARK-10650] Clean before building docs |
| Michael Armbrust <michael@databricks.com> |
| 2015-09-17 11:05:30 -0700 |
| Commit: fd58ed4, github.com/apache/spark/pull/8787 |
| |
| [SPARK-10172] [CORE] disable sort in HistoryServer webUI |
| Josiah Samuel <josiah_sams@in.ibm.com> |
| 2015-09-17 10:18:21 -0700 |
| Commit: 88176d1, github.com/apache/spark/pull/8506 |
| |
| [SPARK-10642] [PYSPARK] Fix crash when calling rdd.lookup() on tuple keys |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-09-17 10:02:15 -0700 |
| Commit: 9f8fb33, github.com/apache/spark/pull/8796 |
| |
| [SPARK-10660] Doc describe error in the "Running Spark on YARN" page |
| yangping.wu <wyphao.2007@163.com> |
| 2015-09-17 09:52:40 -0700 |
| Commit: eae1566, github.com/apache/spark/pull/8797 |
| |
| [SPARK-10511] [BUILD] Reset git repository before packaging source distro |
| Luciano Resende <lresende@apache.org> |
| 2015-09-16 10:47:30 +0100 |
| Commit: 4c4a9ba, github.com/apache/spark/pull/8774 |
| |
| [SPARK-10381] Fix mixup of taskAttemptNumber & attemptId in OutputCommitCoordinator |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-09-15 17:11:21 -0700 |
| Commit: 2bbcbc6, github.com/apache/spark/pull/8544 |
| |
| [SPARK-10548] [SPARK-10563] [SQL] Fix concurrent SQL executions / branch-1.5 |
| Andrew Or <andrew@databricks.com> |
| 2015-09-15 16:46:34 -0700 |
| Commit: 997be78, github.com/apache/spark/pull/8721 |
| |
| Small fixes to docs |
| Jacek Laskowski <jacek.laskowski@deepsense.io> |
| 2015-09-14 23:40:29 -0700 |
| Commit: 7286c2b, github.com/apache/spark/pull/8759 |
| |
| [SPARK-10542] [PYSPARK] fix serialize namedtuple |
| Davies Liu <davies@databricks.com> |
| 2015-09-14 19:46:34 -0700 |
| Commit: d5c0361, github.com/apache/spark/pull/8707 |
| |
| [SPARK-10564] ThreadingSuite: assertion failures in threads don't fail the test (round 2) |
| Andrew Or <andrew@databricks.com> |
| 2015-09-14 15:09:43 -0700 |
| Commit: 5db51f9, github.com/apache/spark/pull/8727 |
| |
| [SPARK-10543] [CORE] Peak Execution Memory Quantile should be Per-task Basis |
| Forest Fang <forest.fang@outlook.com> |
| 2015-09-14 15:07:13 -0700 |
| Commit: eb0cb25, github.com/apache/spark/pull/8726 |
| |
| [SPARK-10549] scala 2.11 spark on yarn with security - Repl doesn't work |
| Tom Graves <tgraves@yahoo-inc.com>, Thomas Graves <tgraves@staydecay.corp.gq1.yahoo.com> |
| 2015-09-14 15:05:19 -0700 |
| Commit: 0e1c9d9, github.com/apache/spark/pull/8719 |
| |
| [SPARK-10522] [SQL] Nanoseconds of Timestamp in Parquet should be positive |
| Davies Liu <davies@databricks.com> |
| 2015-09-14 14:10:54 -0700 |
| Commit: a0d564a, github.com/apache/spark/pull/8674 |
| |
| [SPARK-10573] [ML] IndexToString output schema should be StringType |
| Nick Pritchard <nicholas.pritchard@falkonry.com> |
| 2015-09-14 13:27:45 -0700 |
| Commit: 5b7067c, github.com/apache/spark/pull/8751 |
| |
| [SPARK-10584] [DOC] [SQL] Documentation about spark.sql.hive.metastore.version is wrong. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-09-14 12:06:23 -0700 |
| Commit: 5f58704, github.com/apache/spark/pull/8739 |
| |
| [SPARK-6350] [MESOS] [BACKPORT] Fine-grained mode scheduler respects |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-09-13 11:00:08 +0100 |
| Commit: 4586f21, github.com/apache/spark/pull/8732 |
| |
| [SPARK-10554] [CORE] Fix NPE with ShutdownHook |
| Nithin Asokan <Nithin.Asokan@Cerner.com> |
| 2015-09-12 09:50:49 +0100 |
| Commit: f8909a6, github.com/apache/spark/pull/8720 |
| |
| [SPARK-10566] [CORE] SnappyCompressionCodec init exception handling masks important error information |
| Daniel Imfeld <daniel@danielimfeld.com> |
| 2015-09-12 09:19:59 +0100 |
| Commit: 5bf403c, github.com/apache/spark/pull/8725 |
| |
| [SPARK-10564] ThreadingSuite: assertion failures in threads don't fail the test |
| Andrew Or <andrew@databricks.com> |
| 2015-09-11 15:02:59 -0700 |
| Commit: fcb2438, github.com/apache/spark/pull/8723 |
| |
| [SPARK-9924] [WEB UI] Don't schedule checkForLogs while some of them … |
| Rohit Agarwal <rohita@qubole.com> |
| 2015-09-11 10:03:39 -0700 |
| Commit: 7f10bd6, github.com/apache/spark/pull/8701 |
| |
| [SPARK-10540] [SQL] Ignore HadoopFsRelationTest's "test all data types" if it is too flaky |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-11 09:42:53 -0700 |
| Commit: 295281f, github.com/apache/spark/pull/8705 |
| |
| [SPARK-10556] Remove explicit Scala version for sbt project build files |
| Ahir Reddy <ahirreddy@gmail.com> |
| 2015-09-11 13:06:14 +0100 |
| Commit: 4af9256, github.com/apache/spark/pull/8709 |
| |
| Revert "[SPARK-6350] [MESOS] Fine-grained mode scheduler respects mesosExecutor.cores" |
| Andrew Or <andrew@databricks.com> |
| 2015-09-10 14:35:52 -0700 |
| Commit: 89d351b |
| |
| [SPARK-6350] [MESOS] Fine-grained mode scheduler respects mesosExecutor.cores |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-09-10 12:00:21 -0700 |
| Commit: 8cf1619, github.com/apache/spark/pull/8653 |
| |
| [SPARK-10469] [DOC] Try and document the three options |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-09-10 11:49:53 -0700 |
| Commit: bff05aa, github.com/apache/spark/pull/8638 |
| |
| [SPARK-10466] [SQL] UnsafeRow SerDe exception with data spill |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-09-10 11:48:43 -0700 |
| Commit: bc70043, github.com/apache/spark/pull/8635 |
| |
| [MINOR] [MLLIB] [ML] [DOC] fixed typo: label for negative result should be 0.0 (original: 1.0) |
| Sean Paradiso <seanparadiso@gmail.com> |
| 2015-09-09 22:09:33 -0700 |
| Commit: 5e06d41, github.com/apache/spark/pull/8680 |
| |
| [SPARK-7736] [CORE] Fix a race introduced in PythonRunner. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-18 11:36:36 -0700 |
| Commit: d6cd356, github.com/apache/spark/pull/8258 |
| |
| [SPARK-7736] [CORE] [YARN] Make pyspark fail YARN app on failure. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-17 10:34:22 -0700 |
| Commit: a150625, github.com/apache/spark/pull/7751 |
| |
| [SPARK-10071] [STREAMING] Output a warning when writing QueueInputDStream and throw a better exception when reading QueueInputDStream |
| zsxwing <zsxwing@gmail.com> |
| 2015-09-08 20:39:15 -0700 |
| Commit: d4b00c5, github.com/apache/spark/pull/8624 |
| |
| [SPARK-10301] [SPARK-10428] [SQL] [BRANCH-1.5] Fixes schema merging for nested structs |
| Cheng Lian <lian@databricks.com> |
| 2015-09-08 20:30:24 -0700 |
| Commit: fca16c5, github.com/apache/spark/pull/8583 |
| |
| [SPARK-10492] [STREAMING] [DOCUMENTATION] Update Streaming documentation about rate limiting and backpressure |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-09-08 14:54:43 -0700 |
| Commit: 63c72b9, github.com/apache/spark/pull/8656 |
| |
| [SPARK-10441] [SQL] [BRANCH-1.5] Save data correctly to json. |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-08 14:20:35 -0700 |
| Commit: 7fd4674, github.com/apache/spark/pull/8655 |
| |
| [SPARK-10470] [ML] ml.IsotonicRegressionModel.copy should set parent |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-09-08 12:48:21 -0700 |
| Commit: 34d417e, github.com/apache/spark/pull/8637 |
| |
| Docs small fixes |
| Jacek Laskowski <jacek@japila.pl> |
| 2015-09-08 14:38:10 +0100 |
| Commit: 88a07d8, github.com/apache/spark/pull/8629 |
| |
| [DOC] Added R to the list of languages with "high-level API" support in the… |
| Stephen Hopper <shopper@shopper-osx.local> |
| 2015-09-08 14:36:34 +0100 |
| Commit: 37c5edf, github.com/apache/spark/pull/8646 |
| |
| [SPARK-10434] [SQL] Fixes Parquet schema of arrays that may contain null |
| Cheng Lian <lian@databricks.com> |
| 2015-09-05 17:50:12 +0800 |
| Commit: 640000b, github.com/apache/spark/pull/8586 |
| |
| [SPARK-10440] [STREAMING] [DOCS] Update python API stuff in the programming guides and python docs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-09-04 23:16:39 -1000 |
| Commit: ec750a7, github.com/apache/spark/pull/8595 |
| |
| [SPARK-10402] [DOCS] [ML] Add defaults to the scaladoc for params in ml/ |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-09-04 17:32:35 -0700 |
| Commit: cfc5f6f, github.com/apache/spark/pull/8591 |
| |
| [SPARK-10311] [STREAMING] Reload appId and attemptId when app starts with checkpoint file in cluster mode |
| xutingjun <xutingjun@huawei.com> |
| 2015-09-04 15:40:02 -0700 |
| Commit: dc39658, github.com/apache/spark/pull/8477 |
| |
| [SPARK-10454] [SPARK CORE] wait for empty event queue |
| robbins <robbins@uk.ibm.com> |
| 2015-09-04 15:23:29 -0700 |
| Commit: 09e08db, github.com/apache/spark/pull/8605 |
| |
| [SPARK-10431] [CORE] Fix intermittent test failure. Wait for event queue to be clear |
| robbins <robbins@uk.ibm.com> |
| 2015-09-03 13:47:22 -0700 |
| Commit: 4d63335, github.com/apache/spark/pull/8582 |
| |
| [SPARK-9869] [STREAMING] Wait for all event notifications before asserting results |
| robbins <robbins@uk.ibm.com> |
| 2015-09-03 13:48:35 -0700 |
| Commit: f945b64, github.com/apache/spark/pull/8589 |
| |
| [SPARK-10332] [CORE] Fix yarn spark executor validation |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-09-03 09:30:54 +0100 |
| Commit: f01a967, github.com/apache/spark/pull/8580 |
| |
| [SPARK-10411] [SQL] Move visualization above explain output and hide explain by default |
| zsxwing <zsxwing@gmail.com> |
| 2015-09-02 22:17:39 -0700 |
| Commit: 94404ee, github.com/apache/spark/pull/8570 |
| |
| [SPARK-10379] preserve first page in UnsafeShuffleExternalSorter |
| Davies Liu <davies@databricks.com> |
| 2015-09-02 22:15:54 -0700 |
| Commit: b846a9d, github.com/apache/spark/pull/8543 |
| |
| [SPARK-10422] [SQL] String column in InMemoryColumnarCache needs to override clone method |
| Yin Huai <yhuai@databricks.com> |
| 2015-09-02 21:00:13 -0700 |
| Commit: 2fce5d8, github.com/apache/spark/pull/8578 |
| |
| [SPARK-10392] [SQL] Pyspark - Wrong DateType support on JDBC connection |
| 0x0FFF <programmerag@gmail.com> |
| 2015-09-01 14:58:49 -0700 |
| Commit: 30efa96, github.com/apache/spark/pull/8556 |
| |
| [SPARK-10398] [DOCS] Migrate Spark download page to use new lua mirroring scripts |
| Sean Owen <sowen@cloudera.com> |
| 2015-09-01 20:06:01 +0100 |
| Commit: d19bccd, github.com/apache/spark/pull/8557 |
| |
| Preparing development version 1.5.1-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-31 15:57:49 -0700 |
| Commit: 2b270a1 |
| |
| |
| Release 1.5.0 |
| |
| [SPARK-10143] [SQL] Use parquet's block size (row group size) setting as the min split size if necessary. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-21 14:30:00 -0700 |
| Commit: 14c8c0c, github.com/apache/spark/pull/8346 |
| |
| [SPARK-9864] [DOC] [MLlib] [SQL] Replace since in scaladoc to Since annotation |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-21 14:19:24 -0700 |
| Commit: e7db876, github.com/apache/spark/pull/8352 |
| |
| [SPARK-10122] [PYSPARK] [STREAMING] Fix getOffsetRanges bug in PySpark-Streaming transform function |
| jerryshao <sshao@hortonworks.com> |
| 2015-08-21 13:10:11 -0700 |
| Commit: 4e72839, github.com/apache/spark/pull/8347 |
| |
| [SPARK-10130] [SQL] type coercion for IF should have children resolved first |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-08-21 12:21:51 -0700 |
| Commit: 817c38a, github.com/apache/spark/pull/8331 |
| |
| [SPARK-9846] [DOCS] User guide for Multilayer Perceptron Classifier |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-08-20 20:02:27 -0700 |
| Commit: e5e6017, github.com/apache/spark/pull/8262 |
| |
| [SPARK-10140] [DOC] add target fields to @Since |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-20 20:01:13 -0700 |
| Commit: 04ef52a, github.com/apache/spark/pull/8344 |
| |
| Preparing development version 1.5.1-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 16:24:12 -0700 |
| Commit: 988e838 |
| |
| Preparing Spark release v1.5.0-rc1 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 16:24:07 -0700 |
| Commit: 4c56ad7 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 15:33:10 -0700 |
| Commit: 175c1d9 |
| |
| Preparing Spark release v1.5.0-rc1 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 15:33:04 -0700 |
| Commit: d837d51 |
| |
| [SPARK-9245] [MLLIB] LDA topic assignments |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-20 15:01:31 -0700 |
| Commit: 2beea65, github.com/apache/spark/pull/8329 |
| |
| [SPARK-10108] Add since tags to mllib.feature |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-20 14:56:08 -0700 |
| Commit: 560ec12, github.com/apache/spark/pull/8309 |
| |
| [SPARK-10138] [ML] move setters to MultilayerPerceptronClassifier and add Java test suite |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-20 14:47:04 -0700 |
| Commit: 2e0d2a9, github.com/apache/spark/pull/8342 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 12:43:13 -0700 |
| Commit: eac31ab |
| |
| Preparing Spark release v1.5.0-rc1 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 12:43:08 -0700 |
| Commit: 99eeac8 |
| |
| [SPARK-10126] [PROJECT INFRA] Fix typo in release-build.sh which broke snapshot publishing for Scala 2.11 |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-20 11:31:03 -0700 |
| Commit: 6026f4f, github.com/apache/spark/pull/8325 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 11:06:41 -0700 |
| Commit: a1785e3 |
| |
| Preparing Spark release v1.5.0-rc1 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-20 11:06:31 -0700 |
| Commit: 19b92c8 |
| |
| [SPARK-10136] [SQL] Fixes Parquet support for Avro array of primitive array |
| Cheng Lian <lian@databricks.com> |
| 2015-08-20 11:00:24 -0700 |
| Commit: 2f47e09, github.com/apache/spark/pull/8341 |
| |
| [SPARK-9982] [SPARKR] SparkR DataFrame fail to return data of Decimal type |
| Alex Shkurenko <ashkurenko@enova.com> |
| 2015-08-20 10:16:38 -0700 |
| Commit: a7027e6, github.com/apache/spark/pull/8239 |
| |
| [MINOR] [SQL] Fix sphinx warnings in PySpark SQL |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-20 10:05:31 -0700 |
| Commit: 257e9d7, github.com/apache/spark/pull/8171 |
| |
| [SPARK-10100] [SQL] Eliminate hash table lookup if there is no grouping key in aggregation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-20 07:53:27 -0700 |
| Commit: 5be5175, github.com/apache/spark/pull/8332 |
| |
| [SPARK-10092] [SQL] Backports #8324 to branch-1.5 |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-20 18:43:24 +0800 |
| Commit: 675e224, github.com/apache/spark/pull/8336 |
| |
| [SPARK-10128] [STREAMING] Used correct classloader to deserialize WAL data |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-19 21:15:58 -0700 |
| Commit: 71aa547, github.com/apache/spark/pull/8328 |
| |
| [SPARK-10125] [STREAMING] Fix a potential deadlock in JobGenerator.stop |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-19 19:43:09 -0700 |
| Commit: 63922fa, github.com/apache/spark/pull/8326 |
| |
| [SPARK-10124] [MESOS] Fix removing queued driver in mesos cluster mode. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-08-19 19:43:26 -0700 |
| Commit: a3ed2c3, github.com/apache/spark/pull/8322 |
| |
| [SPARK-9812] [STREAMING] Fix Python 3 compatibility issue in PySpark Streaming and some docs |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-19 18:36:01 -0700 |
| Commit: 16414da, github.com/apache/spark/pull/8315 |
| |
| [SPARK-9242] [SQL] Audit UDAF interface. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-19 17:35:41 -0700 |
| Commit: 321cb99, github.com/apache/spark/pull/8321 |
| |
| [SPARK-9895] User Guide for RFormula Feature Transformer |
| Eric Liang <ekl@databricks.com> |
| 2015-08-19 15:43:08 -0700 |
| Commit: 56a37b0, github.com/apache/spark/pull/8293 |
| |
| [SPARK-6489] [SQL] add column pruning for Generate |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-19 15:04:56 -0700 |
| Commit: 5c749c8, github.com/apache/spark/pull/8268 |
| |
| [SPARK-10119] [CORE] Fix isDynamicAllocationEnabled when config is explicitly disabled. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-19 14:33:32 -0700 |
| Commit: a59475f, github.com/apache/spark/pull/8316 |
| |
| [SPARK-10083] [SQL] CaseWhen should support type coercion of DecimalType and FractionalType |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-08-19 14:31:51 -0700 |
| Commit: 1494d58, github.com/apache/spark/pull/8270 |
| |
| [SPARK-9899] [SQL] Disables customized output committer when speculation is on |
| Cheng Lian <lian@databricks.com> |
| 2015-08-19 14:15:28 -0700 |
| Commit: b32a31d, github.com/apache/spark/pull/8317 |
| |
| [SPARK-10090] [SQL] fix decimal scale of division |
| Davies Liu <davies@databricks.com> |
| 2015-08-19 14:03:47 -0700 |
| Commit: d9dfd43, github.com/apache/spark/pull/8287 |
| |
| [SPARK-9627] [SQL] Stops using Scala runtime reflection in DictionaryEncoding |
| Cheng Lian <lian@databricks.com> |
| 2015-08-19 13:57:52 -0700 |
| Commit: 77269fc, github.com/apache/spark/pull/8306 |
| |
| [SPARK-10073] [SQL] Python withColumn should replace the old column |
| Davies Liu <davies@databricks.com> |
| 2015-08-19 13:56:40 -0700 |
| Commit: afaed7e, github.com/apache/spark/pull/8300 |
| |
| [SPARK-10087] [CORE] [BRANCH-1.5] Disable spark.shuffle.reduceLocality.enabled by default. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-19 13:43:46 -0700 |
| Commit: 829c33a, github.com/apache/spark/pull/8296 |
| |
| [SPARK-10107] [SQL] fix NPE in format_number |
| Davies Liu <davies@databricks.com> |
| 2015-08-19 13:43:04 -0700 |
| Commit: 1038f67, github.com/apache/spark/pull/8305 |
| |
| [SPARK-8918] [MLLIB] [DOC] Add @since tags to mllib.clustering |
| Xiangrui Meng <meng@databricks.com>, Xiaoqing Wang <spark445@126.com>, MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-19 13:17:26 -0700 |
| Commit: 8c0a5a2, github.com/apache/spark/pull/8256 |
| |
| [SPARK-10106] [SPARKR] Add `ifelse` Column function to SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-19 12:39:37 -0700 |
| Commit: ba36925, github.com/apache/spark/pull/8303 |
| |
| [SPARK-10097] Adds `shouldMaximize` flag to `ml.evaluation.Evaluator` |
| Feynman Liang <fliang@databricks.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-19 11:35:05 -0700 |
| Commit: f25c324, github.com/apache/spark/pull/8290 |
| |
| [SPARK-9856] [SPARKR] Add expression functions into SparkR whose params are complicated |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-19 10:41:14 -0700 |
| Commit: a8e8808, github.com/apache/spark/pull/8264 |
| |
| [SPARK-10084] [MLLIB] [DOC] Add Python example for mllib FP-growth user guide |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-19 08:53:34 -0700 |
| Commit: bebe63d, github.com/apache/spark/pull/8279 |
| |
| [SPARK-10060] [ML] [DOC] spark.ml DecisionTree user guide |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-19 07:38:27 -0700 |
| Commit: f8dc427, github.com/apache/spark/pull/8244 |
| |
| [SPARK-8949] Print warnings when using preferred locations feature |
| Han JU <ju.han.felix@gmail.com> |
| 2015-08-19 13:04:16 +0100 |
| Commit: 522b0b6, github.com/apache/spark/pull/7874 |
| |
| [SPARK-9977] [DOCS] Update documentation for StringIndexer |
| lewuathe <lewuathe@me.com> |
| 2015-08-19 09:54:03 +0100 |
| Commit: 5553f02, github.com/apache/spark/pull/8205 |
| |
| [DOCS] [SQL] [PYSPARK] Fix typo in ntile function |
| Moussa Taifi <moutai10@gmail.com> |
| 2015-08-19 09:42:41 +0100 |
| Commit: e56bcc6, github.com/apache/spark/pull/8261 |
| |
| [SPARK-10070] [DOCS] Remove Guava dependencies in user guides |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-19 09:41:09 +0100 |
| Commit: 561390d, github.com/apache/spark/pull/8272 |
| |
| Fix Broken Link |
| Bill Chambers <wchambers@ischool.berkeley.edu> |
| 2015-08-19 00:05:01 -0700 |
| Commit: 417852f, github.com/apache/spark/pull/8302 |
| |
| [SPARK-9967] [SPARK-10099] [STREAMING] Renamed conf spark.streaming.backpressure.{enable-->enabled} and fixed deprecated annotations |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-18 23:37:57 -0700 |
| Commit: 392bd19, github.com/apache/spark/pull/8299 |
| |
| [SPARK-9952] Fix N^2 loop when DAGScheduler.getPreferredLocsInternal accesses cacheLocs |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-18 22:30:13 -0700 |
| Commit: 3ceee55, github.com/apache/spark/pull/8178 |
| |
| [SPARK-9508] GraphX Pregel docs update with new Pregel code |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-08-18 22:13:52 -0700 |
| Commit: 4163926, github.com/apache/spark/pull/7831 |
| |
| [SPARK-9705] [DOC] fix docs about Python version |
| Davies Liu <davies@databricks.com> |
| 2015-08-18 22:11:27 -0700 |
| Commit: 03a8a88, github.com/apache/spark/pull/8245 |
| |
| [SPARK-10093] [SPARK-10096] [SQL] Avoid transformation on executors & fix UDFs on complex types |
| Reynold Xin <rxin@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-08-18 22:08:15 -0700 |
| Commit: 3c33931, github.com/apache/spark/pull/8295 |
| |
| [SPARK-10095] [SQL] use public API of BigInteger |
| Davies Liu <davies@databricks.com> |
| 2015-08-18 20:39:59 -0700 |
| Commit: 11c9335, github.com/apache/spark/pull/8286 |
| |
| [SPARK-10075] [SPARKR] Add `when` expression function in SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-18 20:27:36 -0700 |
| Commit: ebaeb18, github.com/apache/spark/pull/8266 |
| |
| [SPARK-9939] [SQL] Resorts to Java process API in CliSuite, HiveSparkSubmitSuite and HiveThriftServer2 test suites |
| Cheng Lian <lian@databricks.com> |
| 2015-08-19 11:21:46 +0800 |
| Commit: bb2fb59, github.com/apache/spark/pull/8168 |
| |
| [SPARK-10102] [STREAMING] Fix a race condition that startReceiver may happen before setting trackerState to Started |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-18 20:15:54 -0700 |
| Commit: a6f8979, github.com/apache/spark/pull/8294 |
| |
| [SPARK-10072] [STREAMING] BlockGenerator can deadlock when the queue of generate blocks fills up to capacity |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-18 19:26:38 -0700 |
| Commit: 08c5962, github.com/apache/spark/pull/8257 |
| |
| [SPARKR] [MINOR] Get rid of a long line warning |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-18 19:18:05 -0700 |
| Commit: 0a1385e, github.com/apache/spark/pull/8297 |
| |
| Bump SparkR version string to 1.5.0 |
| Hossein <hossein@databricks.com> |
| 2015-08-18 18:02:22 -0700 |
| Commit: 9b42e24, github.com/apache/spark/pull/8291 |
| |
| [SPARK-8473] [SPARK-9889] [ML] User guide and example code for DCT |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-18 17:54:49 -0700 |
| Commit: 4ee225a, github.com/apache/spark/pull/8184 |
| |
| [SPARK-10098] [STREAMING] [TEST] Cleanup active context after test in FailureSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-18 17:00:13 -0700 |
| Commit: e1b50c7, github.com/apache/spark/pull/8289 |
| |
| [SPARK-10012] [ML] Missing test case for Params#arrayLengthGt |
| lewuathe <lewuathe@me.com> |
| 2015-08-18 15:30:23 -0700 |
| Commit: fb207b2, github.com/apache/spark/pull/8223 |
| |
| [SPARK-8924] [MLLIB, DOCUMENTATION] Added @since tags to mllib.tree |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-08-18 14:58:30 -0700 |
| Commit: 56f4da2, github.com/apache/spark/pull/7380 |
| |
| [SPARK-10088] [SQL] Add support for "stored as avro" in HiveQL parser. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-18 14:45:19 -0700 |
| Commit: 8b0df5a, github.com/apache/spark/pull/8282 |
| |
| [SPARK-10089] [SQL] Add missing golden files. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-18 14:43:05 -0700 |
| Commit: 74a6b1a, github.com/apache/spark/pull/8283 |
| |
| [SPARK-10080] [SQL] Fix binary incompatibility for $ column interpolation |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-18 13:50:51 -0700 |
| Commit: 80a6fb5, github.com/apache/spark/pull/8281 |
| |
| [SPARK-9574] [STREAMING] Remove unnecessary contents of spark-streaming-XXX-assembly jars |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-18 13:35:45 -0700 |
| Commit: 2bccd91, github.com/apache/spark/pull/8069 |
| |
| [SPARK-10085] [MLLIB] [DOCS] removed unnecessary numpy array import |
| Piotr Migdal <pmigdal@gmail.com> |
| 2015-08-18 12:59:28 -0700 |
| Commit: 9bd2e6f, github.com/apache/spark/pull/8284 |
| |
| [SPARK-10032] [PYSPARK] [DOC] Add Python example for mllib LDAModel user guide |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-18 12:56:36 -0700 |
| Commit: ec7079f, github.com/apache/spark/pull/8227 |
| |
| [SPARK-10029] [MLLIB] [DOC] Add Python examples for mllib IsotonicRegression user guide |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-18 12:55:36 -0700 |
| Commit: 80debff, github.com/apache/spark/pull/8225 |
| |
| [SPARK-9900] [MLLIB] User guide for Association Rules |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-18 12:53:57 -0700 |
| Commit: 7ff0e5d, github.com/apache/spark/pull/8207 |
| |
| [SPARK-9028] [ML] Add CountVectorizer as an estimator to generate CountVectorizerModel |
| Yuhao Yang <hhbyyh@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-18 11:00:09 -0700 |
| Commit: b86378c, github.com/apache/spark/pull/7388 |
| |
| [SPARK-10007] [SPARKR] Update `NAMESPACE` file in SparkR for simple parameters functions |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-18 09:10:59 -0700 |
| Commit: 20a760a, github.com/apache/spark/pull/8277 |
| |
| [SPARK-8118] [SQL] Redirects Parquet JUL logger via SLF4J |
| Cheng Lian <lian@databricks.com> |
| 2015-08-18 20:15:33 +0800 |
| Commit: a512250, github.com/apache/spark/pull/8196 |
| |
| [MINOR] fix the comments in IndexShuffleBlockResolver |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-08-18 10:31:11 +0100 |
| Commit: 42a0b48, github.com/apache/spark/pull/8238 |
| |
| [SPARK-10076] [ML] make MultilayerPerceptronClassifier layers and weights public |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-17 23:57:02 -0700 |
| Commit: 40b89c3, github.com/apache/spark/pull/8263 |
| |
| [SPARK-10038] [SQL] fix bug in generated unsafe projection when there is binary in ArrayData |
| Davies Liu <davies@databricks.com> |
| 2015-08-17 23:27:55 -0700 |
| Commit: e5fbe4f, github.com/apache/spark/pull/8250 |
| |
| [MINOR] Format the comment of `translate` at `functions.scala` |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-17 23:27:11 -0700 |
| Commit: 2803e8b, github.com/apache/spark/pull/8265 |
| |
| [SPARK-7808] [ML] add package doc for ml.feature |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-17 19:40:51 -0700 |
| Commit: 3554250, github.com/apache/spark/pull/8260 |
| |
| [SPARK-10059] [YARN] Explicitly add JSP dependencies for tests. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-17 19:35:35 -0700 |
| Commit: bfb4c84, github.com/apache/spark/pull/8251 |
| |
| [SPARK-9902] [MLLIB] Add Java and Python examples to user guide for 1-sample KS test |
| jose.cambronero <jose.cambronero@cloudera.com> |
| 2015-08-17 19:09:45 -0700 |
| Commit: 9740d43, github.com/apache/spark/pull/8154 |
| |
| [SPARK-7707] User guide and example code for KernelDensity |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-08-17 17:57:51 -0700 |
| Commit: 5de0ffb, github.com/apache/spark/pull/8230 |
| |
| [SPARK-9898] [MLLIB] Prefix Span user guide |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-17 17:53:24 -0700 |
| Commit: 18b3d11, github.com/apache/spark/pull/8253 |
| |
| [SPARK-8916] [DOCUMENTATION, MLLIB] Add @since tags to mllib.regression |
| Prayag Chandran <prayagchandran@gmail.com> |
| 2015-08-17 17:26:08 -0700 |
| Commit: f5ed9ed, github.com/apache/spark/pull/7518 |
| |
| [SPARK-9768] [PYSPARK] [ML] Add Python API and user guide for ml.feature.ElementwiseProduct |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-17 17:25:41 -0700 |
| Commit: eaeebb9, github.com/apache/spark/pull/8061 |
| |
| [SPARK-9974] [BUILD] [SQL] Makes sure com.twitter:parquet-hadoop-bundle:1.6.0 is in SBT assembly jar |
| Cheng Lian <lian@databricks.com> |
| 2015-08-17 17:25:14 -0700 |
| Commit: 407175e, github.com/apache/spark/pull/8198 |
| |
| [SPARK-8920] [MLLIB] Add @since tags to mllib.linalg |
| Sameer Abhyankar <sabhyankar@sabhyankar-MBP.Samavihome>, Sameer Abhyankar <sabhyankar@sabhyankar-MBP.local> |
| 2015-08-17 16:00:23 -0700 |
| Commit: 0f1417b, github.com/apache/spark/pull/7729 |
| |
| [SPARK-10068] [MLLIB] Adds links to MLlib types, algos, utilities listing |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-17 15:42:14 -0700 |
| Commit: bb3bb2a, github.com/apache/spark/pull/8255 |
| |
| [SPARK-9592] [SQL] Fix Last function implemented based on AggregateExpression1. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-17 15:30:50 -0700 |
| Commit: f77eaaf, github.com/apache/spark/pull/8172 |
| |
| [SPARK-9526] [SQL] Utilize randomized tests to reveal potential bugs in sql expressions |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-17 14:10:19 -0700 |
| Commit: 24765cc, github.com/apache/spark/pull/7855 |
| |
| [SPARK-10036] [SQL] Load JDBC driver in DataFrameReader.jdbc and DataFrameWriter.jdbc |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-17 11:53:33 -0700 |
| Commit: 4daf79f, github.com/apache/spark/pull/8232 |
| |
| [SPARK-9950] [SQL] Wrong Analysis Error for grouping/aggregating on struct fields |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-17 11:36:18 -0700 |
| Commit: 76390ec, github.com/apache/spark/pull/8222 |
| |
| [SPARK-7837] [SQL] Avoids double closing output writers when commitTask() fails |
| Cheng Lian <lian@databricks.com> |
| 2015-08-18 00:59:05 +0800 |
| Commit: 7279445, github.com/apache/spark/pull/8236 |
| |
| [SPARK-9959] [MLLIB] Association Rules Java Compatibility |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-17 09:58:34 -0700 |
| Commit: d554bf4, github.com/apache/spark/pull/8206 |
| |
| [SPARK-9871] [SPARKR] Add expression functions into SparkR which have a variable parameter |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-16 23:33:20 -0700 |
| Commit: 78275c4, github.com/apache/spark/pull/8194 |
| |
| [SPARK-10005] [SQL] Fixes schema merging for nested structs |
| Cheng Lian <lian@databricks.com> |
| 2015-08-16 10:17:58 -0700 |
| Commit: 90245f6, github.com/apache/spark/pull/8228 |
| |
| [SPARK-9973] [SQL] Correct in-memory columnar buffer size |
| Kun Xu <viper_kun@163.com> |
| 2015-08-16 14:44:23 +0800 |
| Commit: e2c6ef8, github.com/apache/spark/pull/8189 |
| |
| [SPARK-10008] Ensure shuffle locality doesn't take precedence over narrow deps |
| Matei Zaharia <matei@databricks.com> |
| 2015-08-16 00:34:58 -0700 |
| Commit: fa55c27, github.com/apache/spark/pull/8220 |
| |
| [SPARK-8844] [SPARKR] head/collect is broken in SparkR. |
| Sun Rui <rui.sun@intel.com> |
| 2015-08-16 00:30:02 -0700 |
| Commit: 4f75ce2, github.com/apache/spark/pull/7419 |
| |
| [SPARK-9805] [MLLIB] [PYTHON] [STREAMING] Added _eventually for ml streaming pyspark tests |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-15 18:48:20 -0700 |
| Commit: 881baf1, github.com/apache/spark/pull/8087 |
| |
| [SPARK-9955] [SQL] correct error message for aggregate |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-15 14:13:12 -0700 |
| Commit: 2fda1d8, github.com/apache/spark/pull/8203 |
| |
| [SPARK-9980] [BUILD] Fix SBT publishLocal error due to invalid characters in doc |
| Herman van Hovell <hvanhovell@questtec.nl> |
| 2015-08-15 10:46:04 +0100 |
| Commit: 1a6f0af, github.com/apache/spark/pull/8209 |
| |
| [SPARK-9725] [SQL] fix serialization of UTF8String across different JVM |
| Davies Liu <davies@databricks.com> |
| 2015-08-14 22:30:35 -0700 |
| Commit: d97af68, github.com/apache/spark/pull/8210 |
| |
| [SPARK-9960] [GRAPHX] sendMessage type fix in LabelPropagation.scala |
| zc he <farseer90718@gmail.com> |
| 2015-08-14 21:28:50 -0700 |
| Commit: 3301500, github.com/apache/spark/pull/8188 |
| |
| [SPARK-9634] [SPARK-9323] [SQL] cleanup unnecessary Aliases in LogicalPlan at the end of analysis |
| Wenchen Fan <cloud0fan@outlook.com>, Michael Armbrust <michael@databricks.com> |
| 2015-08-14 20:59:54 -0700 |
| Commit: 83cbf60, github.com/apache/spark/pull/8215 |
| |
| [HOTFIX] fix duplicated braces |
| Davies Liu <davies@databricks.com> |
| 2015-08-14 20:56:55 -0700 |
| Commit: 3cdeeaf, github.com/apache/spark/pull/8219 |
| |
| [SPARK-9934] Deprecate NIO ConnectionManager. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-14 20:55:32 -0700 |
| Commit: d842917, github.com/apache/spark/pull/8162 |
| |
| [SPARK-9949] [SQL] Fix TakeOrderedAndProject's output. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-14 17:35:17 -0700 |
| Commit: 6be945c, github.com/apache/spark/pull/8179 |
| |
| [SPARK-9968] [STREAMING] Reduced time spent within synchronized block to prevent lock starvation |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-14 15:54:14 -0700 |
| Commit: 8d26247, github.com/apache/spark/pull/8204 |
| |
| [SPARK-9966] [STREAMING] Handle couple of corner cases in PIDRateEstimator |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-14 15:10:01 -0700 |
| Commit: 612b460, github.com/apache/spark/pull/8199 |
| |
| [SPARK-8670] [SQL] Nested columns can't be referenced in pyspark |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-14 14:09:46 -0700 |
| Commit: 5bbb2d3, github.com/apache/spark/pull/8202 |
| |
| [SPARK-9981] [ML] Made labels public for StringIndexerModel |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-14 14:05:03 -0700 |
| Commit: 0f4ccdc, github.com/apache/spark/pull/8211 |
| |
| [SPARK-9978] [PYSPARK] [SQL] fix Window.orderBy and doc of ntile() |
| Davies Liu <davies@databricks.com> |
| 2015-08-14 13:55:29 -0700 |
| Commit: 59cdcc0, github.com/apache/spark/pull/8213 |
| |
| [SPARK-9877] [CORE] Fix StandaloneRestServer NPE when submitting application |
| jerryshao <sshao@hortonworks.com> |
| 2015-08-14 13:44:38 -0700 |
| Commit: 130e06e, github.com/apache/spark/pull/8127 |
| |
| [SPARK-9948] Fix flaky AccumulatorSuite - internal accumulators |
| Andrew Or <andrew@databricks.com> |
| 2015-08-14 13:42:53 -0700 |
| Commit: 1ce0b01, github.com/apache/spark/pull/8176 |
| |
| [SPARK-9809] Task crashes because the internal accumulators are not properly initialized |
| Carson Wang <carson.wang@intel.com> |
| 2015-08-14 13:38:25 -0700 |
| Commit: ff3e956, github.com/apache/spark/pull/8090 |
| |
| [SPARK-9828] [PYSPARK] Mutable values should not be default arguments |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-14 12:46:05 -0700 |
| Commit: d92568a, github.com/apache/spark/pull/8110 |
| |
| [SPARK-9561] Re-enable BroadcastJoinSuite |
| Andrew Or <andrew@databricks.com> |
| 2015-08-14 12:37:21 -0700 |
| Commit: b284213, github.com/apache/spark/pull/8208 |
| |
| [SPARK-9946] [SPARK-9589] [SQL] fix NPE and thread-safety in TaskMemoryManager |
| Davies Liu <davies@databricks.com> |
| 2015-08-14 12:32:35 -0700 |
| Commit: e2a288c, github.com/apache/spark/pull/8177 |
| |
| [SPARK-8744] [ML] Add a public constructor to StringIndexer |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-08-14 11:22:10 -0700 |
| Commit: e4ea239, github.com/apache/spark/pull/7267 |
| |
| [SPARK-9956] [ML] Make trees work with one-category features |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-14 10:48:02 -0700 |
| Commit: f5298da, github.com/apache/spark/pull/8187 |
| |
| [SPARK-9661] [MLLIB] minor clean-up of SPARK-9661 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-14 10:25:11 -0700 |
| Commit: 4aa9238, github.com/apache/spark/pull/8190 |
| |
| [SPARK-9958] [SQL] Make HiveThriftServer2Listener thread-safe and update the tab name to "JDBC/ODBC Server" |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-14 14:41:53 +0800 |
| Commit: a0d52eb, github.com/apache/spark/pull/8185 |
| |
| [MINOR] [SQL] Remove canEqual in Row |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-08-13 22:06:09 -0700 |
| Commit: 00ccb21, github.com/apache/spark/pull/8170 |
| |
| [SPARK-9945] [SQL] pageSize should be calculated from executor.memory |
| Davies Liu <davies@databricks.com> |
| 2015-08-13 21:12:59 -0700 |
| Commit: 703e3f1, github.com/apache/spark/pull/8175 |
| |
| [SPARK-9580] [SQL] Replace singletons in SQL tests |
| Andrew Or <andrew@databricks.com> |
| 2015-08-13 17:42:01 -0700 |
| Commit: 9df2a2d, github.com/apache/spark/pull/8111 |
| |
| [SPARK-9943] [SQL] deserialized UnsafeHashedRelation should be serializable |
| Davies Liu <davies@databricks.com> |
| 2015-08-13 17:35:11 -0700 |
| Commit: b318b11, github.com/apache/spark/pull/8174 |
| |
| [SPARK-8976] [PYSPARK] fix open mode in python3 |
| Davies Liu <davies@databricks.com> |
| 2015-08-13 17:33:37 -0700 |
| Commit: cadc3b7, github.com/apache/spark/pull/8181 |
| |
| [SPARK-9922] [ML] rename StringIndexerReverse to IndexToString |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-13 16:52:17 -0700 |
| Commit: 2b6b1d1, github.com/apache/spark/pull/8152 |
| |
| [SPARK-9942] [PYSPARK] [SQL] ignore exceptions while try to import pandas |
| Davies Liu <davies@databricks.com> |
| 2015-08-13 14:03:55 -0700 |
| Commit: 2c7f8da, github.com/apache/spark/pull/8173 |
| |
| [SPARK-9661] [MLLIB] [ML] Java compatibility |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-13 13:42:35 -0700 |
| Commit: 875ecc7, github.com/apache/spark/pull/8126 |
| |
| [SPARK-9649] Fix MasterSuite, third time's a charm |
| Andrew Or <andrew@databricks.com> |
| 2015-08-13 11:31:10 -0700 |
| Commit: 3046020 |
| |
| [MINOR] [DOC] fix mllib pydoc warnings |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-13 10:16:40 -0700 |
| Commit: 883c7d3, github.com/apache/spark/pull/8169 |
| |
| [MINOR] [ML] change MultilayerPerceptronClassifierModel to MultilayerPerceptronClassificationModel |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-13 09:31:14 -0700 |
| Commit: 2b13532, github.com/apache/spark/pull/8164 |
| |
| [SPARK-8965] [DOCS] Add ml-guide Python Example: Estimator, Transformer, and Param |
| Rosstin <asterazul@gmail.com> |
| 2015-08-13 09:18:39 -0700 |
| Commit: 49085b5, github.com/apache/spark/pull/8081 |
| |
| [SPARK-9073] [ML] spark.ml Models copy() should call setParent when there is a parent |
| lewuathe <lewuathe@me.com>, Lewuathe <lewuathe@me.com> |
| 2015-08-13 09:17:19 -0700 |
| Commit: fe05142, github.com/apache/spark/pull/7447 |
| |
| [SPARK-9757] [SQL] Fixes persistence of Parquet relation with decimal column |
| Yin Huai <yhuai@databricks.com>, Cheng Lian <lian@databricks.com> |
| 2015-08-13 16:16:50 +0800 |
| Commit: 5592d16, github.com/apache/spark/pull/8130 |
| |
| [SPARK-9885] [SQL] Also pass barrierPrefixes and sharedPrefixes to IsolatedClientLoader when hiveMetastoreJars is set to maven. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-13 15:08:57 +0800 |
| Commit: 2a600da, github.com/apache/spark/pull/8158 |
| |
| [SPARK-9918] [MLLIB] remove runs from k-means and rename epsilon to tol |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 23:04:59 -0700 |
| Commit: ae18342, github.com/apache/spark/pull/8148 |
| |
| [SPARK-9914] [ML] define setters explicitly for Java and use setParam group in RFormula |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 22:30:33 -0700 |
| Commit: d213aa7, github.com/apache/spark/pull/8143 |
| |
| [SPARK-9927] [SQL] Revert 8049 since it's pushing wrong filter down |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-13 13:33:39 +0800 |
| Commit: 694e7a3, github.com/apache/spark/pull/8157 |
| |
| [SPARK-8922] [DOCUMENTATION, MLLIB] Add @since tags to mllib.evaluation |
| shikai.tang <tar.sky06@gmail.com> |
| 2015-08-12 21:53:15 -0700 |
| Commit: 6902840, github.com/apache/spark/pull/7429 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-12 21:43:13 -0700 |
| Commit: 8f055e5 |
| |
| Preparing Spark release v1.5.0-preview-20150812 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-12 21:42:59 -0700 |
| Commit: cedce9b |
| |
| [SPARK-9917] [ML] add getMin/getMax and doc for originalMin/originalMax in MinMaxScaler |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 21:33:38 -0700 |
| Commit: 16f4bf4, github.com/apache/spark/pull/8145 |
| |
| [SPARK-9832] [SQL] add a thread-safe lookup for BytesToBytesMap |
| Davies Liu <davies@databricks.com> |
| 2015-08-12 21:26:00 -0700 |
| Commit: 8229437, github.com/apache/spark/pull/8151 |
| |
| [SPARK-9920] [SQL] The simpleString of TungstenAggregate does not show its output |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-12 21:24:15 -0700 |
| Commit: 3b1b8ea, github.com/apache/spark/pull/8150 |
| |
| [SPARK-9916] [BUILD] [SPARKR] removed left-over sparkr.zip copy/create commands from codebase |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-08-12 20:59:38 -0700 |
| Commit: 3d1b9f0, github.com/apache/spark/pull/8147 |
| |
| [SPARK-9903] [MLLIB] skip local processing in PrefixSpan if there are no small prefixes |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 20:44:40 -0700 |
| Commit: af470a7, github.com/apache/spark/pull/8136 |
| |
| [SPARK-9704] [ML] Made ProbabilisticClassifier, Identifiable, VectorUDT public APIs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-12 20:43:36 -0700 |
| Commit: a06860c, github.com/apache/spark/pull/8004 |
| |
| [SPARK-9199] [CORE] Update Tachyon dependency from 0.7.0 -> 0.7.1. |
| Calvin Jia <jia.calvin@gmail.com> |
| 2015-08-12 20:07:37 -0700 |
| Commit: c182dc4, github.com/apache/spark/pull/8135 |
| |
| [SPARK-9908] [SQL] When spark.sql.tungsten.enabled is false, broadcast join does not work |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-12 20:03:55 -0700 |
| Commit: 71ea61f, github.com/apache/spark/pull/8149 |
| |
| [SPARK-9827] [SQL] fix fd leak in UnsafeRowSerializer |
| Davies Liu <davies@databricks.com> |
| 2015-08-12 20:02:55 -0700 |
| Commit: eebb3f9, github.com/apache/spark/pull/8116 |
| |
| [SPARK-9870] Disable driver UI and Master REST server in SparkSubmitSuite |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-12 18:52:11 -0700 |
| Commit: 4b547b9, github.com/apache/spark/pull/8124 |
| |
| [SPARK-9855] [SPARKR] Add expression functions into SparkR whose params are simple |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-12 18:33:27 -0700 |
| Commit: ca39c9e, github.com/apache/spark/pull/8123 |
| |
| [SPARK-9780] [STREAMING] [KAFKA] prevent NPE if KafkaRDD instantiation … |
| cody koeninger <cody@koeninger.org> |
| 2015-08-12 17:44:16 -0700 |
| Commit: 62ab2a4, github.com/apache/spark/pull/8133 |
| |
| [SPARK-9449] [SQL] Include MetastoreRelation's inputFiles |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-12 17:07:29 -0700 |
| Commit: 3298fb6, github.com/apache/spark/pull/8119 |
| |
| [SPARK-9915] [ML] stopWords should use StringArrayParam |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 17:06:12 -0700 |
| Commit: ed73f54, github.com/apache/spark/pull/8141 |
| |
| [SPARK-9912] [MLLIB] QRDecomposition should use QType and RType for type names instead of UType and VType |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 17:04:31 -0700 |
| Commit: 31b7fdc, github.com/apache/spark/pull/8140 |
| |
| [SPARK-9909] [ML] [TRIVIAL] move weightCol to shared params |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-08-12 16:54:45 -0700 |
| Commit: 2f8793b, github.com/apache/spark/pull/8144 |
| |
| [SPARK-9913] [MLLIB] LDAUtils should be private |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 16:53:47 -0700 |
| Commit: 6aca0cf, github.com/apache/spark/pull/8142 |
| |
| [SPARK-9894] [SQL] Json writer should handle MapData. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-12 16:45:15 -0700 |
| Commit: 08f767a, github.com/apache/spark/pull/8137 |
| |
| [SPARK-9826] [CORE] Fix cannot use custom classes in log4j.properties |
| michellemay <mlemay@gmail.com> |
| 2015-08-12 16:17:58 -0700 |
| Commit: 74c9dce, github.com/apache/spark/pull/8109 |
| |
| [SPARK-9092] Fixed incompatibility when both num-executors and dynamic... |
| Niranjan Padmanabhan <niranjan.padmanabhan@cloudera.com> |
| 2015-08-12 16:10:21 -0700 |
| Commit: 8537e51, github.com/apache/spark/pull/7657 |
| |
| [SPARK-9907] [SQL] Python crc32 is mistakenly calling md5 |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-12 15:27:52 -0700 |
| Commit: b28295f, github.com/apache/spark/pull/8138 |
| |
| [SPARK-8967] [DOC] add Since annotation |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-12 14:28:23 -0700 |
| Commit: 6a7582e, github.com/apache/spark/pull/8131 |
| |
| [SPARK-9789] [ML] Added logreg threshold param back |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-12 14:27:13 -0700 |
| Commit: bdf8dc1, github.com/apache/spark/pull/8079 |
| |
| [SPARK-9766] [ML] [PySpark] check and add missing docs for PySpark ML |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-12 13:24:18 -0700 |
| Commit: 65b5b21, github.com/apache/spark/pull/8059 |
| |
| [SPARK-9726] [PYTHON] PySpark DF join no longer accepts on=None |
| Brennan Ashton <bashton@brennanashton.com> |
| 2015-08-12 11:57:30 -0700 |
| Commit: 8629c33, github.com/apache/spark/pull/8016 |
| |
| [SPARK-9847] [ML] Modified copyValues to distinguish between default, explicit param values |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-12 10:48:52 -0700 |
| Commit: b515f89, github.com/apache/spark/pull/8115 |
| |
| [SPARK-9804] [HIVE] Use correct value for isSrcLocal parameter. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-12 10:38:30 -0700 |
| Commit: e9641f1, github.com/apache/spark/pull/8086 |
| |
| [SPARK-9747] [SQL] Avoid starving an unsafe operator in aggregation |
| Andrew Or <andrew@databricks.com> |
| 2015-08-12 10:08:35 -0700 |
| Commit: 4c6b129, github.com/apache/spark/pull/8038 |
| |
| [SPARK-7583] [MLLIB] User guide update for RegexTokenizer |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-08-12 09:35:32 -0700 |
| Commit: 2d86fad, github.com/apache/spark/pull/7828 |
| |
| [SPARK-9795] Dynamic allocation: avoid double counting when killing same executor twice |
| Andrew Or <andrew@databricks.com> |
| 2015-08-12 09:24:50 -0700 |
| Commit: bc4ac65, github.com/apache/spark/pull/8078 |
| |
| [SPARK-8625] [CORE] Propagate user exceptions in tasks back to driver |
| Tom White <tom@cloudera.com> |
| 2015-08-12 10:06:27 -0500 |
| Commit: 0579f28, github.com/apache/spark/pull/7014 |
| |
| [SPARK-9407] [SQL] Relaxes Parquet ValidTypeMap to allow ENUM predicates to be pushed down |
| Cheng Lian <lian@databricks.com> |
| 2015-08-12 20:01:34 +0800 |
| Commit: 5e6fdc6, github.com/apache/spark/pull/8107 |
| |
| [SPARK-9182] [SQL] Filters are not passed through to jdbc source |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-12 19:54:00 +0800 |
| Commit: 8e32db9, github.com/apache/spark/pull/8049 |
| |
| [SPARK-9575] [MESOS] Add documentation around Mesos shuffle service. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-08-11 23:33:22 -0700 |
| Commit: 5dd0c5c, github.com/apache/spark/pull/7907 |
| |
| [SPARK-8798] [MESOS] Allow additional uris to be fetched with mesos |
| Timothy Chen <tnachen@gmail.com> |
| 2015-08-11 23:26:33 -0700 |
| Commit: a2f8057, github.com/apache/spark/pull/7195 |
| |
| [SPARK-9426] [WEBUI] Job page DAG visualization is not shown |
| Carson Wang <carson.wang@intel.com> |
| 2015-08-11 23:25:02 -0700 |
| Commit: 93fc959, github.com/apache/spark/pull/8104 |
| |
| [SPARK-9829] [WEBUI] Display the update value for peak execution memory |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-11 23:23:17 -0700 |
| Commit: d9d4bde, github.com/apache/spark/pull/8121 |
| |
| [SPARK-9806] [WEB UI] Don't share ReplayListenerBus between multiple applications |
| Rohit Agarwal <rohita@qubole.com> |
| 2015-08-11 23:20:39 -0700 |
| Commit: 402c0ca, github.com/apache/spark/pull/8088 |
| |
| [SPARK-8366] maxNumExecutorsNeeded should properly handle failed tasks |
| xutingjun <xutingjun@huawei.com>, meiyoula <1039320815@qq.com> |
| 2015-08-11 23:19:35 -0700 |
| Commit: 2f90918, github.com/apache/spark/pull/6817 |
| |
| [SPARK-9854] [SQL] RuleExecutor.timeMap should be thread-safe |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-11 22:46:59 -0700 |
| Commit: b994f89, github.com/apache/spark/pull/8120 |
| |
| [SPARK-9831] [SQL] fix serialization with empty broadcast |
| Davies Liu <davies@databricks.com> |
| 2015-08-11 22:45:18 -0700 |
| Commit: 7024f3e, github.com/apache/spark/pull/8117 |
| |
| [SPARK-9713] [ML] Document SparkR MLlib glm() integration in Spark 1.5 |
| Eric Liang <ekl@databricks.com> |
| 2015-08-11 21:26:03 -0700 |
| Commit: 890c75b, github.com/apache/spark/pull/8085 |
| |
| [SPARK-1517] Refactor release scripts to facilitate nightly publishing |
| Patrick Wendell <patrick@databricks.com> |
| 2015-08-11 21:16:48 -0700 |
| Commit: 6ea33f5, github.com/apache/spark/pull/7411 |
| |
| [SPARK-9649] Fix flaky test MasterSuite again - disable REST |
| Andrew Or <andrew@databricks.com> |
| 2015-08-11 20:46:58 -0700 |
| Commit: 0119edf, github.com/apache/spark/pull/8084 |
| |
| [SPARK-9849] [SQL] DirectParquetOutputCommitter qualified name should be backward compatible |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-11 18:08:49 -0700 |
| Commit: ec7a4b9, github.com/apache/spark/pull/8114 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-11 18:07:34 -0700 |
| Commit: b7497e3 |
| |
| Preparing Spark release v1.5.0-snapshot-20150811 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-11 18:07:22 -0700 |
| Commit: 158b2ea |
| |
| [SPARK-9074] [LAUNCHER] Allow arbitrary Spark args to be set. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-11 16:33:08 -0700 |
| Commit: 18d78a8, github.com/apache/spark/pull/7975 |
| |
| [HOTFIX] Fix style error caused by ef961ed48a4f45447f0e0ad256b040c7ab2d78d9 |
| Andrew Or <andrew@databricks.com> |
| 2015-08-11 14:52:52 -0700 |
| Commit: 1067c73 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-11 14:32:43 -0700 |
| Commit: 725e5c7 |
| |
| Preparing Spark release v1.5.0-snapshot-20150811 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-11 14:32:37 -0700 |
| Commit: e9329ef |
| |
| [SPARK-8925] [MLLIB] Add @since tags to mllib.util |
| Sudhakar Thota <sudhakarthota@yahoo.com>, Sudhakar Thota <sudhakarthota@sudhakars-mbp-2.usca.ibm.com> |
| 2015-08-11 14:31:51 -0700 |
| Commit: ef961ed, github.com/apache/spark/pull/7436 |
| |
| [SPARK-9788] [MLLIB] Fix LDA Binary Compatibility |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-11 14:21:53 -0700 |
| Commit: 2273e74, github.com/apache/spark/pull/8077 |
| |
| [SPARK-9824] [CORE] Fix the issue that InternalAccumulator leaks WeakReference |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-11 14:06:23 -0700 |
| Commit: cdf781d, github.com/apache/spark/pull/8108 |
| |
| [SPARK-9814] [SQL] EqualNotNull not passing to data sources |
| hyukjinkwon <gurwls223@gmail.com>, 권혁진 <gurwls223@gmail.com> |
| 2015-08-11 14:04:09 -0700 |
| Commit: eead87e, github.com/apache/spark/pull/8096 |
| |
| [SPARK-7726] Add import so Scaladoc doesn't fail. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-08-11 14:02:23 -0700 |
| Commit: e9d1eab, github.com/apache/spark/pull/8095 |
| |
| [SPARK-9750] [MLLIB] Improve equals on SparseMatrix and DenseMatrix |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-11 12:49:47 -0700 |
| Commit: 811d23f, github.com/apache/spark/pull/8042 |
| |
| [SPARK-9646] [SQL] Add metrics for all join and aggregate operators |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-11 12:39:13 -0700 |
| Commit: 767ee18, github.com/apache/spark/pull/8060 |
| |
| [SPARK-9572] [STREAMING] [PYSPARK] Added StreamingContext.getActiveOrCreate() in Python |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-11 12:02:28 -0700 |
| Commit: 71460b8, github.com/apache/spark/pull/8080 |
| |
| Fix comment error |
| Jeff Zhang <zjffdu@apache.org> |
| 2015-08-11 10:42:17 -0700 |
| Commit: b077f36, github.com/apache/spark/pull/8097 |
| |
| [SPARK-9785] [SQL] HashPartitioning compatibility should consider expression ordering |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-11 08:52:15 -0700 |
| Commit: efcae3a, github.com/apache/spark/pull/8074 |
| |
| [SPARK-9815] Rename PlatformDependent.UNSAFE -> Platform. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-11 08:41:06 -0700 |
| Commit: 84ba990, github.com/apache/spark/pull/8094 |
| |
| [SPARK-9727] [STREAMING] [BUILD] Updated streaming kinesis SBT project name to be more consistent |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-11 02:41:03 -0700 |
| Commit: ebbd3b6, github.com/apache/spark/pull/8092 |
| |
| [SPARK-9640] [STREAMING] [TEST] Do not run Python Kinesis tests when the Kinesis assembly JAR has not been generated |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-10 23:41:53 -0700 |
| Commit: c7f0090, github.com/apache/spark/pull/7961 |
| |
| [SPARK-9729] [SPARK-9363] [SQL] Use sort merge join for left and right outer join |
| Josh Rosen <joshrosen@databricks.com>, Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-08-10 22:04:41 -0700 |
| Commit: f9beef9, github.com/apache/spark/pull/5717 |
| |
| [SPARK-9340] [SQL] Fixes converting unannotated Parquet lists |
| Cheng Lian <lian@databricks.com> |
| 2015-08-11 12:46:33 +0800 |
| Commit: 01efa4f, github.com/apache/spark/pull/8070 |
| |
| [SPARK-9801] [STREAMING] Check if file exists before deleting temporary files. |
| Hao Zhu <viadeazhu@gmail.com> |
| 2015-08-10 17:17:22 -0700 |
| Commit: 94692bb, github.com/apache/spark/pull/8082 |
| |
| [SPARK-5155] [PYSPARK] [STREAMING] Mqtt streaming support in Python |
| Prabeesh K <prabsmails@gmail.com>, zsxwing <zsxwing@gmail.com>, prabs <prabsmails@gmail.com>, Prabeesh K <prabeesh.k@namshi.com> |
| 2015-08-10 16:33:23 -0700 |
| Commit: 8f4014f, github.com/apache/spark/pull/4229 |
| |
| [SPARK-9737] [YARN] Add the suggested configuration when required executor memory is above the max threshold of this cluster on YARN mode |
| Yadong Qi <qiyadong2010@gmail.com> |
| 2015-08-09 19:54:05 +0100 |
| Commit: 51406be, github.com/apache/spark/pull/8028 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-10 14:26:56 -0700 |
| Commit: 0e4f58e |
| |
| Preparing Spark release v1.5.0-snapshot-20150810 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-10 14:26:49 -0700 |
| Commit: 3369ad9 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-10 13:56:56 -0700 |
| Commit: e51779c |
| |
| Preparing Spark release v1.5.0-snapshot-20150810 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-10 13:56:50 -0700 |
| Commit: 2203149 |
| |
| [SPARK-9759] [SQL] improve decimal.times() and cast(int, decimalType) |
| Davies Liu <davies@databricks.com> |
| 2015-08-10 13:55:11 -0700 |
| Commit: d17303a, github.com/apache/spark/pull/8052 |
| |
| [SPARK-9620] [SQL] generated UnsafeProjection should support many columns or large expressions |
| Davies Liu <davies@databricks.com> |
| 2015-08-10 13:52:18 -0700 |
| Commit: 2384248, github.com/apache/spark/pull/8044 |
| |
| [SPARK-9763] [SQL] Minimize exposure of internal SQL classes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-10 13:49:23 -0700 |
| Commit: c1838e4, github.com/apache/spark/pull/8056 |
| |
| [SPARK-9784] [SQL] Exchange.isUnsafe should check whether codegen and unsafe are enabled |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-10 13:05:03 -0700 |
| Commit: d251d9f, github.com/apache/spark/pull/8073 |
| |
| Fixed AtomicReference<> Example |
| Mahmoud Lababidi <lababidi@gmail.com> |
| 2015-08-10 13:02:01 -0700 |
| Commit: 39493b2, github.com/apache/spark/pull/8076 |
| |
| [SPARK-9755] [MLLIB] Add docs to MultivariateOnlineSummarizer methods |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-10 11:01:45 -0700 |
| Commit: 3ee2c8d, github.com/apache/spark/pull/8045 |
| |
| [SPARK-9743] [SQL] Fixes JSONRelation refreshing |
| Cheng Lian <lian@databricks.com> |
| 2015-08-10 09:07:08 -0700 |
| Commit: 94b2f5b, github.com/apache/spark/pull/8035 |
| |
| [SPARK-9777] [SQL] Window operator can accept UnsafeRows |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-09 22:33:53 -0700 |
| Commit: f75c64b, github.com/apache/spark/pull/8064 |
| |
| [CORE] [SPARK-9760] Use Option instead of Some for Ivy repos |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-08-09 14:30:30 -0700 |
| Commit: 0e0471d, github.com/apache/spark/pull/8055 |
| |
| [SPARK-9703] [SQL] Refactor EnsureRequirements to avoid certain unnecessary shuffles |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-09 14:26:01 -0700 |
| Commit: 323d686, github.com/apache/spark/pull/7988 |
| |
| [SPARK-8930] [SQL] Throw an AnalysisException with meaningful messages if DataFrame#explode takes a star in expressions |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-09 11:44:51 -0700 |
| Commit: 1ce5061, github.com/apache/spark/pull/8057 |
| |
| [SPARK-9752][SQL] Support UnsafeRow in Sample operator. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-09 10:58:36 -0700 |
| Commit: b12f073, github.com/apache/spark/pull/8040 |
| |
| [SPARK-6212] [SQL] The EXPLAIN output of CTAS only shows the analyzed plan |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-08 21:05:50 -0700 |
| Commit: 251d1ee, github.com/apache/spark/pull/7986 |
| |
| [MINOR] inaccurate comments for showString() |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-08-08 18:22:46 -0700 |
| Commit: 874b9d8, github.com/apache/spark/pull/8050 |
| |
| [SPARK-9486][SQL] Add data source aliasing for external packages |
| Joseph Batchik <joseph.batchik@cloudera.com>, Joseph Batchik <josephbatchik@gmail.com> |
| 2015-08-08 11:03:01 -0700 |
| Commit: 06b6234, github.com/apache/spark/pull/7802 |
| |
| [SPARK-9728][SQL]Support CalendarIntervalType in HiveQL |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-08 11:01:25 -0700 |
| Commit: 3c438c7, github.com/apache/spark/pull/8034 |
| |
| [SPARK-6902] [SQL] [PYSPARK] Row should be read-only |
| Davies Liu <davies@databricks.com> |
| 2015-08-08 08:38:18 -0700 |
| Commit: 3427f57, github.com/apache/spark/pull/8009 |
| |
| [SPARK-4561] [PYSPARK] [SQL] turn Row into dict recursively |
| Davies Liu <davies@databricks.com> |
| 2015-08-08 08:36:14 -0700 |
| Commit: aaa475c, github.com/apache/spark/pull/8006 |
| |
| [SPARK-9738] [SQL] remove FromUnsafe and add its codegen version to GenerateSafe |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-08 08:33:14 -0700 |
| Commit: 3ed219f, github.com/apache/spark/pull/8029 |
| |
| [SPARK-4176] [SQL] [MINOR] Should use unscaled Long to write decimals for precision <= 18 rather than 8 |
| Cheng Lian <lian@databricks.com> |
| 2015-08-08 18:09:48 +0800 |
| Commit: 2cd9632, github.com/apache/spark/pull/8031 |
| |
| [SPARK-9731] Standalone scheduling incorrect cores if spark.executor.cores is not set |
| Carson Wang <carson.wang@intel.com> |
| 2015-08-07 23:36:26 -0700 |
| Commit: 2ad75d9, github.com/apache/spark/pull/8017 |
| |
| [SPARK-9753] [SQL] TungstenAggregate should also accept InternalRow instead of just UnsafeRow |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-07 20:04:17 -0700 |
| Commit: 47e4735, github.com/apache/spark/pull/8041 |
| |
| [SPARK-9754][SQL] Remove TypeCheck in debug package. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-07 19:09:28 -0700 |
| Commit: 5598b62, github.com/apache/spark/pull/8043 |
| |
| [SPARK-9719] [ML] Clean up Naive Bayes doc |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-07 17:21:12 -0700 |
| Commit: c5d43d6, github.com/apache/spark/pull/8047 |
| |
| [SPARK-9756] [ML] Make constructors in ML decision trees private |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-07 17:19:48 -0700 |
| Commit: 2a179a9, github.com/apache/spark/pull/8046 |
| |
| [SPARK-8890] [SQL] Fallback on sorting when writing many dynamic partitions |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-07 16:24:50 -0700 |
| Commit: ea4dfb9, github.com/apache/spark/pull/8010 |
| |
| [SPARK-8481] [MLLIB] GaussianMixtureModel predict accepting single vector |
| Dariusz Kobylarz <darek.kobylarz@gmail.com> |
| 2015-08-07 14:51:03 -0700 |
| Commit: 2952660, github.com/apache/spark/pull/8039 |
| |
| [SPARK-9674] Re-enable ignored test in SQLQuerySuite |
| Andrew Or <andrew@databricks.com> |
| 2015-08-07 14:20:13 -0700 |
| Commit: 5471202, github.com/apache/spark/pull/8015 |
| |
| [SPARK-9733][SQL] Improve physical plan explain for data sources |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-07 13:41:45 -0700 |
| Commit: d13b5c8, github.com/apache/spark/pull/8024 |
| |
| [SPARK-9667][SQL] followup: Use GenerateUnsafeProjection.canSupport to test Exchange supported data types. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-07 13:26:03 -0700 |
| Commit: 1b0f784, github.com/apache/spark/pull/8036 |
| |
| [SPARK-9736] [SQL] JoinedRow.anyNull should delegate to the underlying rows. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-07 11:29:13 -0700 |
| Commit: 70bf170, github.com/apache/spark/pull/8027 |
| |
| [SPARK-8382] [SQL] Improve Analysis Unit test framework |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-07 11:28:43 -0700 |
| Commit: ff0abca, github.com/apache/spark/pull/8025 |
| |
| [SPARK-9674][SPARK-9667] Remove SparkSqlSerializer2 |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-07 11:02:53 -0700 |
| Commit: 6c2f30c, github.com/apache/spark/pull/7981 |
| |
| [SPARK-9467][SQL]Add SQLMetric to specialize accumulators to avoid boxing |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-07 00:09:58 -0700 |
| Commit: 7a6f950, github.com/apache/spark/pull/7996 |
| |
| [SPARK-9683] [SQL] copy UTF8String when convert unsafe array/map to safe |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-07 00:00:43 -0700 |
| Commit: 064ba90, github.com/apache/spark/pull/7990 |
| |
| [SPARK-9453] [SQL] support records larger than page size in UnsafeShuffleExternalSorter |
| Davies Liu <davies@databricks.com> |
| 2015-08-06 23:40:38 -0700 |
| Commit: 8ece4cc, github.com/apache/spark/pull/8005 |
| |
| [SPARK-9700] Pick default page size more intelligently. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-06 23:18:29 -0700 |
| Commit: 0e439c2, github.com/apache/spark/pull/8012 |
| |
| [SPARK-8862][SQL]Support multiple SQLContexts in Web UI |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-06 22:52:23 -0700 |
| Commit: c34fdaf, github.com/apache/spark/pull/7962 |
| |
| [SPARK-7550] [SQL] [MINOR] Fixes logs when persisting DataFrames |
| Cheng Lian <lian@databricks.com> |
| 2015-08-06 22:49:01 -0700 |
| Commit: aedc8f3, github.com/apache/spark/pull/8021 |
| |
| [SPARK-8057][Core]Call TaskAttemptContext.getTaskAttemptID using Reflection |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-06 21:42:42 -0700 |
| Commit: e902c4f, github.com/apache/spark/pull/6599 |
| |
| Fix doc typo |
| Jeff Zhang <zjffdu@apache.org> |
| 2015-08-06 21:03:47 -0700 |
| Commit: 5491dfb, github.com/apache/spark/pull/8019 |
| |
| [SPARK-9709] [SQL] Avoid starving unsafe operators that use sort |
| Andrew Or <andrew@databricks.com> |
| 2015-08-06 19:04:57 -0700 |
| Commit: 472f0dc, github.com/apache/spark/pull/8011 |
| |
| [SPARK-9692] Remove SqlNewHadoopRDD's generated Tuple2 and InterruptibleIterator. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-06 18:25:38 -0700 |
| Commit: 37b6403, github.com/apache/spark/pull/8000 |
| |
| [SPARK-9650][SQL] Fix quoting behavior on interpolated column names |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-06 17:31:16 -0700 |
| Commit: 9be9d38, github.com/apache/spark/pull/7969 |
| |
| [SPARK-9228] [SQL] use tungsten.enabled in public for both of codegen/unsafe |
| Davies Liu <davies@databricks.com> |
| 2015-08-06 17:30:31 -0700 |
| Commit: b4feccf, github.com/apache/spark/pull/7998 |
| |
| [SPARK-9691] [SQL] PySpark SQL rand function treats seed 0 as no seed |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-06 17:03:14 -0700 |
| Commit: 75b4e5a, github.com/apache/spark/pull/7999 |
| |
| [SPARK-9633] [BUILD] SBT download locations outdated; need an update |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-06 23:43:52 +0100 |
| Commit: 985e454, github.com/apache/spark/pull/7956 |
| |
| [SPARK-9645] [YARN] [CORE] Allow shuffle service to read shuffle files. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-06 15:30:27 -0700 |
| Commit: d0a648c, github.com/apache/spark/pull/7966 |
| |
| [SPARK-9630] [SQL] Clean up new aggregate operators (SPARK-9240 follow up) |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-06 15:04:44 -0700 |
| Commit: 272e883, github.com/apache/spark/pull/7954 |
| |
| [SPARK-9639] [STREAMING] Fix a potential NPE in Streaming JobScheduler |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-06 14:39:36 -0700 |
| Commit: 9806872, github.com/apache/spark/pull/7960 |
| |
| [DOCS] [STREAMING] make the existing parameter docs for OffsetRange ac… |
| cody koeninger <cody@koeninger.org> |
| 2015-08-06 14:37:25 -0700 |
| Commit: 8ecfb05, github.com/apache/spark/pull/7995 |
| |
| [SPARK-9556] [SPARK-9619] [SPARK-9624] [STREAMING] Make BlockGenerator more robust and make all BlockGenerators subscribe to rate limit updates |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-06 14:35:30 -0700 |
| Commit: 3997dd3, github.com/apache/spark/pull/7913 |
| |
| [SPARK-9548][SQL] Add a destructive iterator for BytesToBytesMap |
| Liang-Chi Hsieh <viirya@appier.com>, Reynold Xin <rxin@databricks.com> |
| 2015-08-06 14:33:29 -0700 |
| Commit: 3137628, github.com/apache/spark/pull/7924 |
| |
| [SPARK-9211] [SQL] [TEST] normalize line separators before generating MD5 hash |
| Christian Kadner <ckadner@us.ibm.com> |
| 2015-08-06 14:15:42 -0700 |
| Commit: 990b4bf, github.com/apache/spark/pull/7563 |
| |
| [SPARK-9493] [ML] add featureIndex to handle vector features in IsotonicRegression |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-06 13:29:31 -0700 |
| Commit: ee43d35, github.com/apache/spark/pull/7952 |
| |
| [SPARK-6923] [SPARK-7550] [SQL] Persists data source relations in Hive compatible format when possible |
| Cheng Lian <lian@databricks.com>, Cheng Hao <hao.cheng@intel.com> |
| 2015-08-06 11:13:44 +0800 |
| Commit: 92e8acc, github.com/apache/spark/pull/7967 |
| |
| [SPARK-9381] [SQL] Migrate JSON data source to the new partitioning data source |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-08-05 22:35:55 +0800 |
| Commit: 3d24767, github.com/apache/spark/pull/7696 |
| |
| [SPARK-9618] [SQL] Use the specified schema when reading Parquet files |
| Nathan Howell <nhowell@godaddy.com> |
| 2015-08-05 22:16:56 +0800 |
| Commit: d5f7881, github.com/apache/spark/pull/7947 |
| |
| [SPARK-8978] [STREAMING] Implements the DirectKafkaRateController |
| Dean Wampler <dean@concurrentthought.com>, Nilanjan Raychaudhuri <nraychaudhuri@gmail.com>, François Garillot <francois@garillot.net> |
| 2015-08-06 12:50:08 -0700 |
| Commit: 8b00c06, github.com/apache/spark/pull/7796 |
| |
| [SPARK-9641] [DOCS] spark.shuffle.service.port is not documented |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-06 19:29:42 +0100 |
| Commit: 8a79562, github.com/apache/spark/pull/7991 |
| |
| [SPARK-9632] [SQL] [HOT-FIX] Fix build. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-06 11:15:54 -0700 |
| Commit: b51159d, github.com/apache/spark/pull/8001 |
| |
| [SPARK-9632][SQL] update InternalRow.toSeq to make it accept data type info |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-06 10:40:54 -0700 |
| Commit: 2382b48, github.com/apache/spark/pull/7955 |
| |
| [SPARK-9659][SQL] Rename inSet to isin to match Pandas function. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-06 10:39:16 -0700 |
| Commit: 6b8d2d7, github.com/apache/spark/pull/7977 |
| |
| [SPARK-9615] [SPARK-9616] [SQL] [MLLIB] Bugs related to FrequentItems when merging and with Tungsten |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-08-06 10:29:40 -0700 |
| Commit: 78f168e, github.com/apache/spark/pull/7945 |
| |
| [SPARK-9533] [PYSPARK] [ML] Add missing methods in Word2Vec ML |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-06 10:09:58 -0700 |
| Commit: e24b976, github.com/apache/spark/pull/7930 |
| |
| [SPARK-9112] [ML] Implement Stats for LogisticRegression |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-06 10:08:33 -0700 |
| Commit: 70b9ed1, github.com/apache/spark/pull/7538 |
| |
| [SPARK-9593] [SQL] [HOTFIX] Makes the Hadoop shims loading fix more robust |
| Cheng Lian <lian@databricks.com> |
| 2015-08-06 09:53:53 -0700 |
| Commit: cc4c569, github.com/apache/spark/pull/7994 |
| |
| [SPARK-9593] [SQL] Fixes Hadoop shims loading |
| Cheng Lian <lian@databricks.com> |
| 2015-08-05 20:03:54 +0800 |
| Commit: 11c28a5, github.com/apache/spark/pull/7929 |
| |
| [SPARK-9482] [SQL] Fix thread-safety issue of using UnsafeProjection in join |
| Davies Liu <davies@databricks.com> |
| 2015-08-06 09:12:41 -0700 |
| Commit: c39d5d1, github.com/apache/spark/pull/7940 |
| |
| [SPARK-9644] [SQL] Support update DecimalType with precision > 18 in UnsafeRow |
| Davies Liu <davies@databricks.com> |
| 2015-08-06 09:10:57 -0700 |
| Commit: 43b30bc, github.com/apache/spark/pull/7978 |
| |
| [SPARK-8266] [SQL] add function translate |
| zhichao.li <zhichao.li@intel.com> |
| 2015-08-06 09:02:30 -0700 |
| Commit: cab86c4, github.com/apache/spark/pull/7709 |
| |
| [SPARK-9664] [SQL] Remove UDAFRegistration and add apply to UserDefinedAggregateFunction. |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-05 21:50:35 -0700 |
| Commit: 29ace3b, github.com/apache/spark/pull/7982 |
| |
| [SPARK-9674][SQL] Remove GeneratedAggregate. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-05 21:50:14 -0700 |
| Commit: 252eb61, github.com/apache/spark/pull/7983 |
| |
| [SPARK-9611] [SQL] Fixes a few corner cases when we spill a UnsafeFixedWidthAggregationMap |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-05 19:19:09 -0700 |
| Commit: f24cd8c, github.com/apache/spark/pull/7948 |
| |
| [SPARK-9651] Fix UnsafeExternalSorterSuite. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-05 17:58:36 -0700 |
| Commit: eb2229a, github.com/apache/spark/pull/7970 |
| |
| [SPARK-6591] [SQL] Python data source load options should auto convert common types into strings |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-05 17:28:23 -0700 |
| Commit: 5f037b3, github.com/apache/spark/pull/7926 |
| |
| [SPARK-5895] [ML] Add VectorSlicer - updated |
| Xusen Yin <yinxusen@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-05 17:07:55 -0700 |
| Commit: 3b617e8, github.com/apache/spark/pull/7972 |
| |
| [SPARK-9054] [SQL] Rename RowOrdering to InterpretedOrdering; use newOrdering in SMJ |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-05 16:33:42 -0700 |
| Commit: 618dc63, github.com/apache/spark/pull/7973 |
| |
| [SPARK-9657] Fix return type of getMaxPatternLength |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-05 15:42:18 -0700 |
| Commit: 30e9fcf, github.com/apache/spark/pull/7974 |
| |
| [SPARK-9649] Fix flaky test MasterSuite - randomize ports |
| Andrew Or <andrew@databricks.com> |
| 2015-08-05 14:12:22 -0700 |
| Commit: 05cbf13, github.com/apache/spark/pull/7968 |
| |
| [SPARK-9403] [SQL] Add codegen support in In and InSet |
| Liang-Chi Hsieh <viirya@appier.com>, Tarek Auel <tarek.auel@googlemail.com> |
| 2015-08-05 11:38:56 -0700 |
| Commit: b8136d7, github.com/apache/spark/pull/7893 |
| |
| [SPARK-9141] [SQL] [MINOR] Fix comments of PR #7920 |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-05 11:03:02 -0700 |
| Commit: 19018d5, github.com/apache/spark/pull/7964 |
| |
| [SPARK-9519] [YARN] Confirm SparkContext stops successfully when the application is killed |
| linweizhong <linweizhong@huawei.com> |
| 2015-08-05 10:16:12 -0700 |
| Commit: 03bcf62, github.com/apache/spark/pull/7846 |
| |
| [SPARK-9141] [SQL] Remove project collapsing from DataFrame API |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-05 09:01:45 -0700 |
| Commit: 125827a, github.com/apache/spark/pull/7920 |
| |
| [SPARK-6486] [MLLIB] [PYTHON] Add BlockMatrix to PySpark. |
| Mike Dusenberry <mwdusenb@us.ibm.com> |
| 2015-08-05 07:40:50 -0700 |
| Commit: eedb996, github.com/apache/spark/pull/7761 |
| |
| [SPARK-9607] [SPARK-9608] fix zinc-port handling in build/mvn |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-08-05 11:10:47 +0100 |
| Commit: 3500064, github.com/apache/spark/pull/7944 |
| |
| [HOTFIX] Add static import to fix build break from #7676. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-05 02:39:41 -0700 |
| Commit: 93c166a |
| |
| [SPARK-9628][SQL]Rename int to SQLDate, long to SQLTimestamp for better readability |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-05 02:04:28 -0700 |
| Commit: f288cca, github.com/apache/spark/pull/7953 |
| |
| [SPARK-8861][SPARK-8862][SQL] Add basic instrumentation to each SparkPlan operator and add a new SQL tab |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-05 01:51:22 -0700 |
| Commit: ebc3aad, github.com/apache/spark/pull/7774 |
| |
| [SPARK-9601] [DOCS] Fix JavaPairDStream signature for stream-stream and windowed join in streaming guide doc |
| Namit Katariya <katariya.namit@gmail.com> |
| 2015-08-05 01:07:33 -0700 |
| Commit: 6306019, github.com/apache/spark/pull/7935 |
| |
| [SPARK-9360] [SQL] Support BinaryType in PrefixComparators for UnsafeExternalSort |
| Takeshi YAMAMURO <linguin.m.s@gmail.com> |
| 2015-08-05 00:54:31 -0700 |
| Commit: 7fa4195, github.com/apache/spark/pull/7676 |
| |
| [SPARK-9581][SQL] Add unit test for JSON UDT |
| Emiliano Leporati <emiliano.leporati@gmail.com>, Reynold Xin <rxin@databricks.com> |
| 2015-08-05 00:42:08 -0700 |
| Commit: 57596fb, github.com/apache/spark/pull/7917 |
| |
| [SPARK-9217] [STREAMING] Make the kinesis receiver reliable by recording sequence numbers |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-08-05 00:20:26 -0700 |
| Commit: ea23e54, github.com/apache/spark/pull/7825 |
| |
| Update docs/README.md to put all prereqs together. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-04 22:17:14 -0700 |
| Commit: b6e8446, github.com/apache/spark/pull/7951 |
| |
| Add a prerequisites section for building docs |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-08-03 17:00:59 -0700 |
| Commit: 141f034, github.com/apache/spark/pull/7912 |
| |
| [SPARK-9119] [SPARK-8359] [SQL] match Decimal.precision/scale with DecimalType |
| Davies Liu <davies@databricks.com> |
| 2015-08-04 23:12:49 -0700 |
| Commit: 864d5de, github.com/apache/spark/pull/7925 |
| |
| [SPARK-8231] [SQL] Add array_contains |
| Pedro Rodriguez <prodriguez@trulia.com>, Pedro Rodriguez <ski.rodriguez@gmail.com>, Davies Liu <davies@databricks.com> |
| 2015-08-04 22:32:21 -0700 |
| Commit: 28bb977, github.com/apache/spark/pull/7580 |
| |
| [SPARK-9540] [MLLIB] optimize PrefixSpan implementation |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-04 22:28:49 -0700 |
| Commit: bca1967, github.com/apache/spark/pull/7594 |
| |
| [SPARK-9504] [STREAMING] [TESTS] Fix o.a.s.streaming.StreamingContextSuite.stop gracefully again |
| zsxwing <zsxwing@gmail.com> |
| 2015-08-04 20:09:15 -0700 |
| Commit: 6e72d24, github.com/apache/spark/pull/7934 |
| |
| [SPARK-9513] [SQL] [PySpark] Add python API for DataFrame functions |
| Davies Liu <davies@databricks.com> |
| 2015-08-04 19:25:24 -0700 |
| Commit: d196d36, github.com/apache/spark/pull/7922 |
| |
| [SPARK-7119] [SQL] Give script a default serde with the user specific types |
| zhichao.li <zhichao.li@intel.com> |
| 2015-08-04 18:26:05 -0700 |
| Commit: f957c59, github.com/apache/spark/pull/6638 |
| |
| [SPARK-8313] R Spark packages support |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-08-04 18:20:12 -0700 |
| Commit: 11d2311, github.com/apache/spark/pull/7139 |
| |
| [SPARK-9432][SQL] Audit expression unit tests to make sure we pass the proper numeric ranges |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-04 18:19:26 -0700 |
| Commit: 02a6333, github.com/apache/spark/pull/7933 |
| |
| [SPARK-8601] [ML] Add an option to disable standardization for linear regression |
| Holden Karau <holden@pigscanfly.ca>, DB Tsai <dbt@netflix.com> |
| 2015-08-04 18:15:26 -0700 |
| Commit: 2237ddb, github.com/apache/spark/pull/7875 |
| |
| [SPARK-9609] [MLLIB] Fix spelling of Strategy.defaultStrategy |
| Feynman Liang <fliang@databricks.com> |
| 2015-08-04 18:13:18 -0700 |
| Commit: 3350975, github.com/apache/spark/pull/7941 |
| |
| [SPARK-9598][SQL] do not expose generic getter in internal row |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-04 17:05:19 -0700 |
| Commit: 1954a7b, github.com/apache/spark/pull/7932 |
| |
| [SPARK-9586] [ML] Update BinaryClassificationEvaluator to use setRawPredictionCol |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-04 16:52:43 -0700 |
| Commit: cff0fe2, github.com/apache/spark/pull/7921 |
| |
| [SPARK-6485] [MLLIB] [PYTHON] Add CoordinateMatrix/RowMatrix/IndexedRowMatrix to PySpark. |
| Mike Dusenberry <mwdusenb@us.ibm.com> |
| 2015-08-04 16:30:03 -0700 |
| Commit: f4e125a, github.com/apache/spark/pull/7554 |
| |
| [SPARK-9582] [ML] LDA cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-04 15:43:13 -0700 |
| Commit: fe4a4f4, github.com/apache/spark/pull/7916 |
| |
| [SPARK-9447] [ML] [PYTHON] Added HasRawPredictionCol, HasProbabilityCol to RandomForestClassifier |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-04 14:54:26 -0700 |
| Commit: e682ee2, github.com/apache/spark/pull/7903 |
| |
| [SPARK-9602] remove "Akka/Actor" words from comments |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-08-04 14:54:11 -0700 |
| Commit: 560b2da, github.com/apache/spark/pull/7936 |
| |
| [SPARK-9452] [SQL] Support records larger than page size in UnsafeExternalSorter |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-08-04 14:42:11 -0700 |
| Commit: f771a83, github.com/apache/spark/pull/7891 |
| |
| [SPARK-9553][SQL] remove the no-longer-necessary createCode and createStructCode, and replace the usage |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-04 14:40:46 -0700 |
| Commit: 43f6b02, github.com/apache/spark/pull/7890 |
| |
| [SPARK-9606] [SQL] Ignore flaky thrift server tests |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-04 12:19:52 -0700 |
| Commit: be37b1b, github.com/apache/spark/pull/7939 |
| |
| [SPARK-8069] [ML] Add multiclass thresholds for ProbabilisticClassifier |
| Holden Karau <holden@pigscanfly.ca>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-04 10:12:22 -0700 |
| Commit: c5250dd, github.com/apache/spark/pull/7909 |
| |
| [SPARK-9512][SQL] Revert SPARK-9251, Allow evaluation while sorting |
| Michael Armbrust <michael@databricks.com> |
| 2015-08-04 10:07:53 -0700 |
| Commit: a9277cd, github.com/apache/spark/pull/7906 |
| |
| [SPARK-9562] Change reference to amplab/spark-ec2 from mesos/ |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-08-04 09:40:07 -0700 |
| Commit: aa8390d, github.com/apache/spark/pull/7899 |
| |
| [SPARK-9541] [SQL] DateTimeUtils cleanup |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-04 09:09:52 -0700 |
| Commit: d875368, github.com/apache/spark/pull/7870 |
| |
| [SPARK-8246] [SQL] Implement get_json_object |
| Davies Liu <davies@databricks.com>, Yin Huai <yhuai@databricks.com>, Nathan Howell <nhowell@godaddy.com> |
| 2015-08-04 09:07:09 -0700 |
| Commit: b42e13d, github.com/apache/spark/pull/7901 |
| |
| [SPARK-8244] [SQL] string function: find in set |
| Tarek Auel <tarek.auel@googlemail.com>, Davies Liu <davies@databricks.com> |
| 2015-08-04 08:59:42 -0700 |
| Commit: 945da35, github.com/apache/spark/pull/7186 |
| |
| [SPARK-9583] [BUILD] Do not print mvn debug messages to stdout. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-04 22:19:11 +0900 |
| Commit: f44b27a, github.com/apache/spark/pull/7915 |
| |
| [SPARK-2016] [WEBUI] RDD partition table pagination for the RDD Page |
| Carson Wang <carson.wang@intel.com> |
| 2015-08-04 22:12:30 +0900 |
| Commit: 45c8d2b, github.com/apache/spark/pull/7692 |
| |
| [SPARK-8064] [BUILD] Follow-up. Undo change from SPARK-9507 that was accidentally reverted |
| tedyu <yuzhihong@gmail.com> |
| 2015-08-04 12:22:53 +0100 |
| Commit: bd9b752, github.com/apache/spark/pull/7919 |
| |
| [SPARK-9534] [BUILD] Enable javac lint for scalac parity; fix a lot of build warnings, 1.5.0 edition |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-04 12:02:26 +0100 |
| Commit: 5ae6753, github.com/apache/spark/pull/7862 |
| |
| [SPARK-3190] [GRAPHX] Fix VertexRDD.count() overflow regression |
| Ankur Dave <ankurdave@gmail.com> |
| 2015-08-03 23:07:32 -0700 |
| Commit: 29f2d5a, github.com/apache/spark/pull/7923 |
| |
| [SPARK-9521] [DOCS] Addendum. Require Maven 3.3.3+ in the build |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-04 13:48:22 +0900 |
| Commit: 1f7dbcd, github.com/apache/spark/pull/7905 |
| |
| [SPARK-9577][SQL] Surface concrete iterator types in various sort classes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-03 18:47:02 -0700 |
| Commit: ebe42b9, github.com/apache/spark/pull/7911 |
| |
| [SPARK-8416] highlight and pin the executor threads in the thread dump page |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-08-03 18:20:40 -0700 |
| Commit: 93076ae, github.com/apache/spark/pull/7808 |
| |
| [SPARK-9263] Added flags to exclude dependencies when using --packages |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-08-03 17:42:03 -0700 |
| Commit: 3433571, github.com/apache/spark/pull/7599 |
| |
| [SPARK-9483] Fix UTF8String.getPrefix for big-endian. |
| Matthew Brandyberry <mbrandy@us.ibm.com> |
| 2015-08-03 17:36:56 -0700 |
| Commit: 73c863a, github.com/apache/spark/pull/7902 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:59:19 -0700 |
| Commit: 74792e7 |
| |
| Preparing Spark release v1.5.0-snapshot-20150803 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:59:13 -0700 |
| Commit: 7e7147f |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:54:56 -0700 |
| Commit: bc49ca4 |
| |
| Preparing Spark release v1.5.0-snapshot-20150803 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:54:50 -0700 |
| Commit: 4c4f638 |
| |
| [SPARK-8874] [ML] Add missing methods in Word2Vec |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-08-03 16:44:25 -0700 |
| Commit: acda9d9, github.com/apache/spark/pull/7263 |
| |
| Preparing development version 1.5.0-SNAPSHOT |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:37:34 -0700 |
| Commit: 73fab88 |
| |
| Preparing Spark release v1.5.0-snapshot-20150803 |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-08-03 16:37:27 -0700 |
| Commit: 3526420 |
| |
| [SPARK-8064] [SQL] Build against Hive 1.2.1 |
| Steve Loughran <stevel@hortonworks.com>, Cheng Lian <lian@databricks.com>, Michael Armbrust <michael@databricks.com>, Patrick Wendell <patrick@databricks.com> |
| 2015-08-03 15:24:34 -0700 |
| Commit: 6bd12e8, github.com/apache/spark/pull/7191 |
| |
| Revert "[SPARK-9372] [SQL] Filter nulls in join keys" |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-03 14:51:36 -0700 |
| Commit: db58327 |
| |
| [SPARK-8735] [SQL] Expose memory usage for shuffles, joins and aggregations |
| Andrew Or <andrew@databricks.com> |
| 2015-08-03 14:22:07 -0700 |
| Commit: 29756ff, github.com/apache/spark/pull/7770 |
| |
| [SPARK-9191] [ML] [Doc] Add ml.PCA user guide and code examples |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-03 13:58:00 -0700 |
| Commit: e7329ab, github.com/apache/spark/pull/7522 |
| |
| [SPARK-9544] [MLLIB] add Python API for RFormula |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-03 13:59:35 -0700 |
| Commit: dc0c8c9, github.com/apache/spark/pull/7879 |
| |
| [SPARK-9558][DOCS]Update docs to follow the increase of memory defaults. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-08-03 12:53:44 -0700 |
| Commit: 444058d, github.com/apache/spark/pull/7896 |
| |
| [SPARK-5133] [ML] Added featureImportance to RandomForestClassifier and Regressor |
| Joseph K. Bradley <joseph@databricks.com>, Feynman Liang <fliang@databricks.com> |
| 2015-08-03 12:17:46 -0700 |
| Commit: b3117d3, github.com/apache/spark/pull/7838 |
| |
| [SPARK-9554] [SQL] Enables in-memory partition pruning by default |
| Cheng Lian <lian@databricks.com> |
| 2015-08-03 12:06:58 -0700 |
| Commit: 6d46e9b, github.com/apache/spark/pull/7895 |
| |
| [SQL][minor] Simplify UnsafeRow.calculateBitSetWidthInBytes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-03 11:22:02 -0700 |
| Commit: 5452e93, github.com/apache/spark/pull/7897 |
| |
| [SPARK-9511] [SQL] Fixed Table Name Parsing |
| Joseph Batchik <joseph.batchik@cloudera.com> |
| 2015-08-03 11:17:38 -0700 |
| Commit: 4de833e, github.com/apache/spark/pull/7844 |
| |
| [SPARK-1855] Local checkpointing |
| Andrew Or <andrew@databricks.com> |
| 2015-08-03 10:58:37 -0700 |
| Commit: b41a327, github.com/apache/spark/pull/7279 |
| |
| [SPARK-9528] [ML] Changed RandomForestClassifier to extend ProbabilisticClassifier |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-08-03 10:46:34 -0700 |
| Commit: 69f5a7c, github.com/apache/spark/pull/7859 |
| |
| Two minor comments from code review on 191bf2689. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-03 04:26:18 -0700 |
| Commit: 8be198c |
| |
| [SPARK-9518] [SQL] cleanup generated UnsafeRowJoiner and fix bug |
| Davies Liu <davies@databricks.com> |
| 2015-08-03 04:23:26 -0700 |
| Commit: 191bf26, github.com/apache/spark/pull/7892 |
| |
| [SPARK-9551][SQL] add a cheap version of copy for UnsafeRow to reuse a copy buffer |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-03 04:21:15 -0700 |
| Commit: 137f478, github.com/apache/spark/pull/7885 |
| |
| [SPARK-8873] [MESOS] Clean up shuffle files if external shuffle service is used |
| Timothy Chen <tnachen@gmail.com>, Andrew Or <andrew@databricks.com> |
| 2015-08-03 01:55:58 -0700 |
| Commit: 95dccc6, github.com/apache/spark/pull/7881 |
| |
| [SPARK-9240] [SQL] Hybrid aggregate operator using unsafe row |
| Yin Huai <yhuai@databricks.com> |
| 2015-08-03 00:23:08 -0700 |
| Commit: 1ebd41b, github.com/apache/spark/pull/7813 |
| |
| [SPARK-9549][SQL] fix bugs in expressions |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-08-03 00:15:24 -0700 |
| Commit: 98d6d9c, github.com/apache/spark/pull/7882 |
| |
| [SPARK-9404][SPARK-9542][SQL] unsafe array data and map data |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-02 23:41:16 -0700 |
| Commit: 608353c, github.com/apache/spark/pull/7752 |
| |
| [SPARK-9372] [SQL] Filter nulls in join keys |
| Yin Huai <yhuai@databricks.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-08-02 23:32:09 -0700 |
| Commit: 687c8c3, github.com/apache/spark/pull/7768 |
| |
| [SPARK-9536] [SPARK-9537] [SPARK-9538] [ML] [PYSPARK] ml.classification support raw and probability prediction for PySpark |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-08-02 22:19:27 -0700 |
| Commit: 4cdd8ec, github.com/apache/spark/pull/7866 |
| |
| [SPARK-2205] [SQL] Avoid unnecessary exchange operators in multi-way joins |
| Yin Huai <yhuai@databricks.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-08-02 20:44:23 -0700 |
| Commit: 114ff92, github.com/apache/spark/pull/7773 |
| |
| [SPARK-9546][SQL] Centralize orderable data type checking. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-02 20:12:03 -0700 |
| Commit: 30e8911, github.com/apache/spark/pull/7880 |
| |
| [SPARK-9535][SQL][DOCS] Modify document for codegen. |
| KaiXinXiaoLei <huleilei1@huawei.com>, Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-08-02 20:04:21 -0700 |
| Commit: 536d2ad, github.com/apache/spark/pull/7142 |
| |
| [SPARK-9543][SQL] Add randomized testing for UnsafeKVExternalSorter. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-02 17:54:30 -0700 |
| Commit: 9d03ad9, github.com/apache/spark/pull/7873 |
| |
| [SPARK-7937][SQL] Support comparison on StructType |
| Liang-Chi Hsieh <viirya@appier.com>, Liang-Chi Hsieh <viirya@gmail.com>, Reynold Xin <rxin@databricks.com> |
| 2015-08-02 17:53:44 -0700 |
| Commit: 0722f43, github.com/apache/spark/pull/6519 |
| |
| [SPARK-9531] [SQL] UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter |
| Reynold Xin <rxin@databricks.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-08-02 12:32:14 -0700 |
| Commit: 2e981b7, github.com/apache/spark/pull/7860 |
| |
| [SPARK-9527] [MLLIB] add PrefixSpanModel and make PrefixSpan Java friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-08-02 11:50:17 -0700 |
| Commit: 66924ff, github.com/apache/spark/pull/7869 |
| |
| [SPARK-9208][SQL] Sort DataFrame functions alphabetically. |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-02 11:36:11 -0700 |
| Commit: 8eafa2a, github.com/apache/spark/pull/7861 |
| |
| [SPARK-9149] [ML] [EXAMPLES] Add an example of spark.ml KMeans |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-08-02 09:00:32 +0100 |
| Commit: 244016a, github.com/apache/spark/pull/7697 |
| |
| [SPARK-9521] [BUILD] Require Maven 3.3.3+ in the build |
| Sean Owen <sowen@cloudera.com> |
| 2015-08-02 08:56:35 +0100 |
| Commit: 9d1c025, github.com/apache/spark/pull/7852 |
| |
| [SPARK-9529] [SQL] improve TungstenSort on DecimalType |
| Davies Liu <davies@databricks.com> |
| 2015-08-01 23:36:06 -0700 |
| Commit: 16b928c, github.com/apache/spark/pull/7857 |
| |
| [SPARK-9000] [MLLIB] Support generic item types in PrefixSpan |
| Feynman Liang <fliang@databricks.com>, masaki rikitoku <rikima3132@gmail.com> |
| 2015-08-01 23:11:25 -0700 |
| Commit: 28d944e, github.com/apache/spark/pull/7400 |
| |
| [SPARK-9459] [SQL] use generated FromUnsafeProjection to do deep copy for UTF8String and struct |
| Davies Liu <davies@databricks.com> |
| 2015-08-01 21:50:42 -0700 |
| Commit: 57084e0, github.com/apache/spark/pull/7840 |
| |
| [SPARK-8185] [SPARK-8188] [SPARK-8191] [SQL] function datediff, to_utc_timestamp, from_utc_timestamp |
| Davies Liu <davies@databricks.com>, Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-08-01 21:46:46 -0700 |
| Commit: c1b0cbd, github.com/apache/spark/pull/7847 |
| |
| [SPARK-8269] [SQL] string function: initcap |
| HuJiayin <jiayin.hu@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-08-01 21:44:57 -0700 |
| Commit: 00cd92f, github.com/apache/spark/pull/7208 |
| |
| [SPARK-9495] prefix of DateType/TimestampType |
| Davies Liu <davies@databricks.com> |
| 2015-08-01 18:22:46 -0700 |
| Commit: 5d9e33d, github.com/apache/spark/pull/7856 |
| |
| [SPARK-9530] [MLLIB] ScalaDoc should not indicate LDAModel.describeTopics and DistributedLDAModel.topDocumentsPerTopic as approximate |
| Meihua Wu <meihuawu@umich.edu> |
| 2015-08-01 17:13:28 -0700 |
| Commit: 84a6982, github.com/apache/spark/pull/7858 |
| |
| [SPARK-9520] [SQL] Support in-place sort in UnsafeFixedWidthAggregationMap |
| Reynold Xin <rxin@databricks.com> |
| 2015-08-01 13:20:26 -0700 |
| Commit: 3d1535d, github.com/apache/spark/pull/7849 |
| |
| [SPARK-9491] Avoid fetching HBase tokens when not needed. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-08-01 13:06:50 -0700 |
| Commit: df733cb, github.com/apache/spark/pull/7810 |
| |
| [SPARK-4751] Dynamic allocation in standalone mode |
| Andrew Or <andrew@databricks.com> |
| 2015-08-01 11:57:14 -0700 |
| Commit: 6688ba6, github.com/apache/spark/pull/7532 |
| |
| [SPARK-8263] [SQL] substr/substring should also support binary type |
| zhichao.li <zhichao.li@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-08-01 08:48:46 -0700 |
| Commit: c5166f7, github.com/apache/spark/pull/7641 |
| |
| [SPARK-8232] [SQL] Add sort_array support |
| Cheng Hao <hao.cheng@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-08-01 08:32:29 -0700 |
| Commit: cf6c9ca, github.com/apache/spark/pull/7851 |
| |
| [SPARK-8169] [ML] Add StopWordsRemover as a transformer |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-08-01 02:31:28 -0700 |
| Commit: 8765665, github.com/apache/spark/pull/6742 |
| |
| [SPARK-8999] [MLLIB] PrefixSpan non-temporal sequences |
| zhangjiajin <zhangjiajin@huawei.com>, Feynman Liang <fliang@databricks.com>, zhang jiajin <zhangjiajin@huawei.com> |
| 2015-08-01 01:56:27 -0700 |
| Commit: d2a9b66, github.com/apache/spark/pull/7646 |
| |
| [SPARK-7446] [MLLIB] Add inverse transform for string indexer |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-08-01 01:09:38 -0700 |
| Commit: 6503897, github.com/apache/spark/pull/6339 |
| |
| Revert "[SPARK-8232] [SQL] Add sort_array support" |
| Davies Liu <davies.liu@gmail.com> |
| 2015-08-01 00:41:15 -0700 |
| Commit: 60ea7ab |
| |
| [SPARK-9480][SQL] add MapData and cleanup internal row stuff |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-08-01 00:17:15 -0700 |
| Commit: 1d59a41, github.com/apache/spark/pull/7799 |
| |
| [SPARK-9517][SQL] BytesToBytesMap should encode data the same way as UnsafeExternalSorter |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-31 23:55:16 -0700 |
| Commit: d90f2cf, github.com/apache/spark/pull/7845 |
| |
| [SPARK-8232] [SQL] Add sort_array support |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-31 23:11:22 -0700 |
| Commit: 67ad4e2, github.com/apache/spark/pull/7581 |
| |
| [SPARK-9415][SQL] Throw AnalysisException when using MapType on Join and Aggregate |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-31 22:26:30 -0700 |
| Commit: 3320b0b, github.com/apache/spark/pull/7819 |
| |
| [SPARK-9464][SQL] Property checks for UTF8String |
| Josh Rosen <joshrosen@databricks.com>, Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-31 21:19:23 -0700 |
| Commit: 14f2634, github.com/apache/spark/pull/7830 |
| |
| [SPARK-8264][SQL]add substring_index function |
| zhichao.li <zhichao.li@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-07-31 21:18:01 -0700 |
| Commit: 6996bd2, github.com/apache/spark/pull/7533 |
| |
| [SPARK-9358][SQL] Code generation for UnsafeRow joiner. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-31 21:09:00 -0700 |
| Commit: 03377d2, github.com/apache/spark/pull/7821 |
| |
| [SPARK-9318] [SPARK-9320] [SPARKR] Aliases for merge and summary functions on DataFrames |
| Hossein <hossein@databricks.com> |
| 2015-07-31 19:24:00 -0700 |
| Commit: 712f5b7, github.com/apache/spark/pull/7806 |
| |
| [SPARK-9451] [SQL] Support entries larger than default page size in BytesToBytesMap & integrate with ShuffleMemoryManager |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-31 19:19:27 -0700 |
| Commit: 8cb415a, github.com/apache/spark/pull/7762 |
| |
| [SPARK-8936] [MLLIB] OnlineLDA document-topic Dirichlet hyperparameter optimization |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-31 18:36:22 -0700 |
| Commit: f51fd6f, github.com/apache/spark/pull/7836 |
| |
| [SPARK-8271][SQL]string function: soundex |
| HuJiayin <jiayin.hu@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-07-31 16:05:26 -0700 |
| Commit: 4d5a6e7, github.com/apache/spark/pull/7812 |
| |
| [SPARK-9233] [SQL] Enable code-gen in window function unit tests |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-31 14:13:06 -0700 |
| Commit: 3fc0cb9, github.com/apache/spark/pull/7832 |
| |
| [SPARK-9324] [SPARK-9322] [SPARK-9321] [SPARKR] Some aliases for R-like functions in DataFrames |
| Hossein <hossein@databricks.com> |
| 2015-07-31 14:07:41 -0700 |
| Commit: 710c2b5, github.com/apache/spark/pull/7764 |
| |
| [SPARK-9510] [SPARKR] Remaining SparkR style fixes |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-07-31 14:02:44 -0700 |
| Commit: 82f47b8, github.com/apache/spark/pull/7834 |
| |
| [SPARK-9507] [BUILD] Remove dependency reduced POM hack now that shade plugin is updated |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-31 21:51:55 +0100 |
| Commit: 6e5fd61, github.com/apache/spark/pull/7826 |
| |
| [SPARK-9490] [DOCS] [MLLIB] MLlib evaluation metrics guide example python code uses deprecated print statement |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-31 13:45:28 -0700 |
| Commit: 873ab0f, github.com/apache/spark/pull/7822 |
| |
| [SPARK-9466] [SQL] Increase two timeouts in CliSuite. |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-31 13:45:12 -0700 |
| Commit: 815c824, github.com/apache/spark/pull/7777 |
| |
| [SPARK-9308] [ML] ml.NaiveBayesModel support predicting class probabilities |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-31 13:11:42 -0700 |
| Commit: fbef566, github.com/apache/spark/pull/7672 |
| |
| [SPARK-9056] [STREAMING] Rename configuration `spark.streaming.minRememberDuration` to `spark.streaming.fileStream.minRememberDuration` |
| Sameer Abhyankar <sabhyankar@sabhyankar-MBP.local>, Sameer Abhyankar <sabhyankar@sabhyankar-MBP.Samavihome> |
| 2015-07-31 13:08:55 -0700 |
| Commit: 060c79a, github.com/apache/spark/pull/7740 |
| |
| [SPARK-9246] [MLLIB] DistributedLDAModel predict top docs per topic |
| Meihua Wu <meihuawu@umich.edu> |
| 2015-07-31 13:01:10 -0700 |
| Commit: 3c0d2e5, github.com/apache/spark/pull/7769 |
| |
| [SPARK-9202] capping maximum number of executor&driver information kept in Worker |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-07-31 20:27:00 +0100 |
| Commit: c068666, github.com/apache/spark/pull/7714 |
| |
| [SPARK-9481] Add logLikelihood to LocalLDAModel |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-31 12:12:22 -0700 |
| Commit: a8340fa, github.com/apache/spark/pull/7801 |
| |
| [SPARK-9504] [STREAMING] [TESTS] Use eventually to fix the flaky test |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-31 12:10:55 -0700 |
| Commit: d046347, github.com/apache/spark/pull/7823 |
| |
| [SPARK-8564] [STREAMING] Add the Python API for Kinesis |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-31 12:09:48 -0700 |
| Commit: 3afc1de, github.com/apache/spark/pull/6955 |
| |
| [SPARK-8640] [SQL] Enable Processing of Multiple Window Frames in a Single Window Operator |
| Herman van Hovell <hvanhovell@questtec.nl> |
| 2015-07-31 12:07:18 -0700 |
| Commit: 39ab199, github.com/apache/spark/pull/7515 |
| |
| [SPARK-8979] Add a PID based rate estimator |
| Iulian Dragos <jaguarul@gmail.com>, François Garillot <francois@garillot.net> |
| 2015-07-31 12:04:03 -0700 |
| Commit: 0a1d2ca, github.com/apache/spark/pull/7648 |
| |
| [SPARK-6885] [ML] decision tree support predict class probabilities |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-31 11:56:52 -0700 |
| Commit: e8bdcde, github.com/apache/spark/pull/7694 |
| |
| [SPARK-9231] [MLLIB] DistributedLDAModel method for top topics per document |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-31 11:50:15 -0700 |
| Commit: 4011a94, github.com/apache/spark/pull/7785 |
| |
| [SPARK-9471] [ML] Multilayer Perceptron |
| Alexander Ulanov <nashb@yandex.ru>, Bert Greevenbosch <opensrc@bertgreevenbosch.nl> |
| 2015-07-31 11:22:40 -0700 |
| Commit: 6add4ed, github.com/apache/spark/pull/7621 |
| |
| [SQL] address comments for to_date/trunc |
| Davies Liu <davies@databricks.com> |
| 2015-07-31 11:07:34 -0700 |
| Commit: 0024da9, github.com/apache/spark/pull/7817 |
| |
| [SPARK-9446] Clear Active SparkContext in stop() method |
| tedyu <yuzhihong@gmail.com> |
| 2015-07-31 18:16:55 +0100 |
| Commit: 27ae851, github.com/apache/spark/pull/7756 |
| |
| [SPARK-9497] [SPARK-9509] [CORE] Use ask instead of askWithRetry |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-31 09:34:10 -0700 |
| Commit: 04a49ed, github.com/apache/spark/pull/7824 |
| |
| [SPARK-9053] [SPARKR] Fix spaces around parens, infix operators etc. |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-31 09:33:38 -0700 |
| Commit: fc0e57e, github.com/apache/spark/pull/7584 |
| |
| [SPARK-9500] add TernaryExpression to simplify ternary expressions |
| Davies Liu <davies@databricks.com> |
| 2015-07-31 08:28:05 -0700 |
| Commit: 6bba750, github.com/apache/spark/pull/7816 |
| |
| [SPARK-9496][SQL]do not print the password in config |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-07-30 23:50:06 -0700 |
| Commit: a3a85d7, github.com/apache/spark/pull/7815 |
| |
| [SPARK-9152][SQL] Implement code generation for Like and RLike |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-30 23:05:58 -0700 |
| Commit: 0244170, github.com/apache/spark/pull/7561 |
| |
| [SPARK-9214] [ML] [PySpark] support ml.NaiveBayes for Python |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-30 23:03:48 -0700 |
| Commit: 69b62f7, github.com/apache/spark/pull/7568 |
| |
| [SPARK-7690] [ML] Multiclass classification Evaluator |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-07-30 23:02:11 -0700 |
| Commit: 4e5919b, github.com/apache/spark/pull/7475 |
| |
| [SPARK-8176] [SPARK-8197] [SQL] function to_date/ trunc |
| Daoyuan Wang <daoyuan.wang@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-07-30 19:22:38 -0700 |
| Commit: 83670fc, github.com/apache/spark/pull/6988 |
| |
| [SPARK-9472] [STREAMING] consistent hadoop configuration, streaming only |
| cody koeninger <cody@koeninger.org> |
| 2015-07-30 17:44:20 -0700 |
| Commit: 9307f56, github.com/apache/spark/pull/7772 |
| |
| [SPARK-9489] Remove unnecessary compatibility and requirements checks from Exchange |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-30 17:38:48 -0700 |
| Commit: 3c66ff7, github.com/apache/spark/pull/7807 |
| |
| [SPARK-9077] [MLLIB] Improve error message for decision trees when numExamples < maxCategoriesPerFeature |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-30 17:26:18 -0700 |
| Commit: 65fa418, github.com/apache/spark/pull/7800 |
| |
| [SPARK-6319][SQL] Throw AnalysisException when using BinaryType on Join and Aggregate |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-30 17:22:51 -0700 |
| Commit: 351eda0, github.com/apache/spark/pull/7787 |
| |
| [SPARK-9425] [SQL] support DecimalType in UnsafeRow |
| Davies Liu <davies@databricks.com> |
| 2015-07-30 17:18:32 -0700 |
| Commit: 0b1a464, github.com/apache/spark/pull/7758 |
| |
| [SPARK-9458][SPARK-9469][SQL] Code generate prefix computation in sorting & moves unsafe conversion out of TungstenSort. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-30 17:17:27 -0700 |
| Commit: e7a0976, github.com/apache/spark/pull/7803 |
| |
| [SPARK-7157][SQL] add sampleBy to DataFrame |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-30 17:16:03 -0700 |
| Commit: df32669, github.com/apache/spark/pull/7755 |
| |
| [SPARK-9408] [PYSPARK] [MLLIB] Refactor linalg.py to /linalg |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-30 16:57:38 -0700 |
| Commit: ca71cc8, github.com/apache/spark/pull/7731 |
| |
| [STREAMING] [TEST] [HOTFIX] Fixed Kinesis test to not throw weird errors when Kinesis tests are enabled without AWS keys |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-30 16:44:02 -0700 |
| Commit: 1afdeb7, github.com/apache/spark/pull/7809 |
| |
| [SPARK-9199] [CORE] Update Tachyon dependency from 0.6.4 -> 0.7.0 |
| Calvin Jia <jia.calvin@gmail.com> |
| 2015-07-30 16:32:40 -0700 |
| Commit: 04c8409, github.com/apache/spark/pull/7577 |
| |
| [SPARK-8742] [SPARKR] Improve SparkR error messages for DataFrame API |
| Hossein <hossein@databricks.com> |
| 2015-07-30 16:16:17 -0700 |
| Commit: 157840d, github.com/apache/spark/pull/7742 |
| |
| [SPARK-9463] [ML] Expose model coefficients with names in SparkR RFormula |
| Eric Liang <ekl@databricks.com> |
| 2015-07-30 16:15:43 -0700 |
| Commit: e7905a9, github.com/apache/spark/pull/7771 |
| |
| [SPARK-6684] [MLLIB] [ML] Add checkpointing to GBTs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-30 16:04:23 -0700 |
| Commit: be7be6d, github.com/apache/spark/pull/7804 |
| |
| [SPARK-8671] [ML] Added isotonic regression to the pipeline API. |
| martinzapletal <zapletal-martin@email.cz> |
| 2015-07-30 15:57:14 -0700 |
| Commit: 7f7a319, github.com/apache/spark/pull/7517 |
| |
| [SPARK-9479] [STREAMING] [TESTS] Fix ReceiverTrackerSuite failure for maven build and other potential test failures in Streaming |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-30 15:39:46 -0700 |
| Commit: 0dbd696, github.com/apache/spark/pull/7797 |
| |
| [SPARK-9454] Change LDASuite tests to use vector comparisons |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-30 14:08:59 -0700 |
| Commit: 89cda69, github.com/apache/spark/pull/7775 |
| |
| [SPARK-8186] [SPARK-8187] [SPARK-8194] [SPARK-8198] [SPARK-9133] [SPARK-9290] [SQL] functions: date_add, date_sub, add_months, months_between, time-interval calculation |
| Daoyuan Wang <daoyuan.wang@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-07-30 13:21:46 -0700 |
| Commit: 1abf7dc, github.com/apache/spark/pull/7589 |
| |
| [SPARK-5567] [MLLIB] Add predict method to LocalLDAModel |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-30 13:17:54 -0700 |
| Commit: d8cfd53, github.com/apache/spark/pull/7760 |
| |
| [SPARK-9460] Fix prefix generation for UTF8String. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-30 13:09:43 -0700 |
| Commit: a20e743, github.com/apache/spark/pull/7789 |
| |
| [SPARK-8174] [SPARK-8175] [SQL] function unix_timestamp, from_unixtime |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-30 11:13:15 -0700 |
| Commit: 6d94bf6, github.com/apache/spark/pull/7644 |
| |
| [SPARK-9437] [CORE] avoid overflow in SizeEstimator |
| Imran Rashid <irashid@cloudera.com> |
| 2015-07-30 10:46:26 -0700 |
| Commit: 06b6a07, github.com/apache/spark/pull/7750 |
| |
| [SPARK-8850] [SQL] Enable Unsafe mode by default |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-30 10:45:32 -0700 |
| Commit: 520ec0f, github.com/apache/spark/pull/7564 |
| |
| [SPARK-9388] [YARN] Make executor info log messages easier to read. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-30 10:40:04 -0700 |
| Commit: ab78b1d, github.com/apache/spark/pull/7706 |
| |
| [SPARK-8297] [YARN] Scheduler backend is not notified in case node fails in YARN |
| Mridul Muralidharan <mridulm@yahoo-inc.com>, Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-30 10:37:53 -0700 |
| Commit: e535346, github.com/apache/spark/pull/7431 |
| |
| [SPARK-9361] [SQL] Refactor new aggregation code to reduce the times of checking compatibility |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-30 10:30:37 -0700 |
| Commit: 5363ed7, github.com/apache/spark/pull/7677 |
| |
| [SPARK-9267] [CORE] Retire stringify(Partial)?Value from Accumulators |
| François Garillot <francois@garillot.net> |
| 2015-07-30 18:14:08 +0100 |
| Commit: 7bbf02f, github.com/apache/spark/pull/7678 |
| |
| [SPARK-9390][SQL] create a wrapper for array type |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-30 10:04:30 -0700 |
| Commit: c0cc0ea, github.com/apache/spark/pull/7724 |
| |
| [SPARK-9248] [SPARKR] Closing curly-braces should always be on their own line |
| Yuu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-30 10:00:27 -0700 |
| Commit: 7492a33, github.com/apache/spark/pull/7795 |
| |
| [MINOR] [MLLIB] fix doc for RegexTokenizer |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-30 09:45:17 -0700 |
| Commit: 81464f2, github.com/apache/spark/pull/7798 |
| |
| [SPARK-9277] [MLLIB] SparseVector constructor must throw an error when declared number of elements less than array length |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-30 09:19:55 -0700 |
| Commit: ed3cb1d, github.com/apache/spark/pull/7794 |
| |
| [SPARK-9225] [MLLIB] LDASuite needs unit tests for empty documents |
| Meihua Wu <meihuawu@umich.edu> |
| 2015-07-30 08:52:01 -0700 |
| Commit: a6e53a9, github.com/apache/spark/pull/7620 |
| |
| [SPARK-] [MLLIB] minor fix on tokenizer doc |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-30 08:20:52 -0700 |
| Commit: 9c0501c, github.com/apache/spark/pull/7791 |
| |
| [SPARK-8998] [MLLIB] Distribute PrefixSpan computation for large projected databases |
| zhangjiajin <zhangjiajin@huawei.com>, Feynman Liang <fliang@databricks.com>, zhang jiajin <zhangjiajin@huawei.com> |
| 2015-07-30 08:14:09 -0700 |
| Commit: d212a31, github.com/apache/spark/pull/7412 |
| |
| [SPARK-5561] [MLLIB] Generalized PeriodicCheckpointer for RDDs and Graphs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-30 07:56:15 -0700 |
| Commit: c581593, github.com/apache/spark/pull/7728 |
| |
| [SPARK-7368] [MLLIB] Add QR decomposition for RowMatrix |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-30 07:49:10 -0700 |
| Commit: d31c618, github.com/apache/spark/pull/5909 |
| |
| [SPARK-8838] [SQL] Add config to enable/disable merging part-files when merging parquet schema |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-30 17:45:30 +0800 |
| Commit: 6175d6c, github.com/apache/spark/pull/7238 |
| |
| Fix flaky HashedRelationSuite |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-30 01:21:39 -0700 |
| Commit: 5ba2d44, github.com/apache/spark/pull/7784 |
| |
| Revert "[SPARK-9458] Avoid object allocation in prefix generation." |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-30 01:04:24 -0700 |
| Commit: 4a8bb9d |
| |
| [SPARK-9335] [TESTS] Enable Kinesis tests only when files in extras/kinesis-asl are changed |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-30 00:46:36 -0700 |
| Commit: 76f2e39, github.com/apache/spark/pull/7711 |
| |
| [SPARK-8005][SQL] Input file name |
| Joseph Batchik <josephbatchik@gmail.com> |
| 2015-07-29 23:35:55 -0700 |
| Commit: 1221849, github.com/apache/spark/pull/7743 |
| |
| [SPARK-9428] [SQL] Add test cases for null inputs for expression unit tests |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-29 23:24:20 -0700 |
| Commit: e127ec3, github.com/apache/spark/pull/7748 |
| |
| HOTFIX: disable HashedRelationSuite. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 22:51:06 -0700 |
| Commit: 712465b |
| |
| [SPARK-9116] [SQL] [PYSPARK] support Python only UDT in __main__ |
| Davies Liu <davies@databricks.com> |
| 2015-07-29 22:30:49 -0700 |
| Commit: e044705, github.com/apache/spark/pull/7453 |
| |
| Fix reference to self.names in StructType |
| Alex Angelini <alex.louis.angelini@gmail.com> |
| 2015-07-29 22:25:38 -0700 |
| Commit: f5dd113, github.com/apache/spark/pull/7766 |
| |
| [SPARK-9462][SQL] Initialize nondeterministic expressions in code gen fallback mode. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 21:24:47 -0700 |
| Commit: 27850af, github.com/apache/spark/pull/7767 |
| |
| [SPARK-9460] Avoid byte array allocation in StringPrefixComparator. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 21:18:43 -0700 |
| Commit: 07fd7d3, github.com/apache/spark/pull/7765 |
| |
| [SPARK-9458] Avoid object allocation in prefix generation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 20:46:03 -0700 |
| Commit: 9514d87, github.com/apache/spark/pull/7763 |
| |
| [SPARK-9440] [MLLIB] Add hyperparameters to LocalLDAModel save/load |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-29 19:02:15 -0700 |
| Commit: a200e64, github.com/apache/spark/pull/7757 |
| |
| [SPARK-6129] [MLLIB] [DOCS] Added user guide for evaluation metrics |
| sethah <seth.hendrickson16@gmail.com> |
| 2015-07-29 18:23:07 -0700 |
| Commit: 2a9fe4a, github.com/apache/spark/pull/7655 |
| |
| [SPARK-9016] [ML] make random forest classifiers implement classification trait |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-29 18:18:29 -0700 |
| Commit: 37c2d19, github.com/apache/spark/pull/7432 |
| |
| [SPARK-8921] [MLLIB] Add @since tags to mllib.stat |
| Bimal Tandel <bimal@bimal-MBP.local> |
| 2015-07-29 16:54:58 -0700 |
| Commit: 103d8cc, github.com/apache/spark/pull/7730 |
| |
| [SPARK-9448][SQL] GenerateUnsafeProjection should not share expressions across instances. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 16:49:02 -0700 |
| Commit: 8650596, github.com/apache/spark/pull/7759 |
| |
| [SPARK-6793] [MLLIB] OnlineLDAOptimizer LDA perplexity |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-29 16:20:20 -0700 |
| Commit: 2cc212d, github.com/apache/spark/pull/7705 |
| |
| [SPARK-9411] [SQL] Make Tungsten page sizes configurable |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-29 16:00:30 -0700 |
| Commit: 1b0099f, github.com/apache/spark/pull/7741 |
| |
| [SPARK-9436] [GRAPHX] Pregel simplification patch |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-07-29 13:59:00 -0700 |
| Commit: b715933, github.com/apache/spark/pull/7749 |
| |
| [SPARK-9430][SQL] Rename IntervalType to CalendarIntervalType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 13:49:22 -0700 |
| Commit: 5340dfa, github.com/apache/spark/pull/7745 |
| |
| [SPARK-8977] [STREAMING] Defines the RateEstimator interface, and implements the RateController |
| Iulian Dragos <jaguarul@gmail.com>, François Garillot <francois@garillot.net> |
| 2015-07-29 13:47:37 -0700 |
| Commit: 819be46, github.com/apache/spark/pull/7600 |
| |
| [SPARK-746] [CORE] Added Avro Serialization to Kryo |
| Joseph Batchik <joseph.batchik@cloudera.com>, Joseph Batchik <josephbatchik@gmail.com> |
| 2015-07-29 14:02:32 -0500 |
| Commit: 069a4c4, github.com/apache/spark/pull/7004 |
| |
| [SPARK-9127][SQL] Rand/Randn codegen fails with long seed. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-29 09:36:22 -0700 |
| Commit: 9790694, github.com/apache/spark/pull/7747 |
| |
| [SPARK-9251][SQL] do not order by expressions which still need evaluation |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-29 00:08:45 -0700 |
| Commit: 708794e, github.com/apache/spark/pull/7593 |
| |
| [SPARK-9281] [SQL] use decimal or double when parsing SQL |
| Davies Liu <davies@databricks.com> |
| 2015-07-28 22:51:08 -0700 |
| Commit: 15667a0, github.com/apache/spark/pull/7642 |
| |
| [SPARK-9398] [SQL] Datetime cleanup |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-28 22:38:28 -0700 |
| Commit: 6309b93, github.com/apache/spark/pull/7725 |
| |
| [SPARK-9419] ShuffleMemoryManager and MemoryStore should track memory on a per-task, not per-thread, basis |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-28 21:53:28 -0700 |
| Commit: ea49705, github.com/apache/spark/pull/7734 |
| |
| [SPARK-8608][SPARK-8609][SPARK-9083][SQL] reset mutable states of nondeterministic expression before evaluation and fix PullOutNondeterministic |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-28 21:37:50 -0700 |
| Commit: 429b2f0, github.com/apache/spark/pull/7674 |
| |
| [SPARK-9422] [SQL] Remove the placeholder attributes used in the aggregation buffers |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-28 19:01:25 -0700 |
| Commit: 3744b7f, github.com/apache/spark/pull/7737 |
| |
| [SPARK-9421] Fix null-handling bugs in UnsafeRow.getDouble, getFloat(), and get(ordinal, dataType) |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-28 17:51:58 -0700 |
| Commit: e78ec1a, github.com/apache/spark/pull/7736 |
| |
| [SPARK-9418][SQL] Use sort-merge join as the default shuffle join. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 17:42:35 -0700 |
| Commit: 6662ee2, github.com/apache/spark/pull/7733 |
| |
| [SPARK-9420][SQL] Move expressions in sql/core package to catalyst. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 17:03:59 -0700 |
| Commit: b7f5411, github.com/apache/spark/pull/7735 |
| |
| [STREAMING] [HOTFIX] Ignore ReceiverTrackerSuite flaky test |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-28 16:41:56 -0700 |
| Commit: c5ed369, github.com/apache/spark/pull/7738 |
| |
| [SPARK-9393] [SQL] Fix several error-handling bugs in ScriptTransform operator |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-28 16:04:48 -0700 |
| Commit: 59b92ad, github.com/apache/spark/pull/7710 |
| |
| [SPARK-9247] [SQL] Use BytesToBytesMap for broadcast join |
| Davies Liu <davies@databricks.com> |
| 2015-07-28 15:56:19 -0700 |
| Commit: 2182552, github.com/apache/spark/pull/7592 |
| |
| [SPARK-7105] [PYSPARK] [MLLIB] Support model save/load in GMM |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-28 15:00:25 -0700 |
| Commit: 198d181, github.com/apache/spark/pull/7617 |
| |
| [SPARK-8003][SQL] Added virtual column support to Spark |
| Joseph Batchik <josephbatchik@gmail.com>, JD <jd@csh.rit.edu> |
| 2015-07-28 14:39:25 -0700 |
| Commit: b88b868, github.com/apache/spark/pull/7478 |
| |
| [SPARK-9391] [ML] Support minus, dot, and intercept operators in SparkR RFormula |
| Eric Liang <ekl@databricks.com> |
| 2015-07-28 14:16:57 -0700 |
| Commit: 8d5bb52, github.com/apache/spark/pull/7707 |
| |
| [SPARK-9196] [SQL] Ignore test DatetimeExpressionsSuite: function current_timestamp. |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-28 13:16:48 -0700 |
| Commit: 6cdcc21, github.com/apache/spark/pull/7727 |
| |
| [SPARK-9327] [DOCS] Fix documentation about classpath config options. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-28 11:48:56 -0700 |
| Commit: 31ec6a8, github.com/apache/spark/pull/7651 |
| |
| Use vector-friendly comparison for packages argument. |
| trestletech <jeff.allen@trestletechnology.net> |
| 2015-07-28 10:45:19 -0700 |
| Commit: 6143234, github.com/apache/spark/pull/7701 |
| |
| [SPARK-9397] DataFrame should provide an API to find source data files if applicable |
| Aaron Davidson <aaron@databricks.com> |
| 2015-07-28 10:12:09 -0700 |
| Commit: 35ef853, github.com/apache/spark/pull/7717 |
| |
| [SPARK-8196][SQL] Fix null handling & documentation for next_day. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 09:43:39 -0700 |
| Commit: 9bbe017, github.com/apache/spark/pull/7718 |
| |
| [SPARK-9373][SQL] follow up for StructType support in Tungsten projection. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 09:43:12 -0700 |
| Commit: c740bed, github.com/apache/spark/pull/7720 |
| |
| [SPARK-9402][SQL] Remove CodegenFallback from Abs / FormatNumber. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 09:42:35 -0700 |
| Commit: 5a2330e, github.com/apache/spark/pull/7723 |
| |
| [SPARK-8919] [DOCUMENTATION, MLLIB] Added @since tags to mllib.recommendation |
| vinodkc <vinod.kc.in@gmail.com> |
| 2015-07-28 08:48:57 -0700 |
| Commit: 4af622c, github.com/apache/spark/pull/7325 |
| |
| [EC2] Cosmetic fix for usage of spark-ec2 --ebs-vol-num option |
| Kenichi Maehashi <webmaster@kenichimaehashi.com> |
| 2015-07-28 15:57:21 +0100 |
| Commit: ac8c549, github.com/apache/spark/pull/7632 |
| |
| [SPARK-9394][SQL] Handle parentheses in CodeFormatter. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-28 00:52:26 -0700 |
| Commit: 15724fa, github.com/apache/spark/pull/7712 |
| |
| Closes #6836 since Round has already been implemented. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 23:56:16 -0700 |
| Commit: fc3bd96 |
| |
| [SPARK-9335] [STREAMING] [TESTS] Make sure the test stream is deleted in KinesisBackedBlockRDDSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-27 23:34:29 -0700 |
| Commit: d93ab93, github.com/apache/spark/pull/7663 |
| |
| [MINOR] [SQL] Support mutable expression unit test with codegen projection |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-27 23:02:23 -0700 |
| Commit: 9c5612f, github.com/apache/spark/pull/7566 |
| |
| [SPARK-9373][SQL] Support StructType in Tungsten projection |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 22:51:15 -0700 |
| Commit: 60f08c7, github.com/apache/spark/pull/7689 |
| |
| [SPARK-8828] [SQL] Revert SPARK-5680 |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-27 22:47:31 -0700 |
| Commit: 63a492b, github.com/apache/spark/pull/7667 |
| |
| Fixed a test failure. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 22:04:54 -0700 |
| Commit: 3bc7055 |
| |
| [SPARK-9395][SQL] Create a SpecializedGetters interface to track all the specialized getters. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 21:41:15 -0700 |
| Commit: 84da879, github.com/apache/spark/pull/7713 |
| |
| [SPARK-8195] [SPARK-8196] [SQL] udf next_day last_day |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-27 21:08:56 -0700 |
| Commit: 2e7f99a, github.com/apache/spark/pull/6986 |
| |
| [SPARK-8882] [STREAMING] Add a new Receiver scheduling mechanism |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-27 17:59:43 -0700 |
| Commit: daa1964, github.com/apache/spark/pull/7276 |
| |
| [SPARK-9386] [SQL] Feature flag for metastore partition pruning |
| Michael Armbrust <michael@databricks.com> |
| 2015-07-27 17:32:34 -0700 |
| Commit: ce89ff4, github.com/apache/spark/pull/7703 |
| |
| [SPARK-9230] [ML] Support StringType features in RFormula |
| Eric Liang <ekl@databricks.com> |
| 2015-07-27 17:17:49 -0700 |
| Commit: 8ddfa52, github.com/apache/spark/pull/7574 |
| |
| [SPARK-9385] [PYSPARK] Enable PEP8 but disable installing pylint. |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-27 15:49:42 -0700 |
| Commit: dafe8d8, github.com/apache/spark/pull/7704 |
| |
| [SPARK-4352] [YARN] [WIP] Incorporate locality preferences in dynamic allocation requests |
| jerryshao <saisai.shao@intel.com> |
| 2015-07-27 15:46:35 -0700 |
| Commit: ab62595, github.com/apache/spark/pull/6394 |
| |
| [SPARK-9385] [HOT-FIX] [PYSPARK] Comment out Python style check |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-27 15:18:48 -0700 |
| Commit: 2104931, github.com/apache/spark/pull/7702 |
| |
| [SPARK-8988] [YARN] Make sure driver log links appear in secure cluste… |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-07-27 15:16:46 -0700 |
| Commit: c1be9f3, github.com/apache/spark/pull/7624 |
| |
| [SPARK-9355][SQL] Remove InternalRow.get generic getter call in columnar cache code |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-27 13:40:50 -0700 |
| Commit: 3ab7525, github.com/apache/spark/pull/7673 |
| |
| [SPARK-9378] [SQL] Fixes test case "CTAS with serde" |
| Cheng Lian <lian@databricks.com> |
| 2015-07-27 13:28:03 -0700 |
| Commit: 8e7d2be, github.com/apache/spark/pull/7700 |
| |
| [SPARK-9349] [SQL] UDAF cleanup |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-27 13:26:57 -0700 |
| Commit: 55946e7, github.com/apache/spark/pull/7687 |
| |
| Closes #7690 since it has been merged into branch-1.4. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 13:21:04 -0700 |
| Commit: fa84e4a |
| |
| [HOTFIX] Disable pylint since it is failing master. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-27 12:25:34 -0700 |
| Commit: 85a50a6 |
| |
| [SPARK-9369][SQL] Support IntervalType in UnsafeRow |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-27 11:28:22 -0700 |
| Commit: 7543842, github.com/apache/spark/pull/7688 |
| |
| [SPARK-9351] [SQL] remove literals from grouping expressions in Aggregate |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-27 11:23:29 -0700 |
| Commit: dd9ae79, github.com/apache/spark/pull/7583 |
| |
| [SPARK-7423] [MLLIB] Modify ClassificationModel and Probabalistic model to use Vector.argmax |
| George Dittmar <georgedittmar@gmail.com> |
| 2015-07-27 11:16:33 -0700 |
| Commit: 1f7b3d9, github.com/apache/spark/pull/7670 |
| |
| [SPARK-9376] [SQL] use a seed in RandomDataGeneratorSuite |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-27 11:02:16 -0700 |
| Commit: e2f3816, github.com/apache/spark/pull/7691 |
| |
| [SPARK-9366] use task's stageAttemptId in TaskEnd event |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-07-27 12:54:08 -0500 |
| Commit: c0b7df6, github.com/apache/spark/pull/7681 |
| |
| [SPARK-9364] Fix array out of bounds and use-after-free bugs in UnsafeExternalSorter |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-27 09:34:49 -0700 |
| Commit: ecad9d4, github.com/apache/spark/pull/7680 |
| |
| Pregel example type fix |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-07-28 01:33:31 +0900 |
| Commit: 90006f3, github.com/apache/spark/pull/7695 |
| |
| [SPARK-4176] [SQL] Supports decimal types with precision > 18 in Parquet |
| Cheng Lian <lian@databricks.com> |
| 2015-07-27 23:29:40 +0800 |
| Commit: aa19c69, github.com/apache/spark/pull/7455 |
| |
| [SPARK-8405] [DOC] Add how to view logs on Web UI when yarn log aggregation is enabled |
| Carson Wang <carson.wang@intel.com> |
| 2015-07-27 08:02:40 -0500 |
| Commit: 6228381, github.com/apache/spark/pull/7463 |
| |
| [SPARK-7943] [SPARK-8105] [SPARK-8435] [SPARK-8714] [SPARK-8561] Fixes multi-database support |
| Cheng Lian <lian@databricks.com> |
| 2015-07-27 17:15:35 +0800 |
| Commit: 72981bc, github.com/apache/spark/pull/7623 |
| |
| [SPARK-9371][SQL] fix the support for special chars in column names for hive context |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-26 23:58:03 -0700 |
| Commit: 4ffd3a1, github.com/apache/spark/pull/7684 |
| |
| [SPARK-9368][SQL] Support get(ordinal, dataType) generic getter in UnsafeRow. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-26 23:01:04 -0700 |
| Commit: aa80c64, github.com/apache/spark/pull/7682 |
| |
| [SPARK-9306] [SQL] Don't use SortMergeJoin when joining on unsortable columns |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-26 22:13:37 -0700 |
| Commit: 945d8bc, github.com/apache/spark/pull/7645 |
| |
| [SPARK-8867][SQL] Support list / describe function usage |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-26 18:34:19 -0700 |
| Commit: 1efe97d, github.com/apache/spark/pull/7259 |
| |
| [SPARK-9095] [SQL] Removes the old Parquet support |
| Cheng Lian <lian@databricks.com> |
| 2015-07-26 16:49:19 -0700 |
| Commit: c025c3d, github.com/apache/spark/pull/7441 |
| |
| [SPARK-9326] Close lock file used for file downloads. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-07-26 13:35:16 -0700 |
| Commit: 6b2baec, github.com/apache/spark/pull/7650 |
| |
| [SPARK-9352] [SPARK-9353] Add tests for standalone scheduling code |
| Andrew Or <andrew@databricks.com> |
| 2015-07-26 13:03:13 -0700 |
| Commit: 1cf1976, github.com/apache/spark/pull/7668 |
| |
| [SPARK-9356][SQL]Remove the internal use of DecimalType.Unlimited |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-26 10:29:22 -0700 |
| Commit: fb5d43f, github.com/apache/spark/pull/7671 |
| |
| [SPARK-9354][SQL] Remove InternalRow.get generic getter call in Hive integration code. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-26 10:27:39 -0700 |
| Commit: 6c400b4, github.com/apache/spark/pull/7669 |
| |
| [SPARK-9337] [MLLIB] Add an ut for Word2Vec to verify the empty vocabulary check |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-26 14:02:20 +0100 |
| Commit: b79bf1d, github.com/apache/spark/pull/7660 |
| |
| [SPARK-9350][SQL] Introduce an InternalRow generic getter that requires a DataType |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-25 23:52:37 -0700 |
| Commit: 4a01bfc, github.com/apache/spark/pull/7666 |
| |
| [SPARK-8881] [SPARK-9260] Fix algorithm for scheduling executors on workers |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com> |
| 2015-07-25 22:56:25 -0700 |
| Commit: 41a7cdf, github.com/apache/spark/pull/7274 |
| |
| [SPARK-9348][SQL] Remove apply method on InternalRow. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-25 18:41:51 -0700 |
| Commit: b1f4b4a, github.com/apache/spark/pull/7665 |
| |
| [SPARK-9192][SQL] add initialization phase for nondeterministic expression |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-25 12:10:02 -0700 |
| Commit: 2c94d0f, github.com/apache/spark/pull/7535 |
| |
| [SPARK-9285] [SQL] Fixes Row/InternalRow conversion for HadoopFsRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-07-25 11:42:49 -0700 |
| Commit: e2ec018, github.com/apache/spark/pull/7649 |
| |
| [SPARK-9304] [BUILD] Improve backwards compatibility of SPARK-8401 |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-25 11:05:08 +0100 |
| Commit: c980e20, github.com/apache/spark/pull/7639 |
| |
| [SPARK-9334][SQL] Remove UnsafeRowConverter in favor of UnsafeProjection. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-25 01:37:41 -0700 |
| Commit: 215713e, github.com/apache/spark/pull/7658 |
| |
| [SPARK-9336][SQL] Remove extra JoinedRows |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-25 01:28:46 -0700 |
| Commit: f0ebab3, github.com/apache/spark/pull/7659 |
| |
| [Spark-8668][SQL] Adding expr to functions |
| JD <jd@csh.rit.edu>, Joseph Batchik <josephbatchik@gmail.com> |
| 2015-07-25 00:34:59 -0700 |
| Commit: 723db13, github.com/apache/spark/pull/7606 |
| |
| [HOTFIX] - Disable Kinesis tests due to rate limits |
| Patrick Wendell <patrick@databricks.com> |
| 2015-07-24 22:57:01 -0700 |
| Commit: 19bcd6a |
| |
| [SPARK-9331][SQL] Add a code formatter to auto-format generated code. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 19:35:24 -0700 |
| Commit: c84acd4, github.com/apache/spark/pull/7656 |
| |
| [SPARK-9330][SQL] Create specialized getStruct getter in InternalRow. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 19:29:01 -0700 |
| Commit: f99cb56, github.com/apache/spark/pull/7654 |
| |
| [SPARK-7045] [MLLIB] Avoid intermediate representation when creating model |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-24 14:58:07 -0700 |
| Commit: a400ab5, github.com/apache/spark/pull/5748 |
| |
| [SPARK-9067] [SQL] Close reader in NewHadoopRDD early if there is no more data |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-24 12:36:44 -0700 |
| Commit: 64135cb, github.com/apache/spark/pull/7424 |
| |
| [SPARK-9270] [PYSPARK] allow --name option in pyspark |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-07-24 11:56:55 -0700 |
| Commit: 9a11396, github.com/apache/spark/pull/7610 |
| |
| [SPARK-9261] [STREAMING] Avoid calling APIs that expose shaded classes. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-24 11:53:16 -0700 |
| Commit: 8399ba1, github.com/apache/spark/pull/7601 |
| |
| [SPARK-9295] Analysis should detect sorting on unsupported column types |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-24 11:34:23 -0700 |
| Commit: 6aceaf3, github.com/apache/spark/pull/7633 |
| |
| [SPARK-9222] [MLlib] Make class instantiation variables in DistributedLDAModel private[clustering] |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-24 10:56:48 -0700 |
| Commit: e253124, github.com/apache/spark/pull/7573 |
| |
| [SPARK-9292] Analysis should check that join conditions' data types are BooleanType |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-24 09:49:50 -0700 |
| Commit: c2b50d6, github.com/apache/spark/pull/7630 |
| |
| [SPARK-9305] Rename org.apache.spark.Row to Item. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 09:38:13 -0700 |
| Commit: c8d71a4, github.com/apache/spark/pull/7638 |
| |
| [SPARK-9285][SQL] Remove InternalRow's inheritance from Row. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 09:37:36 -0700 |
| Commit: 431ca39, github.com/apache/spark/pull/7626 |
| |
| [SPARK-9249] [SPARKR] local variable assigned but may not be used |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-24 09:10:11 -0700 |
| Commit: 3aec9f4, github.com/apache/spark/pull/7640 |
| |
| [SPARK-9250] Make change-scala-version more helpful w.r.t. valid Scala versions |
| François Garillot <francois@garillot.net> |
| 2015-07-24 17:09:33 +0100 |
| Commit: 428cde5, github.com/apache/spark/pull/7595 |
| |
| [SPARK-9238] [SQL] Remove two extra useless entries for bytesOfCodePointInUTF8 |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-24 08:34:50 -0700 |
| Commit: 846cf46, github.com/apache/spark/pull/7582 |
| |
| [SPARK-9069] [SQL] follow up |
| Davies Liu <davies@databricks.com> |
| 2015-07-24 08:24:13 -0700 |
| Commit: dfb18be, github.com/apache/spark/pull/7634 |
| |
| [SPARK-9236] [CORE] Make defaultPartitioner not reuse a parent RDD's partitioner if it has 0 partitions |
| François Garillot <francois@garillot.net> |
| 2015-07-24 15:41:13 +0100 |
| Commit: 6cd28cc, github.com/apache/spark/pull/7616 |
| |
| [SPARK-8756] [SQL] Keep cached information and avoid re-calculating footers in ParquetRelation2 |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-24 17:39:57 +0800 |
| Commit: 6a7e537, github.com/apache/spark/pull/7154 |
| |
| [build] Enable memory leak detection for Tungsten. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 01:47:13 -0700 |
| Commit: 8fe32b4, github.com/apache/spark/pull/7637 |
| |
| [SPARK-9200][SQL] Don't implicitly cast non-atomic types to string type. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-24 01:18:43 -0700 |
| Commit: cb8c241, github.com/apache/spark/pull/7636 |
| |
| [SPARK-9294][SQL] cleanup comments, code style, naming typo for the new aggregation |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-23 23:40:01 -0700 |
| Commit: 408e64b, github.com/apache/spark/pull/7619 |
| |
| [SPARK-8092] [ML] Allow OneVsRest Classifier feature and label column names to be configurable. |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-07-23 22:35:41 -0700 |
| Commit: d4d762f, github.com/apache/spark/pull/6631 |
| |
| [SPARK-9216] [STREAMING] Define KinesisBackedBlockRDDs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-23 20:06:54 -0700 |
| Commit: d249636, github.com/apache/spark/pull/7578 |
| |
| [SPARK-9122] [MLLIB] [PySpark] spark.mllib regression support batch predict |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-23 18:53:07 -0700 |
| Commit: 52de3ac, github.com/apache/spark/pull/7614 |
| |
| [SPARK-9069] [SPARK-9264] [SQL] remove unlimited precision support for DecimalType |
| Davies Liu <davies@databricks.com> |
| 2015-07-23 18:31:13 -0700 |
| Commit: 8a94eb2, github.com/apache/spark/pull/7605 |
| |
| [SPARK-9207] [SQL] Enables Parquet filter push-down by default |
| Cheng Lian <lian@databricks.com> |
| 2015-07-23 17:49:33 -0700 |
| Commit: bebe3f7, github.com/apache/spark/pull/7612 |
| |
| [SPARK-9286] [SQL] Methods in Unevaluable should be final and AlgebraicAggregate should extend Unevaluable. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-23 16:08:07 -0700 |
| Commit: b2f3aca, github.com/apache/spark/pull/7627 |
| |
| [SPARK-5447][SQL] Replace reference 'schema rdd' with DataFrame @rxin. |
| David Arroyo Cazorla <darroyo@stratio.com> |
| 2015-07-23 10:34:32 -0700 |
| Commit: 662d60d, github.com/apache/spark/pull/7618 |
| |
| [SPARK-9243] [Documentation] null -> zero in crosstab doc |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-23 10:32:11 -0700 |
| Commit: ecfb312, github.com/apache/spark/pull/7608 |
| |
| [SPARK-9183] confusing error message when looking up missing function in Spark SQL |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-23 10:31:12 -0700 |
| Commit: d2666a3, github.com/apache/spark/pull/7613 |
| |
| [Build][Minor] Fix building error & performance |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-23 10:28:20 -0700 |
| Commit: 19aeab5, github.com/apache/spark/pull/7611 |
| |
| [SPARK-9082] [SQL] [FOLLOW-UP] use `partition` in `PushPredicateThroughProject` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-23 09:37:53 -0700 |
| Commit: 52ef76d, github.com/apache/spark/pull/7607 |
| |
| [SPARK-9212] [CORE] upgrade Netty version to 4.0.29.Final |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-07-23 12:43:54 +0100 |
| Commit: 26ed22a, github.com/apache/spark/pull/7562 |
| |
| Revert "[SPARK-8579] [SQL] support arbitrary object in UnsafeRow" |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-23 01:51:34 -0700 |
| Commit: fb36397, github.com/apache/spark/pull/7591 |
| |
| [SPARK-9266] Prevent "managed memory leak detected" exception from masking original exception |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-23 00:43:26 -0700 |
| Commit: ac3ae0f, github.com/apache/spark/pull/7603 |
| |
| [SPARK-8695] [CORE] [MLLIB] TreeAggregation shouldn't be triggered when it doesn't save wall-clock time. |
| Perinkulam I. Ganesh <gip@us.ibm.com> |
| 2015-07-23 07:46:20 +0100 |
| Commit: b983d49, github.com/apache/spark/pull/7397 |
| |
| [SPARK-8935] [SQL] Implement code generation for all casts |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-22 23:44:08 -0700 |
| Commit: 6d0d8b4, github.com/apache/spark/pull/7365 |
| |
| [SPARK-7254] [MLLIB] Run PowerIterationClustering directly on graph |
| Liang-Chi Hsieh <viirya@appier.com>, Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-22 23:29:26 -0700 |
| Commit: 825ab1e, github.com/apache/spark/pull/6054 |
| |
| [SPARK-9268] [ML] Removed varargs annotation from Params.setDefault taking multiple params |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-22 23:27:25 -0700 |
| Commit: 410dd41, github.com/apache/spark/pull/7604 |
| |
| [SPARK-8364] [SPARKR] Add crosstab to SparkR DataFrames |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-22 21:40:23 -0700 |
| Commit: 2f5cbd8, github.com/apache/spark/pull/7318 |
| |
| [SPARK-9144] Remove DAGScheduler.runLocallyWithinThread and spark.localExecution.enabled |
| Josh Rosen <joshrosen@databricks.com>, Reynold Xin <rxin@databricks.com> |
| 2015-07-22 21:04:04 -0700 |
| Commit: b217230, github.com/apache/spark/pull/7585 |
| |
| [SPARK-9262][build] Treat Scala compiler warnings as errors |
| Reynold Xin <rxin@databricks.com>, Eric Liang <ekl@databricks.com> |
| 2015-07-22 21:02:19 -0700 |
| Commit: d71a13f, github.com/apache/spark/pull/7598 |
| |
| [SPARK-8484] [ML] Added TrainValidationSplit for hyper-parameter tuning. |
| martinzapletal <zapletal-martin@email.cz> |
| 2015-07-22 17:35:05 -0700 |
| Commit: a721ee5, github.com/apache/spark/pull/7337 |
| |
| [SPARK-9223] [PYSPARK] [MLLIB] Support model save/load in LDA |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-22 17:22:12 -0700 |
| Commit: 5307c9d, github.com/apache/spark/pull/7587 |
| |
| [SPARK-9180] fix spark-shell to accept --name option |
| Kenichi Maehashi <webmaster@kenichimaehashi.com> |
| 2015-07-22 16:15:44 -0700 |
| Commit: 430cd78, github.com/apache/spark/pull/7512 |
| |
| [SPARK-8975] [STREAMING] Adds a mechanism to send a new rate from the driver to the block generator |
| Iulian Dragos <jaguarul@gmail.com>, François Garillot <francois@garillot.net> |
| 2015-07-22 15:54:08 -0700 |
| Commit: 798dff7, github.com/apache/spark/pull/7471 |
| |
| [SPARK-9244] Increase some memory defaults |
| Matei Zaharia <matei@databricks.com> |
| 2015-07-22 15:28:09 -0700 |
| Commit: fe26584, github.com/apache/spark/pull/7586 |
| |
| [SPARK-8536] [MLLIB] Generalize OnlineLDAOptimizer to asymmetric document-topic Dirichlet priors |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-22 15:07:05 -0700 |
| Commit: 1aca9c1, github.com/apache/spark/pull/7575 |
| |
| [SPARK-4366] [SQL] [Follow-up] Fix SqlParser compiling warning. |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-22 13:28:09 -0700 |
| Commit: cf21d05, github.com/apache/spark/pull/7588 |
| |
| [SPARK-9224] [MLLIB] OnlineLDA Performance Improvements |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-22 13:06:01 -0700 |
| Commit: 8486cd8, github.com/apache/spark/pull/7454 |
| |
| [SPARK-9024] Unsafe HashJoin/HashOuterJoin/HashSemiJoin |
| Davies Liu <davies@databricks.com> |
| 2015-07-22 13:02:43 -0700 |
| Commit: e0b7ba5, github.com/apache/spark/pull/7480 |
| |
| [SPARK-9165] [SQL] codegen for CreateArray, CreateStruct and CreateNamedStruct |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-22 12:19:59 -0700 |
| Commit: 86f80e2, github.com/apache/spark/pull/7537 |
| |
| [SPARK-9082] [SQL] Filter using non-deterministic expressions should not be pushed down |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-22 11:45:51 -0700 |
| Commit: 7652095, github.com/apache/spark/pull/7446 |
| |
| [SPARK-9254] [BUILD] [HOTFIX] sbt-launch-lib.bash should support HTTP/HTTPS redirection |
| Cheng Lian <lian@databricks.com> |
| 2015-07-22 09:32:42 -0700 |
| Commit: b55a36b, github.com/apache/spark/pull/7597 |
| |
| [SPARK-4233] [SPARK-4367] [SPARK-3947] [SPARK-3056] [SQL] Aggregation Improvement |
| Yin Huai <yhuai@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-07-21 23:26:11 -0700 |
| Commit: c03299a, github.com/apache/spark/pull/7458 |
| |
| [SPARK-9232] [SQL] Duplicate code in JSONRelation |
| Andrew Or <andrew@databricks.com> |
| 2015-07-21 23:00:13 -0700 |
| Commit: f4785f5, github.com/apache/spark/pull/7576 |
| |
| [SPARK-9121] [SPARKR] Get rid of the warnings about `no visible global function definition` in SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-21 22:50:27 -0700 |
| Commit: 63f4bcc, github.com/apache/spark/pull/7567 |
| |
| [SPARK-9154][SQL] Rename formatString to format_string. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-21 19:14:07 -0700 |
| Commit: a4c83cb, github.com/apache/spark/pull/7579 |
| |
| [SPARK-9154] [SQL] codegen StringFormat |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-21 15:47:40 -0700 |
| Commit: d4c7a7a, github.com/apache/spark/pull/7571 |
| |
| [SPARK-9206] [SQL] Fix HiveContext classloading for GCS connector. |
| Dennis Huo <dhuo@google.com> |
| 2015-07-21 13:12:11 -0700 |
| Commit: c07838b, github.com/apache/spark/pull/7549 |
| |
| [SPARK-8906][SQL] Move all internal data source classes into execution.datasources. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-21 11:56:38 -0700 |
| Commit: 60c0ce1, github.com/apache/spark/pull/7565 |
| |
| [SPARK-8357] Fix unsafe memory leak on empty inputs in GeneratedAggregate |
| navis.ryu <navis@apache.org>, Josh Rosen <joshrosen@databricks.com> |
| 2015-07-21 11:52:52 -0700 |
| Commit: 9ba7c64, github.com/apache/spark/pull/6810 |
| |
| Revert "[SPARK-9154] [SQL] codegen StringFormat" |
| Michael Armbrust <michael@databricks.com> |
| 2015-07-21 11:18:39 -0700 |
| Commit: 87d890c, github.com/apache/spark/pull/7570 |
| |
| [SPARK-5989] [MLLIB] Model save/load for LDA |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-21 10:31:31 -0700 |
| Commit: 89db3c0, github.com/apache/spark/pull/6948 |
| |
| [SPARK-9154] [SQL] codegen StringFormat |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-21 09:58:16 -0700 |
| Commit: 7f072c3, github.com/apache/spark/pull/7546 |
| |
| [SPARK-5423] [CORE] Register a TaskCompletionListener to make sure release all resources |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-21 09:55:42 -0700 |
| Commit: d45355e, github.com/apache/spark/pull/7529 |
| |
| [SPARK-4598] [WEBUI] Task table pagination for the Stage page |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-21 09:54:39 -0700 |
| Commit: 4f7f1ee, github.com/apache/spark/pull/7399 |
| |
| [SPARK-7171] Added a method to retrieve metrics sources in TaskContext |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-07-21 09:53:33 -0700 |
| Commit: 3195491, github.com/apache/spark/pull/5805 |
| |
| [SPARK-9128] [CORE] Get outerclasses and objects with only one method calling in ClosureCleaner |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-21 09:52:27 -0700 |
| Commit: 9a4fd87, github.com/apache/spark/pull/7459 |
| |
| [SPARK-9036] [CORE] SparkListenerExecutorMetricsUpdate messages not included in JsonProtocol |
| Ben <benjaminpiering@gmail.com> |
| 2015-07-21 09:51:13 -0700 |
| Commit: f67da43, github.com/apache/spark/pull/7555 |
| |
| [SPARK-9193] Avoid assigning tasks to "lost" executor(s) |
| Grace <jie.huang@intel.com> |
| 2015-07-21 11:35:49 -0500 |
| Commit: 6592a60, github.com/apache/spark/pull/7528 |
| |
| [SPARK-8915] [DOCUMENTATION, MLLIB] Added @since tags to mllib.classification |
| petz2000 <petz2000@gmail.com> |
| 2015-07-21 08:50:43 -0700 |
| Commit: df4ddb3, github.com/apache/spark/pull/7371 |
| |
| [SPARK-9081] [SPARK-9168] [SQL] nanvl & dropna/fillna supporting nan as well |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-21 08:25:50 -0700 |
| Commit: be5c5d3, github.com/apache/spark/pull/7523 |
| |
| [SPARK-8401] [BUILD] Scala version switching build enhancements |
| Michael Allman <michael@videoamp.com> |
| 2015-07-21 11:14:31 +0100 |
| Commit: f5b6dc5, github.com/apache/spark/pull/6832 |
| |
| [SPARK-8875] Remove BlockStoreShuffleFetcher class |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-07-21 01:12:51 -0700 |
| Commit: 6364735, github.com/apache/spark/pull/7268 |
| |
| [SPARK-9173][SQL]UnionPushDown should also support Intersect and Except |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-21 00:56:57 -0700 |
| Commit: ae23059, github.com/apache/spark/pull/7540 |
| |
| [SPARK-8230][SQL] Add array/map size method |
| Pedro Rodriguez <ski.rodriguez@gmail.com>, Pedro Rodriguez <prodriguez@trulia.com> |
| 2015-07-21 00:53:20 -0700 |
| Commit: 560c658, github.com/apache/spark/pull/7462 |
| |
| [SPARK-8255] [SPARK-8256] [SQL] Add regex_extract/regex_replace |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-21 00:48:07 -0700 |
| Commit: 8c8f0ef, github.com/apache/spark/pull/7468 |
| |
| [SPARK-9100] [SQL] Adds DataFrame reader/writer shortcut methods for ORC |
| Cheng Lian <lian@databricks.com> |
| 2015-07-21 15:08:44 +0800 |
| Commit: d38c502, github.com/apache/spark/pull/7444 |
| |
| [SPARK-9161][SQL] codegen FormatNumber |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 23:33:07 -0700 |
| Commit: 1ddd0f2, github.com/apache/spark/pull/7545 |
| |
| [SPARK-9179] [BUILD] Use default primary author if unspecified |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-07-20 23:31:08 -0700 |
| Commit: 228ab65, github.com/apache/spark/pull/7558 |
| |
| [SPARK-9023] [SQL] Followup for #7456 (Efficiency improvements for UnsafeRows in Exchange) |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-20 23:28:35 -0700 |
| Commit: 48f8fd4, github.com/apache/spark/pull/7551 |
| |
| [SPARK-9208][SQL] Remove variant of DataFrame string functions that accept column names. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-20 22:48:13 -0700 |
| Commit: 67570be, github.com/apache/spark/pull/7556 |
| |
| [SPARK-9157] [SQL] codegen substring |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 22:43:30 -0700 |
| Commit: 560b355, github.com/apache/spark/pull/7534 |
| |
| [SPARK-8797] [SPARK-9146] [SPARK-9145] [SPARK-9147] Support NaN ordering and equality comparisons in Spark SQL |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-20 22:38:05 -0700 |
| Commit: c032b0b, github.com/apache/spark/pull/7194 |
| |
| [SPARK-9204][ML] Add default params test for linearyregression suite |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-20 22:15:10 -0700 |
| Commit: 4d97be9, github.com/apache/spark/pull/7553 |
| |
| [SPARK-9132][SPARK-9163][SQL] codegen conv |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 22:08:12 -0700 |
| Commit: a3c7a3c, github.com/apache/spark/pull/7552 |
| |
| [SPARK-9201] [ML] Initial integration of MLlib + SparkR using RFormula |
| Eric Liang <ekl@databricks.com> |
| 2015-07-20 20:49:38 -0700 |
| Commit: 1cbdd89, github.com/apache/spark/pull/7483 |
| |
| [SPARK-9052] [SPARKR] Fix comments after curly braces |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-21 11:38:22 +0900 |
| Commit: 2bdf991, github.com/apache/spark/pull/7440 |
| |
| [SPARK-9164] [SQL] codegen hex/unhex |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 19:17:59 -0700 |
| Commit: 936a96c, github.com/apache/spark/pull/7548 |
| |
| [SPARK-9142][SQL] Removing unnecessary self types in expressions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-20 18:23:51 -0700 |
| Commit: e90543e, github.com/apache/spark/pull/7550 |
| |
| [SPARK-9156][SQL] codegen StringSplit |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 18:21:05 -0700 |
| Commit: 6853ac7, github.com/apache/spark/pull/7547 |
| |
| [SPARK-9178][SQL] Add an empty string constant to UTF8String |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 18:16:49 -0700 |
| Commit: 047ccc8, github.com/apache/spark/pull/7509 |
| |
| [SPARK-9187] [WEBUI] Timeline view may show negative value for running tasks |
| Carson Wang <carson.wang@intel.com> |
| 2015-07-20 18:08:59 -0700 |
| Commit: 66bb800, github.com/apache/spark/pull/7526 |
| |
| [SPARK-9175] [MLLIB] BLAS.gemm fails to update matrix C when alpha==0 and beta!=1 |
| Meihua Wu <meihuawu@umich.edu> |
| 2015-07-20 17:03:46 -0700 |
| Commit: ff3c72d, github.com/apache/spark/pull/7503 |
| |
| [SPARK-9198] [MLLIB] [PYTHON] Fixed typo in pyspark sparsevector doc tests |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-20 16:49:55 -0700 |
| Commit: a5d0581, github.com/apache/spark/pull/7541 |
| |
| [SPARK-8125] [SQL] Accelerates Parquet schema merging and partition discovery |
| Cheng Lian <lian@databricks.com> |
| 2015-07-20 16:42:43 -0700 |
| Commit: a1064df, github.com/apache/spark/pull/7396 |
| |
| [SPARK-9160][SQL] codegen encode, decode |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 16:11:56 -0700 |
| Commit: dac7dbf, github.com/apache/spark/pull/7543 |
| |
| [SPARK-9159][SQL] codegen ascii, base64, unbase64 |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 15:32:46 -0700 |
| Commit: c9db8ea, github.com/apache/spark/pull/7542 |
| |
| [SPARK-9155][SQL] codegen StringSpace |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 15:23:28 -0700 |
| Commit: 4863c11, github.com/apache/spark/pull/7531 |
| |
| [SPARK-6910] [SQL] Support for pushing predicates down to metastore for partition pruning |
| Cheolsoo Park <cheolsoop@netflix.com>, Cheng Lian <lian@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-07-20 15:12:06 -0700 |
| Commit: dde0e12, github.com/apache/spark/pull/7492 |
| |
| [SPARK-9114] [SQL] [PySpark] convert returned object from UDF into internal type |
| Davies Liu <davies@databricks.com> |
| 2015-07-20 12:14:47 -0700 |
| Commit: 9f913c4, github.com/apache/spark/pull/7450 |
| |
| [SPARK-9101] [PySpark] Add missing NullType |
| Mateusz Buśkiewicz <mateusz.buskiewicz@getbase.com> |
| 2015-07-20 12:00:48 -0700 |
| Commit: 02181fb, github.com/apache/spark/pull/7499 |
| |
| [SPARK-8103][core] DAGScheduler should not submit multiple concurrent attempts for a stage |
| Imran Rashid <irashid@cloudera.com>, Kay Ousterhout <kayousterhout@gmail.com>, Imran Rashid <squito@users.noreply.github.com> |
| 2015-07-20 10:28:32 -0700 |
| Commit: 80e2568, github.com/apache/spark/pull/6750 |
| |
| [SQL] Remove space from DataFrame Scala/Java API. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-20 09:43:25 -0700 |
| Commit: c6fe9b4, github.com/apache/spark/pull/7530 |
| |
| [SPARK-9186][SQL] make deterministic describing the tree rather than the expression |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-20 09:42:18 -0700 |
| Commit: 04db58a, github.com/apache/spark/pull/7525 |
| |
| [SPARK-9177][SQL] Reuse of calendar object in WeekOfYear |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 09:41:25 -0700 |
| Commit: a15ecd0, github.com/apache/spark/pull/7516 |
| |
| [SPARK-9153][SQL] codegen StringLPad/StringRPad |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-20 09:35:45 -0700 |
| Commit: 5112b7f, github.com/apache/spark/pull/7527 |
| |
| [SPARK-8996] [MLLIB] [PYSPARK] Python API for Kolmogorov-Smirnov Test |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-20 09:00:01 -0700 |
| Commit: d0b4e93, github.com/apache/spark/pull/7430 |
| |
| [SPARK-7422] [MLLIB] Add argmax to Vector, SparseVector |
| George Dittmar <georgedittmar@gmail.com>, George <dittmar@Georges-MacBook-Pro.local>, dittmarg <george.dittmar@webtrends.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-07-20 08:55:37 -0700 |
| Commit: 3f7de7d, github.com/apache/spark/pull/6112 |
| |
| [SPARK-9023] [SQL] Efficiency improvements for UnsafeRows in Exchange |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-19 23:41:28 -0700 |
| Commit: 79ec072, github.com/apache/spark/pull/7456 |
| |
| [SQL][DOC] Minor document fix in HadoopFsRelationProvider |
| Jacky Li <lee.unreal@gmail.com>, Jacky Li <jackylk@users.noreply.github.com> |
| 2015-07-19 23:19:17 -0700 |
| Commit: 972d890, github.com/apache/spark/pull/7524 |
| |
| Code review feedback for the previous patch. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-19 22:45:56 -0700 |
| Commit: 5bdf16d |
| |
| [SPARK-9185][SQL] improve code gen for mutable states to support complex initialization |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-19 22:42:44 -0700 |
| Commit: 930253e, github.com/apache/spark/pull/7521 |
| |
| [SPARK-9172][SQL] Make DecimalPrecision support for Intersect and Except |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-19 20:53:18 -0700 |
| Commit: d743bec, github.com/apache/spark/pull/7511 |
| |
| [SPARK-9030] [STREAMING] [HOTFIX] Make sure that no attempts to create Kinesis streams are made when not enabled |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-19 20:34:30 -0700 |
| Commit: 93eb2ac, github.com/apache/spark/pull/7519 |
| |
| [SPARK-8241][SQL] string function: concat_ws. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-19 16:48:47 -0700 |
| Commit: 163e3f1, github.com/apache/spark/pull/7504 |
| |
| [SPARK-8638] [SQL] Window Function Performance Improvements - Cleanup |
| Herman van Hovell <hvanhovell@questtec.nl> |
| 2015-07-19 16:29:50 -0700 |
| Commit: 7a81245, github.com/apache/spark/pull/7513 |
| |
| [SPARK-9021] [PYSPARK] Change RDD.aggregate() to do reduce(mapPartitions()) instead of mapPartitions.fold() |
| Nicholas Hwang <moogling@gmail.com> |
| 2015-07-19 10:30:28 -0700 |
| Commit: a803ac3, github.com/apache/spark/pull/7378 |
| |
| [HOTFIX] [SQL] Fixes compilation error introduced by PR #7506 |
| Cheng Lian <lian@databricks.com> |
| 2015-07-19 18:58:19 +0800 |
| Commit: 34ed82b, github.com/apache/spark/pull/7510 |
| |
| [SPARK-9179] [BUILD] Allows committers to specify primary author of the PR to be merged |
| Cheng Lian <lian@databricks.com> |
| 2015-07-19 17:37:25 +0800 |
| Commit: bc24289, github.com/apache/spark/pull/7508 |
| |
| [SQL] Make date/time functions more consistent with other database systems. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-19 01:17:22 -0700 |
| Commit: 3427937, github.com/apache/spark/pull/7506 |
| |
| [SPARK-8199][SQL] follow up; revert change in test |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-19 01:16:01 -0700 |
| Commit: a53d13f, github.com/apache/spark/pull/7505 |
| |
| [SPARK-9094] [PARENT] Increased io.dropwizard.metrics from 3.1.0 to 3.1.2 |
| Carl Anders Düvel <c.a.duevel@gmail.com> |
| 2015-07-19 09:14:55 +0100 |
| Commit: 344d156, github.com/apache/spark/pull/7493 |
| |
| [SPARK-9166][SQL][PYSPARK] Capture and hide IllegalArgumentException in Python API |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-19 00:32:56 -0700 |
| Commit: 9b644c4, github.com/apache/spark/pull/7497 |
| |
| Closes #6775 since it is subsumed by other patches. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 23:47:40 -0700 |
| Commit: 89d1358 |
| |
| [SPARK-8638] [SQL] Window Function Performance Improvements |
| Herman van Hovell <hvanhovell@questtec.nl> |
| 2015-07-18 23:44:38 -0700 |
| Commit: a9a0d0c, github.com/apache/spark/pull/7057 |
| |
| Fixed test cases. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 22:50:34 -0700 |
| Commit: 04c1b49 |
| |
| [SPARK-8199][SPARK-8184][SPARK-8183][SPARK-8182][SPARK-8181][SPARK-8180][SPARK-8179][SPARK-8177][SPARK-8178][SPARK-9115][SQL] date functions |
| Tarek Auel <tarek.auel@googlemail.com>, Tarek Auel <tarek.auel@gmail.com> |
| 2015-07-18 22:48:05 -0700 |
| Commit: 83b682b, github.com/apache/spark/pull/6981 |
| |
| [SPARK-8443][SQL] Split GenerateMutableProjection Codegen due to JVM Code Size Limits |
| Forest Fang <forest.fang@outlook.com> |
| 2015-07-18 21:05:44 -0700 |
| Commit: 6cb6096, github.com/apache/spark/pull/7076 |
| |
| [SPARK-8278] Remove non-streaming JSON reader. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 20:27:55 -0700 |
| Commit: 45d798c, github.com/apache/spark/pull/7501 |
| |
| [SPARK-9150][SQL] Create CodegenFallback and Unevaluable trait |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 18:18:19 -0700 |
| Commit: 9914b1b, github.com/apache/spark/pull/7487 |
| |
| [SPARK-9174][SQL] Add documentation for all public SQLConfs. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 15:29:38 -0700 |
| Commit: e16a19a, github.com/apache/spark/pull/7500 |
| |
| [SPARK-8240][SQL] string function: concat |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 14:07:56 -0700 |
| Commit: 6e1e2eb, github.com/apache/spark/pull/7486 |
| |
| [SPARK-9055][SQL] WidenTypes should also support Intersect and Except |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-18 12:57:53 -0700 |
| Commit: 3d2134f, github.com/apache/spark/pull/7491 |
| |
| Closes #6122 |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 12:25:04 -0700 |
| Commit: cdc36ee |
| |
| [SPARK-9151][SQL] Implement code generation for Abs |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-18 12:11:37 -0700 |
| Commit: 225de8d, github.com/apache/spark/pull/7498 |
| |
| [SPARK-9171][SQL] add and improve tests for nondeterministic expressions |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-18 11:58:53 -0700 |
| Commit: 86c50bf, github.com/apache/spark/pull/7496 |
| |
| [SPARK-9167][SQL] use UTC Calendar in `stringToDate` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-18 11:25:16 -0700 |
| Commit: 692378c, github.com/apache/spark/pull/7488 |
| |
| [SPARK-9142][SQL] remove more self type in catalyst |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-18 11:13:49 -0700 |
| Commit: 1b4ff05, github.com/apache/spark/pull/7495 |
| |
| [SPARK-9143] [SQL] Add planner rule for automatically inserting Unsafe <-> Safe row format converters |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-18 11:08:18 -0700 |
| Commit: b8aec6c, github.com/apache/spark/pull/7482 |
| |
| [SPARK-9169][SQL] Improve unit test coverage for null expressions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-18 11:06:46 -0700 |
| Commit: fba3f5b, github.com/apache/spark/pull/7490 |
| |
| [MLLIB] [DOC] Seed fix in mllib naive bayes example |
| Paweł Kozikowski <mupakoz@gmail.com> |
| 2015-07-18 10:12:48 -0700 |
| Commit: b9ef7ac, github.com/apache/spark/pull/7477 |
| |
| [SPARK-9118] [ML] Implement IntArrayParam in mllib |
| Rekha Joshi <rekhajoshm@gmail.com>, Joshi <rekhajoshm@gmail.com> |
| 2015-07-17 20:02:05 -0700 |
| Commit: 1017908, github.com/apache/spark/pull/7481 |
| |
| [SPARK-7879] [MLLIB] KMeans API for spark.ml Pipelines |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-17 18:30:04 -0700 |
| Commit: 34a889d, github.com/apache/spark/pull/6756 |
| |
| [SPARK-8280][SPARK-8281][SQL]Handle NaN, null and Infinity in math |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-17 17:33:19 -0700 |
| Commit: 529a2c2, github.com/apache/spark/pull/7451 |
| |
| [SPARK-7026] [SQL] fix left semi join with equi key and non-equi condition |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-17 16:45:46 -0700 |
| Commit: 1707238, github.com/apache/spark/pull/5643 |
| |
| [SPARK-9030] [STREAMING] Add Kinesis.createStream unit tests that actual sends data |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-17 16:43:18 -0700 |
| Commit: b13ef77, github.com/apache/spark/pull/7413 |
| |
| [SPARK-9117] [SQL] fix BooleanSimplification in case-insensitive |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-17 16:28:24 -0700 |
| Commit: bd903ee, github.com/apache/spark/pull/7452 |
| |
| [SPARK-9113] [SQL] enable analysis check code for self join |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-17 16:03:33 -0700 |
| Commit: fd6b310, github.com/apache/spark/pull/7449 |
| |
| [SPARK-9080][SQL] add isNaN predicate expression |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-17 15:49:31 -0700 |
| Commit: 15fc2ff, github.com/apache/spark/pull/7464 |
| |
| [SPARK-9142] [SQL] Removing unnecessary self types in Catalyst. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-17 15:02:13 -0700 |
| Commit: b2aa490, github.com/apache/spark/pull/7479 |
| |
| [SPARK-8593] [CORE] Sort app attempts by start time. |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-07-17 22:47:28 +0100 |
| Commit: 42d8a01, github.com/apache/spark/pull/7253 |
| |
| [SPARK-7127] [MLLIB] Adding broadcast of model before prediction for ensembles |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-07-17 14:10:16 -0700 |
| Commit: 8b8be1f, github.com/apache/spark/pull/6300 |
| |
| [SPARK-8792] [ML] Add Python API for PCA transformer |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-17 14:08:06 -0700 |
| Commit: 830666f, github.com/apache/spark/pull/7190 |
| |
| [SPARK-9090] [ML] Fix definition of residual in LinearRegressionSummary, EnsembleTestHelper, and SquaredError |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-17 14:00:53 -0700 |
| Commit: 6da1069, github.com/apache/spark/pull/7435 |
| |
| [SPARK-5681] [STREAMING] Move 'stopReceivers' to the event loop to resolve the race condition |
| zsxwing <zsxwing@gmail.com>, Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-17 14:00:31 -0700 |
| Commit: ad0954f, github.com/apache/spark/pull/4467 |
| |
| [SPARK-9136] [SQL] fix several bugs in DateTimeUtils.stringToTimestamp |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-17 13:57:31 -0700 |
| Commit: 074085d, github.com/apache/spark/pull/7473 |
| |
| [SPARK-8600] [ML] Naive Bayes API for spark.ml Pipelines |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-17 13:55:17 -0700 |
| Commit: 9974642, github.com/apache/spark/pull/7284 |
| |
| [SPARK-9062] [ML] Change output type of Tokenizer to Array(String, true) |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-17 13:43:19 -0700 |
| Commit: 806c579, github.com/apache/spark/pull/7414 |
| |
| [SPARK-9138] [MLLIB] fix Vectors.dense |
| Davies Liu <davies@databricks.com> |
| 2015-07-17 12:43:58 -0700 |
| Commit: f9a82a8, github.com/apache/spark/pull/7476 |
| |
| [SPARK-9109] [GRAPHX] Keep the cached edge in the graph |
| tien-dungle <tien-dung.le@realimpactanalytics.com> |
| 2015-07-17 12:11:32 -0700 |
| Commit: 587c315, github.com/apache/spark/pull/7469 |
| |
| [SPARK-8945][SQL] Add add and subtract expressions for IntervalType |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-17 09:38:08 -0700 |
| Commit: eba6a1a, github.com/apache/spark/pull/7398 |
| |
| [SPARK-8209][SQL] Add function conv |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-17 09:32:27 -0700 |
| Commit: 305e77c, github.com/apache/spark/pull/6872 |
| |
| [SPARK-9130][SQL] throw exception when check equality between external and internal row |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-17 09:31:13 -0700 |
| Commit: 59d24c2, github.com/apache/spark/pull/7460 |
| |
| [MINOR] [ML] fix wrong annotation of RFormula.formula |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-17 09:00:41 -0700 |
| Commit: 441e072, github.com/apache/spark/pull/7470 |
| |
| [SPARK-8851] [YARN] In Client mode, make sure the client logs in and updates tokens |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-07-17 09:38:08 -0500 |
| Commit: c043a3e, github.com/apache/spark/pull/7394 |
| |
| [SPARK-9022] [SQL] Generated projections for UnsafeRow |
| Davies Liu <davies@databricks.com> |
| 2015-07-17 01:27:14 -0700 |
| Commit: ec8973d, github.com/apache/spark/pull/7437 |
| |
| [SPARK-9093] [SPARKR] Fix single-quotes strings in SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-17 17:00:50 +0900 |
| Commit: 5a3c1ad, github.com/apache/spark/pull/7439 |
| |
| [SPARK-9102] [SQL] Improve project collapse with nondeterministic expressions |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-17 00:59:15 -0700 |
| Commit: 3f6d28a, github.com/apache/spark/pull/7445 |
| |
| Added inline comment for the canEqual PR by @cloud-fan. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-16 23:13:06 -0700 |
| Commit: 111c055 |
| |
| [SPARK-9126] [MLLIB] do not assert on time taken by Thread.sleep() |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-16 23:02:06 -0700 |
| Commit: 358e7bf, github.com/apache/spark/pull/7457 |
| |
| [SPARK-7131] [ML] Copy Decision Tree, Random Forest impl to spark.ml |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-16 22:26:59 -0700 |
| Commit: 322d286, github.com/apache/spark/pull/7294 |
| |
| [SPARK-8899] [SQL] remove duplicated equals method for Row |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-16 21:41:36 -0700 |
| Commit: f893955, github.com/apache/spark/pull/7291 |
| |
| [SPARK-8857][SPARK-8859][Core]Add an internal flag to Accumulable and send internal accumulator updates to the driver via heartbeats |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-16 21:09:09 -0700 |
| Commit: 812b63b, github.com/apache/spark/pull/7448 |
| |
| [SPARK-8119] HeartbeatReceiver should replace executors, not kill |
| Andrew Or <andrew@databricks.com> |
| 2015-07-16 19:39:54 -0700 |
| Commit: 96aa334, github.com/apache/spark/pull/7107 |
| |
| [SPARK-6284] [MESOS] Add mesos role, principal and secret |
| Timothy Chen <tnachen@gmail.com> |
| 2015-07-16 19:36:45 -0700 |
| Commit: d86bbb4, github.com/apache/spark/pull/4960 |
| |
| [SPARK-8646] PySpark does not run on YARN if master not provided in command line |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-07-16 19:31:14 -0700 |
| Commit: 49351c7, github.com/apache/spark/pull/7438 |
| |
| [SPARK-8644] Include call site in SparkException stack traces thrown by job failures |
| Aaron Davidson <aaron@databricks.com> |
| 2015-07-16 18:14:45 -0700 |
| Commit: 57e9b13, github.com/apache/spark/pull/7028 |
| |
| [SPARK-6304] [STREAMING] Fix checkpointing doesn't retain driver port issue. |
| jerryshao <saisai.shao@intel.com>, Saisai Shao <saisai.shao@intel.com> |
| 2015-07-16 16:55:46 -0700 |
| Commit: 031d7d4, github.com/apache/spark/pull/5060 |
| |
| [SPARK-9085][SQL] Remove LeafNode, UnaryNode, BinaryNode from TreeNode. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-16 13:58:39 -0700 |
| Commit: fec10f0, github.com/apache/spark/pull/7434 |
| |
| [SPARK-6941] [SQL] Provide a better error message to when inserting into RDD based table |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-16 10:52:09 -0700 |
| Commit: 43dac2c, github.com/apache/spark/pull/7342 |
| |
| [SPARK-9015] [BUILD] Clean project import in scala ide |
| Jan Prach <jendap@gmail.com> |
| 2015-07-16 18:42:41 +0100 |
| Commit: b536d5d, github.com/apache/spark/pull/7375 |
| |
| [SPARK-8995] [SQL] cast date strings like '2015-01-01 12:15:31' to date |
| Tarek Auel <tarek.auel@googlemail.com>, Tarek Auel <tarek.auel@gmail.com> |
| 2015-07-16 08:26:39 -0700 |
| Commit: 4ea6480, github.com/apache/spark/pull/7353 |
| |
| [SPARK-8893] Add runtime checks against non-positive number of partitions |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2015-07-16 08:16:54 +0100 |
| Commit: 0115516, github.com/apache/spark/pull/7285 |
| |
| [SPARK-8807] [SPARKR] Add between operator in SparkR |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-15 23:36:57 -0700 |
| Commit: 0a79533, github.com/apache/spark/pull/7356 |
| |
| [SPARK-8972] [SQL] Incorrect result for rollup |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-15 23:35:27 -0700 |
| Commit: e272123, github.com/apache/spark/pull/7343 |
| |
| [SPARK-9068][SQL] refactor the implicit type cast code |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-15 22:27:39 -0700 |
| Commit: ba33096, github.com/apache/spark/pull/7420 |
| |
| [SPARK-8245][SQL] FormatNumber/Length Support for Expression |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-15 21:47:21 -0700 |
| Commit: 42dea3a, github.com/apache/spark/pull/7034 |
| |
| [SPARK-9060] [SQL] Revert SPARK-8359, SPARK-8800, and SPARK-8677 |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-15 21:08:30 -0700 |
| Commit: 9c64a75, github.com/apache/spark/pull/7426 |
| |
| [SPARK-9018] [MLLIB] add stopwatches |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-15 21:02:42 -0700 |
| Commit: 73d92b0, github.com/apache/spark/pull/7415 |
| |
| [SPARK-8774] [ML] Add R model formula with basic support as a transformer |
| Eric Liang <ekl@databricks.com> |
| 2015-07-15 20:33:06 -0700 |
| Commit: 6960a79, github.com/apache/spark/pull/7381 |
| |
| [SPARK-9086][SQL] Remove BinaryNode from TreeNode. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-15 17:50:11 -0700 |
| Commit: b064519, github.com/apache/spark/pull/7433 |
| |
| [SPARK-9071][SQL] MonotonicallyIncreasingID and SparkPartitionID should be marked as nondeterministic. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-15 14:52:02 -0700 |
| Commit: affbe32, github.com/apache/spark/pull/7428 |
| |
| [SPARK-8974] Catch exceptions in allocation schedule task. |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-07-15 22:31:10 +0100 |
| Commit: 674eb2a, github.com/apache/spark/pull/7352 |
| |
| [SPARK-6602][Core]Replace Akka Serialization with Spark Serializer |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-15 14:02:23 -0700 |
| Commit: b9a922e, github.com/apache/spark/pull/7159 |
| |
| [SPARK-9005] [MLLIB] Fix RegressionMetrics computation of explainedVariance |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-15 13:32:25 -0700 |
| Commit: 536533c, github.com/apache/spark/pull/7361 |
| |
| [SPARK-9070] JavaDataFrameSuite teardown NPEs if setup failed |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-07-15 12:15:35 -0700 |
| Commit: ec9b621, github.com/apache/spark/pull/7425 |
| |
| [SPARK-7555] [DOCS] Add doc for elastic net in ml-guide and mllib-guide |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-07-15 12:10:53 -0700 |
| Commit: 303c120, github.com/apache/spark/pull/6504 |
| |
| [Minor][SQL] Allow spaces in the beginning and ending of string for Interval |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-15 10:46:22 -0700 |
| Commit: 9716a727, github.com/apache/spark/pull/7390 |
| |
| [SPARK-8221][SQL]Add pmod function |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-15 10:43:38 -0700 |
| Commit: a938527, github.com/apache/spark/pull/6783 |
| |
| [SPARK-9020][SQL] Support mutable state in code gen expressions |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-15 10:31:39 -0700 |
| Commit: fa4ec36, github.com/apache/spark/pull/7392 |
| |
| [SPARK-8840] [SPARKR] Add float coercion on SparkR |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-15 09:48:33 -0700 |
| Commit: 6f69025, github.com/apache/spark/pull/7280 |
| |
| [SPARK-8706] [PYSPARK] [PROJECT INFRA] Add pylint checks to PySpark |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-15 08:25:53 -0700 |
| Commit: 20bb10f, github.com/apache/spark/pull/7241 |
| |
| [SPARK-9012] [WEBUI] Escape Accumulators in the task table |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-15 17:30:57 +0900 |
| Commit: adb33d3, github.com/apache/spark/pull/7369 |
| |
| [HOTFIX][SQL] Unit test breaking. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-15 00:12:21 -0700 |
| Commit: 14935d8 |
| |
| [SPARK-8997] [MLLIB] Performance improvements in LocalPrefixSpan |
| Feynman Liang <fliang@databricks.com>, Feynman Liang <feynman.liang@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-07-14 23:50:57 -0700 |
| Commit: 1bb8acc, github.com/apache/spark/pull/7360 |
| |
| [SPARK-8279][SQL]Add math function round |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-14 23:30:41 -0700 |
| Commit: f0e1297, github.com/apache/spark/pull/6938 |
| |
| [SPARK-8018] [MLLIB] KMeans should accept initial cluster centers as param |
| FlytxtRnD <meethu.mathew@flytxt.com> |
| 2015-07-14 23:29:02 -0700 |
| Commit: 3f6296f, github.com/apache/spark/pull/6737 |
| |
| [SPARK-6259] [MLLIB] Python API for LDA |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-14 23:27:42 -0700 |
| Commit: 4692769, github.com/apache/spark/pull/6791 |
| |
| Revert SPARK-6910 and SPARK-9027 |
| Michael Armbrust <michael@databricks.com> |
| 2015-07-14 22:57:39 -0700 |
| Commit: c6b1a9e, github.com/apache/spark/pull/7409 |
| |
| [SPARK-8993][SQL] More comprehensive type checking in expressions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-14 22:52:53 -0700 |
| Commit: f23a721, github.com/apache/spark/pull/7348 |
| |
| [SPARK-8808] [SPARKR] Fix assignments in SparkR. |
| Sun Rui <rui.sun@intel.com> |
| 2015-07-14 22:21:01 -0700 |
| Commit: f650a00, github.com/apache/spark/pull/7395 |
| |
| [HOTFIX] Adding new names to known contributors |
| Patrick Wendell <patrick@databricks.com> |
| 2015-07-14 21:44:47 -0700 |
| Commit: 5572fd0 |
| |
| [SPARK-5523] [CORE] [STREAMING] Add a cache for hostname in TaskMetrics to decrease the memory usage and GC overhead |
| jerryshao <saisai.shao@intel.com>, Saisai Shao <saisai.shao@intel.com> |
| 2015-07-14 19:54:02 -0700 |
| Commit: bb870e7, github.com/apache/spark/pull/5064 |
| |
| [SPARK-8820] [STREAMING] Add a configuration to set checkpoint dir. |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-07-14 19:20:49 -0700 |
| Commit: f957796, github.com/apache/spark/pull/7218 |
| |
| [SPARK-9050] [SQL] Remove unused newOrdering argument from Exchange (cleanup after SPARK-8317) |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-14 18:55:34 -0700 |
| Commit: cc57d70, github.com/apache/spark/pull/7407 |
| |
| [SPARK-9045] Fix Scala 2.11 build break in UnsafeExternalRowSorter |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-14 17:21:48 -0700 |
| Commit: e965a79, github.com/apache/spark/pull/7405 |
| |
| [SPARK-8962] Add Scalastyle rule to ban direct use of Class.forName; fix existing uses |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-14 16:08:17 -0700 |
| Commit: 11e5c37, github.com/apache/spark/pull/7350 |
| |
| [SPARK-4362] [MLLIB] Make prediction probability available in NaiveBayesModel |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-14 22:44:54 +0100 |
| Commit: 740b034, github.com/apache/spark/pull/7376 |
| |
| [SPARK-8800] [SQL] Fix inaccurate precision/scale of Decimal division operation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-14 14:19:27 -0700 |
| Commit: 4b5cfc9, github.com/apache/spark/pull/7212 |
| |
| [SPARK-4072] [CORE] Display Streaming blocks in Streaming UI |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-14 13:58:36 -0700 |
| Commit: fb1d06f, github.com/apache/spark/pull/6672 |
| |
| [SPARK-8718] [GRAPHX] Improve EdgePartition2D for non perfect square number of partitions |
| Andrew Ray <ray.andrew@gmail.com> |
| 2015-07-14 13:14:47 -0700 |
| Commit: 0a4071e, github.com/apache/spark/pull/7104 |
| |
| [SPARK-9031] Merge BlockObjectWriter and DiskBlockObject writer to remove abstract class |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-14 12:56:17 -0700 |
| Commit: d267c28, github.com/apache/spark/pull/7391 |
| |
| [SPARK-8911] Fix local mode endless heartbeats |
| Andrew Or <andrew@databricks.com> |
| 2015-07-14 12:47:11 -0700 |
| Commit: 8fb3a65, github.com/apache/spark/pull/7382 |
| |
| [SPARK-8933] [BUILD] Provide a --force flag to build/mvn that always uses downloaded maven |
| Brennon York <brennon.york@capitalone.com> |
| 2015-07-14 11:43:26 -0700 |
| Commit: c4e98ff, github.com/apache/spark/pull/7374 |
| |
| [SPARK-9027] [SQL] Generalize metastore predicate pushdown |
| Michael Armbrust <michael@databricks.com> |
| 2015-07-14 11:22:09 -0700 |
| Commit: 37f2d96, github.com/apache/spark/pull/7386 |
| |
| [SPARK-9029] [SQL] shortcut CaseKeyWhen if key is null |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-14 10:20:15 -0700 |
| Commit: 59d820a, github.com/apache/spark/pull/7389 |
| |
| [SPARK-6851] [SQL] function least/greatest follow up |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-14 01:09:33 -0700 |
| Commit: 257236c, github.com/apache/spark/pull/7387 |
| |
| [SPARK-9010] [DOCUMENTATION] Improve the Spark Configuration document about `spark.kryoserializer.buffer` |
| zhaishidan <zhaishidan@haizhi.com> |
| 2015-07-14 08:54:30 +0100 |
| Commit: c1feebd, github.com/apache/spark/pull/7393 |
| |
| [SPARK-9001] Fixing errors in javadocs that lead to failed build/sbt doc |
| Joseph Gonzalez <joseph.e.gonzalez@gmail.com> |
| 2015-07-14 00:32:29 -0700 |
| Commit: 20c1434, github.com/apache/spark/pull/7354 |
| |
| [SPARK-6910] [SQL] Support for pushing predicates down to metastore for partition pruning |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-07-13 19:45:10 -0700 |
| Commit: 408b384, github.com/apache/spark/pull/7216 |
| |
| [SPARK-8743] [STREAMING] Deregister Codahale metrics for streaming when StreamingContext is closed |
| Neelesh Srinivas Salian <nsalian@cloudera.com> |
| 2015-07-13 15:46:51 -0700 |
| Commit: b7bcbe2, github.com/apache/spark/pull/7362 |
| |
| [SPARK-8533] [STREAMING] Upgrade Flume to 1.6.0 |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-07-13 14:15:31 -0700 |
| Commit: 0aed38e, github.com/apache/spark/pull/6939 |
| |
| [SPARK-8636] [SQL] Fix equalNullSafe comparison |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-07-13 12:51:33 -0700 |
| Commit: 4c797f2, github.com/apache/spark/pull/7040 |
| |
| [SPARK-8991] [ML] Update SharedParamsCodeGen's Generated Documentation |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-07-13 12:03:39 -0700 |
| Commit: 714fc55, github.com/apache/spark/pull/7367 |
| |
| [SPARK-8954] [BUILD] Remove unneeded deb repository from Dockerfile to fix build error in docker. |
| yongtang <yongtang@users.noreply.github.com> |
| 2015-07-13 12:01:23 -0700 |
| Commit: 5c41691, github.com/apache/spark/pull/7346 |
| |
| Revert "[SPARK-8706] [PYSPARK] [PROJECT INFRA] Add pylint checks to PySpark" |
| Davies Liu <davies.liu@gmail.com> |
| 2015-07-13 11:30:36 -0700 |
| Commit: 79c3582 |
| |
| [SPARK-8950] [WEBUI] Correct the calculation of SchedulerDelay in StagePage |
| Carson Wang <carson.wang@intel.com> |
| 2015-07-13 11:20:04 -0700 |
| Commit: 5ca26fb, github.com/apache/spark/pull/7319 |
| |
| [SPARK-8706] [PYSPARK] [PROJECT INFRA] Add pylint checks to PySpark |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-13 09:47:53 -0700 |
| Commit: 9b62e93, github.com/apache/spark/pull/7241 |
| |
| [SPARK-6797] [SPARKR] Add support for YARN cluster mode. |
| Sun Rui <rui.sun@intel.com> |
| 2015-07-13 08:21:47 -0700 |
| Commit: 7f487c8, github.com/apache/spark/pull/6743 |
| |
| [SPARK-8596] Add module for rstudio link to spark |
| Vincent D. Warmerdam <vincentwarmerdam@gmail.com> |
| 2015-07-13 08:15:54 -0700 |
| Commit: a5bc803, github.com/apache/spark/pull/7366 |
| |
| [SPARK-8944][SQL] Support casting between IntervalType and StringType |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-13 00:49:39 -0700 |
| Commit: 6b89943, github.com/apache/spark/pull/7355 |
| |
| [SPARK-8203] [SPARK-8204] [SQL] conditional function: least/greatest |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-13 00:14:32 -0700 |
| Commit: 92540d2, github.com/apache/spark/pull/6851 |
| |
| [SPARK-9006] [PYSPARK] fix microsecond loss in Python 3 |
| Davies Liu <davies@databricks.com> |
| 2015-07-12 20:25:06 -0700 |
| Commit: 20b4743, github.com/apache/spark/pull/7363 |
| |
| [SPARK-8880] Fix confusing Stage.attemptId member variable |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-07-12 20:45:09 -0400 |
| Commit: 3009088, github.com/apache/spark/pull/7275 |
| |
| [SPARK-8970][SQL] remove unnecessary abstraction for ExtractValue |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-10 23:25:11 -0700 |
| Commit: c472eb1, github.com/apache/spark/pull/7339 |
| |
| [SPARK-8994] [ML] tiny cleanups to Params, Pipeline |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-10 21:25:09 -0700 |
| Commit: 0c5207c, github.com/apache/spark/pull/7349 |
| |
| [SPARK-6487] [MLLIB] Add sequential pattern mining algorithm PrefixSpan to Spark MLlib |
| zhangjiajin <zhangjiajin@huawei.com>, zhang jiajin <zhangjiajin@huawei.com> |
| 2015-07-10 21:11:46 -0700 |
| Commit: 7f6be1f, github.com/apache/spark/pull/7258 |
| |
| [SPARK-8598] [MLLIB] Implementation of 1-sample, two-sided, Kolmogorov Smirnov Test for RDDs |
| jose.cambronero <jose.cambronero@cloudera.com> |
| 2015-07-10 20:55:45 -0700 |
| Commit: 9c50757, github.com/apache/spark/pull/6994 |
| |
| [SPARK-7735] [PYSPARK] Raise Exception on non-zero exit from pipe commands |
| Scott Taylor <github@megatron.me.uk> |
| 2015-07-10 19:29:32 -0700 |
| Commit: 6e1c7e2, github.com/apache/spark/pull/6262 |
| |
| [SPARK-8961] [SQL] Makes BaseWriterContainer.outputWriterForRow accepts InternalRow instead of Row |
| Cheng Lian <lian@databricks.com> |
| 2015-07-10 18:15:36 -0700 |
| Commit: 3363088, github.com/apache/spark/pull/7331 |
| |
| add inline comment for python tests |
| Davies Liu <davies.liu@gmail.com> |
| 2015-07-10 17:44:21 -0700 |
| Commit: b6fc0ad |
| |
| [SPARK-8990] [SQL] DataFrameReader.parquet() should respect user specified options |
| Cheng Lian <lian@databricks.com> |
| 2015-07-10 16:49:45 -0700 |
| Commit: 857e325, github.com/apache/spark/pull/7347 |
| |
| [SPARK-7078] [SPARK-7079] Binary processing sort for Spark SQL |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-10 16:44:51 -0700 |
| Commit: fb8807c, github.com/apache/spark/pull/6444 |
| |
| [SPARK-8923] [DOCUMENTATION, MLLIB] Add @since tags to mllib.fpm |
| rahulpalamuttam <rahulpalamut@gmail.com> |
| 2015-07-10 16:07:31 -0700 |
| Commit: 0772026, github.com/apache/spark/pull/7341 |
| |
| [HOTFIX] fix flaky test in PySpark SQL |
| Davies Liu <davies@databricks.com> |
| 2015-07-10 13:05:23 -0700 |
| Commit: 05ac023, github.com/apache/spark/pull/7344 |
| |
| [SPARK-8675] Executors created by LocalBackend won't get the same classpath as other executor backends |
| Min Zhou <coderplay@gmail.com> |
| 2015-07-10 09:52:40 -0700 |
| Commit: c185f3a, github.com/apache/spark/pull/7091 |
| |
| [CORE] [MINOR] change the log level to info |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-10 09:50:46 -0700 |
| Commit: db6d57f, github.com/apache/spark/pull/7340 |
| |
| [SPARK-8958] Dynamic allocation: change cached timeout to infinity |
| Andrew Or <andrew@databricks.com> |
| 2015-07-10 09:48:17 -0700 |
| Commit: 5dd45bd, github.com/apache/spark/pull/7329 |
| |
| [SPARK-7944] [SPARK-8013] Remove most of the Spark REPL fork for Scala 2.11 |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-07-10 16:22:49 +0100 |
| Commit: 11e22b7, github.com/apache/spark/pull/6903 |
| |
| [SPARK-7977] [BUILD] Disallowing println |
| Jonathan Alter <jonalter@users.noreply.github.com> |
| 2015-07-10 11:34:01 +0100 |
| Commit: e14b545, github.com/apache/spark/pull/7093 |
| |
| [DOCS] Added important updateStateByKey details |
| Michael Vogiatzis <michaelvogiatzis@gmail.com> |
| 2015-07-09 19:53:23 -0700 |
| Commit: d538919, github.com/apache/spark/pull/7229 |
| |
| [SPARK-8839] [SQL] ThriftServer2 will remove session and execution no matter it's finished or not. |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-07-09 19:31:31 -0700 |
| Commit: 1903641, github.com/apache/spark/pull/7239 |
| |
| [SPARK-8913] [ML] Simplify LogisticRegression suite to use Vector comparison |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-09 19:08:33 -0700 |
| Commit: 2727304, github.com/apache/spark/pull/7335 |
| |
| [SPARK-8852] [FLUME] Trim dependencies in flume assembly. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-09 18:23:06 -0700 |
| Commit: 0e78e40, github.com/apache/spark/pull/7247 |
| |
| [SPARK-8959] [SQL] [HOTFIX] Removes parquet-thrift and libthrift dependencies |
| Cheng Lian <lian@databricks.com> |
| 2015-07-09 17:09:16 -0700 |
| Commit: 2d45571, github.com/apache/spark/pull/7330 |
| |
| [SPARK-8538] [SPARK-8539] [ML] Linear Regression Training and Testing Results |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-09 16:21:21 -0700 |
| Commit: a0cc3e5, github.com/apache/spark/pull/7099 |
| |
| [SPARK-8963][ML] cleanup tests in linear regression suite |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-09 15:49:30 -0700 |
| Commit: e29ce31, github.com/apache/spark/pull/7327 |
| |
| Closes #6837 Closes #7321 Closes #2634 Closes #4963 Closes #2137 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-09 15:14:14 -0700 |
| Commit: 6916533 |
| |
| [SPARK-8865] [STREAMING] FIX BUG: check key in kafka params |
| guowei2 <guowei@growingio.com> |
| 2015-07-09 15:01:53 -0700 |
| Commit: 8977003, github.com/apache/spark/pull/7254 |
| |
| [SPARK-7902] [SPARK-6289] [SPARK-8685] [SQL] [PYSPARK] Refactor of serialization for Python DataFrame |
| Davies Liu <davies@databricks.com> |
| 2015-07-09 14:43:38 -0700 |
| Commit: c9e2ef5, github.com/apache/spark/pull/7301 |
| |
| [SPARK-8389] [STREAMING] [PYSPARK] Expose KafkaRDDs offsetRange in Python |
| jerryshao <saisai.shao@intel.com> |
| 2015-07-09 13:54:44 -0700 |
| Commit: 3ccebf3, github.com/apache/spark/pull/7185 |
| |
| [SPARK-8701] [STREAMING] [WEBUI] Add input metadata in the batch page |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-09 13:48:29 -0700 |
| Commit: 1f6b0b1, github.com/apache/spark/pull/7081 |
| |
| [SPARK-6287] [MESOS] Add dynamic allocation to the coarse-grained Mesos scheduler |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-07-09 13:26:46 -0700 |
| Commit: c483059, github.com/apache/spark/pull/4984 |
| |
| [SPARK-2017] [UI] Stage page hangs with many tasks |
| Andrew Or <andrew@databricks.com> |
| 2015-07-09 13:25:11 -0700 |
| Commit: ebdf585, github.com/apache/spark/pull/7296 |
| |
| [SPARK-7419] [STREAMING] [TESTS] Fix CheckpointSuite.recovery with file input stream |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-09 13:22:17 -0700 |
| Commit: 88bf430, github.com/apache/spark/pull/7323 |
| |
| [SPARK-8953] SPARK_EXECUTOR_CORES is not read in SparkSubmit |
| xutingjun <xutingjun@huawei.com> |
| 2015-07-09 13:21:10 -0700 |
| Commit: 930fe95, github.com/apache/spark/pull/7322 |
| |
| [MINOR] [STREAMING] Fix log statements in ReceiverSupervisorImpl |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-07-09 13:19:36 -0700 |
| Commit: 7ce3b81, github.com/apache/spark/pull/7328 |
| |
| [SPARK-8247] [SPARK-8249] [SPARK-8252] [SPARK-8254] [SPARK-8257] [SPARK-8258] [SPARK-8259] [SPARK-8261] [SPARK-8262] [SPARK-8253] [SPARK-8260] [SPARK-8267] [SQL] Add String Expressions |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-09 11:11:34 -0700 |
| Commit: 0b0b9ce, github.com/apache/spark/pull/6762 |
| |
| [SPARK-8703] [ML] Add CountVectorizer as a ml transformer to convert document to words count vector |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-09 10:26:38 -0700 |
| Commit: 0cd84c8, github.com/apache/spark/pull/7084 |
| |
| [SPARK-8863] [EC2] Check aws access key from aws credentials if there is no boto config |
| JPark <JPark@JPark.me> |
| 2015-07-09 10:23:36 -0700 |
| Commit: c59e268, github.com/apache/spark/pull/7252 |
| |
| [SPARK-8938][SQL] Implement toString for Interval data type |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-09 10:04:42 -0700 |
| Commit: f6c0bd5, github.com/apache/spark/pull/7315 |
| |
| [SPARK-8926][SQL] Code review followup. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-09 10:01:33 -0700 |
| Commit: a870a82, github.com/apache/spark/pull/7313 |
| |
| [SPARK-8948][SQL] Remove ExtractValueWithOrdinal abstract class |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-09 10:01:01 -0700 |
| Commit: e204d22, github.com/apache/spark/pull/7316 |
| |
| [SPARK-8940] [SPARKR] Don't overwrite given schema in createDataFrame |
| Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-09 09:57:12 -0700 |
| Commit: 59cc389, github.com/apache/spark/pull/7311 |
| |
| [SPARK-8830] [SQL] native levenshtein distance |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-09 09:22:24 -0700 |
| Commit: a1964e9, github.com/apache/spark/pull/7236 |
| |
| [SPARK-8931] [SQL] Fallback to interpreted evaluation if failed to compile in codegen |
| Davies Liu <davies@databricks.com> |
| 2015-07-09 09:20:16 -0700 |
| Commit: 23448a9, github.com/apache/spark/pull/7309 |
| |
| [SPARK-6266] [MLLIB] PySpark SparseVector missing doc for size, indices, values |
| lewuathe <lewuathe@me.com> |
| 2015-07-09 08:16:26 -0700 |
| Commit: f88b125, github.com/apache/spark/pull/7290 |
| |
| [SPARK-8942][SQL] use double not decimal when cast double and float to timestamp |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-09 00:26:25 -0700 |
| Commit: 09cb0d9, github.com/apache/spark/pull/7312 |
| |
| [SPARK-8928] [SQL] Makes CatalystSchemaConverter sticking to 1.4.x- when handling Parquet LISTs in compatible mode |
| Weizhong Lin <linweizhong@huawei.com> |
| 2015-07-08 22:18:39 -0700 |
| Commit: 851e247, github.com/apache/spark/pull/7314 |
| |
| Revert "[SPARK-8928] [SQL] Makes CatalystSchemaConverter sticking to 1.4.x- when handling Parquet LISTs in compatible mode" |
| Cheng Lian <lian@databricks.com> |
| 2015-07-08 22:14:38 -0700 |
| Commit: c056484 |
| |
| [SPARK-8928] [SQL] Makes CatalystSchemaConverter sticking to 1.4.x- when handling Parquet LISTs in compatible mode |
| Weizhong Lin <linweizhong@huawei.com> |
| 2015-07-08 22:09:12 -0700 |
| Commit: 3dab0da, github.com/apache/spark/pull/7304 |
| |
| Closes #7310. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-08 22:08:50 -0700 |
| Commit: a240bf3 |
| |
| [SPARK-8926][SQL] Good errors for ExpectsInputType expressions |
| Michael Armbrust <michael@databricks.com> |
| 2015-07-08 22:05:58 -0700 |
| Commit: 768907e, github.com/apache/spark/pull/7303 |
| |
| [SPARK-8937] [TEST] A setting `spark.unsafe.exceptionOnMemoryLeak ` is missing in ScalaTest config. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-07-09 13:28:17 +0900 |
| Commit: aba5784, github.com/apache/spark/pull/7308 |
| |
| [SPARK-8910] Fix MiMa flaky due to port contention issue |
| Andrew Or <andrew@databricks.com> |
| 2015-07-08 20:29:08 -0700 |
| Commit: 47ef423, github.com/apache/spark/pull/7300 |
| |
| [SPARK-8932] Support copy() for UnsafeRows that do not use ObjectPools |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-08 20:28:05 -0700 |
| Commit: b55499a, github.com/apache/spark/pull/7306 |
| |
| [SPARK-8866][SQL] use 1us precision for timestamp type |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-08 20:20:17 -0700 |
| Commit: a290814, github.com/apache/spark/pull/7283 |
| |
| [SPARK-8927] [DOCS] Format wrong for some config descriptions |
| Jonathan Alter <jonalter@users.noreply.github.com> |
| 2015-07-09 03:28:51 +0100 |
| Commit: 28fa01e, github.com/apache/spark/pull/7292 |
| |
| [SPARK-8450] [SQL] [PYSARK] cleanup type converter for Python DataFrame |
| Davies Liu <davies@databricks.com> |
| 2015-07-08 18:22:53 -0700 |
| Commit: 74d8d3d, github.com/apache/spark/pull/7106 |
| |
| [SPARK-8914][SQL] Remove RDDApi |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-07-08 18:09:39 -0700 |
| Commit: 2a4f88b, github.com/apache/spark/pull/7302 |
| |
| [SPARK-5016] [MLLIB] Distribute GMM mixture components to executors |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-08 16:32:00 -0700 |
| Commit: f472b8c, github.com/apache/spark/pull/7166 |
| |
| [SPARK-8877] [MLLIB] Public API for association rule generation |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-08 16:27:11 -0700 |
| Commit: 8c32b2e, github.com/apache/spark/pull/7271 |
| |
| [SPARK-8068] [MLLIB] Add confusionMatrix method at class MulticlassMetrics in pyspark/mllib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-08 16:21:28 -0700 |
| Commit: 381cb16, github.com/apache/spark/pull/7286 |
| |
| [SPARK-6123] [SPARK-6775] [SPARK-6776] [SQL] Refactors Parquet read path for interoperability and backwards-compatibility |
| Cheng Lian <lian@databricks.com> |
| 2015-07-08 15:51:01 -0700 |
| Commit: 4ffc27c, github.com/apache/spark/pull/7231 |
| |
| [SPARK-8902] Correctly print hostname in error |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2015-07-09 07:34:02 +0900 |
| Commit: 5687f76, github.com/apache/spark/pull/7288 |
| |
| [SPARK-8700][ML] Disable feature scaling in Logistic Regression |
| DB Tsai <dbt@netflix.com> |
| 2015-07-08 15:21:58 -0700 |
| Commit: 5722193, github.com/apache/spark/pull/7080 |
| |
| [SPARK-8908] [SQL] Add () to distinct definition in dataframe |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-07-08 15:18:24 -0700 |
| Commit: 00b265f, github.com/apache/spark/pull/7298 |
| |
| [SPARK-8909][Documentation] Change the scala example in sql-programmi… |
| Alok Singh <“singhal@us.ibm.com”> |
| 2015-07-08 14:51:18 -0700 |
| Commit: 8f3cd93, github.com/apache/spark/pull/7299 |
| |
| [SPARK-8457] [ML] NGram Documentation |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-08 14:49:52 -0700 |
| Commit: c5532e2, github.com/apache/spark/pull/7244 |
| |
| [SPARK-8783] [SQL] CTAS with WITH clause does not work |
| Keuntae Park <sirpkt@apache.org> |
| 2015-07-08 14:29:52 -0700 |
| Commit: f031543, github.com/apache/spark/pull/7180 |
| |
| [SPARK-7785] [MLLIB] [PYSPARK] Add __str__ and __repr__ to Matrices |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-08 13:19:27 -0700 |
| Commit: 2b40365, github.com/apache/spark/pull/6342 |
| |
| [SPARK-8900] [SPARKR] Fix sparkPackages in init documentation |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-07-08 12:39:32 -0700 |
| Commit: 374c8a8, github.com/apache/spark/pull/7293 |
| |
| [SPARK-8657] [YARN] Fail to upload resource to viewfs |
| Tao Li <litao@sogou-inc.com> |
| 2015-07-08 19:02:24 +0100 |
| Commit: 26d9b6b, github.com/apache/spark/pull/7125 |
| |
| [SPARK-8888][SQL] Use java.util.HashMap in DynamicPartitionWriterContainer. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-08 10:56:31 -0700 |
| Commit: f61c989, github.com/apache/spark/pull/7282 |
| |
| [SPARK-8753][SQL] Create an IntervalType data type |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-08 10:51:32 -0700 |
| Commit: 0ba98c0, github.com/apache/spark/pull/7226 |
| |
| [SPARK-5707] [SQL] fix serialization of generated projection |
| Davies Liu <davies@databricks.com> |
| 2015-07-08 10:43:00 -0700 |
| Commit: 74335b3, github.com/apache/spark/pull/7272 |
| |
| [SPARK-6912] [SQL] Throw an AnalysisException when unsupported Java Map<K,V> types used in Hive UDF |
| Takeshi YAMAMURO <linguin.m.s@gmail.com> |
| 2015-07-08 10:33:27 -0700 |
| Commit: 3e831a2, github.com/apache/spark/pull/7257 |
| |
| [SPARK-8785] [SQL] Improve Parquet schema merging |
| Liang-Chi Hsieh <viirya@gmail.com>, Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-08 10:09:50 -0700 |
| Commit: 6722aca, github.com/apache/spark/pull/7182 |
| |
| [SPARK-8894] [SPARKR] [DOC] Example code errors in SparkR documentation. |
| Sun Rui <rui.sun@intel.com> |
| 2015-07-08 09:48:16 -0700 |
| Commit: bf02e37, github.com/apache/spark/pull/7287 |
| |
| [SPARK-8872] [MLLIB] added verification results from R for FPGrowthSuite |
| Kashif Rasul <kashif.rasul@gmail.com> |
| 2015-07-08 08:44:58 -0700 |
| Commit: 3bb2177, github.com/apache/spark/pull/7269 |
| |
| [SPARK-7050] [BUILD] Fix Python Kafka test assembly jar not found issue under Maven build |
| jerryshao <saisai.shao@intel.com> |
| 2015-07-08 12:23:32 +0100 |
| Commit: 8a9d9cc, github.com/apache/spark/pull/5632 |
| |
| [SPARK-8883][SQL]Remove the OverrideFunctionRegistry |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-08 00:10:24 -0700 |
| Commit: 351a36d, github.com/apache/spark/pull/7260 |
| |
| [SPARK-8886][Documentation]python Style update |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-07-07 22:35:39 -0700 |
| Commit: 08192a1, github.com/apache/spark/pull/7281 |
| |
| [SPARK-8879][SQL] Remove EmptyRow class. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-07 22:12:46 -0700 |
| Commit: 61c3cf7, github.com/apache/spark/pull/7277 |
| |
| [SPARK-8878][SQL] Improve unit test coverage for bitwise expressions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-07 19:12:40 -0700 |
| Commit: 5d603df, github.com/apache/spark/pull/7273 |
| |
| [SPARK-8868] SqlSerializer2 can go into infinite loop when row consists only of NullType columns |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-07 18:36:35 -0700 |
| Commit: 68a4a16, github.com/apache/spark/pull/7262 |
| |
| [SPARK-7190] [SPARK-8804] [SPARK-7815] [SQL] unsafe UTF8String |
| Davies Liu <davies@databricks.com> |
| 2015-07-07 17:57:17 -0700 |
| Commit: 4ca9093, github.com/apache/spark/pull/7197 |
| |
| [SPARK-8876][SQL] Remove InternalRow type alias in expressions package. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-07 17:40:14 -0700 |
| Commit: 770ff10, github.com/apache/spark/pull/7270 |
| |
| [SPARK-8794] [SQL] Make PrunedScan work for Sample |
| Liang-Chi Hsieh <viirya@appier.com>, Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-07 15:49:22 -0700 |
| Commit: da56c4e, github.com/apache/spark/pull/7228 |
| |
| [SPARK-8845] [ML] ML use of Breeze optimization: use adjustedValue instead of value |
| DB Tsai <dbt@netflix.com> |
| 2015-07-07 15:46:44 -0700 |
| Commit: 3bf20c2, github.com/apache/spark/pull/7245 |
| |
| [SPARK-8704] [ML] [PySpark] Add missing methods in StandardScaler |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-07 12:35:40 -0700 |
| Commit: 35d781e, github.com/apache/spark/pull/7086 |
| |
| [SPARK-8559] [MLLIB] Support Association Rule Generation |
| Feynman Liang <fliang@databricks.com> |
| 2015-07-07 11:34:30 -0700 |
| Commit: 3336c7b, github.com/apache/spark/pull/7005 |
| |
| [SPARK-8821] [EC2] Switched to binary mode for file reading |
| Simon Hafner <hafnersimon@gmail.com> |
| 2015-07-07 09:42:59 -0700 |
| Commit: 70beb80, github.com/apache/spark/pull/7215 |
| |
| [SPARK-8823] [MLLIB] [PYSPARK] Optimizations for SparseVector dot products |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-07 08:59:52 -0700 |
| Commit: 738c107, github.com/apache/spark/pull/7222 |
| |
| [SPARK-8711] [ML] Add additional methods to PySpark ML tree models |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-07 08:58:08 -0700 |
| Commit: 1dbc4a1, github.com/apache/spark/pull/7095 |
| |
| [SPARK-8570] [MLLIB] [DOCS] Improve MLlib Local Matrix Documentation. |
| Mike Dusenberry <mwdusenb@us.ibm.com> |
| 2015-07-07 08:24:52 -0700 |
| Commit: 0a63d7a, github.com/apache/spark/pull/6958 |
| |
| [SPARK-8788] [ML] Add Java unit test for PCA transformer |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-07 08:19:17 -0700 |
| Commit: d73bc08, github.com/apache/spark/pull/7184 |
| |
| [SPARK-6731] [CORE] Addendum: Upgrade Apache commons-math3 to 3.4.1 |
| Sean Owen <sowen@cloudera.com> |
| 2015-07-07 08:09:56 -0700 |
| Commit: dcbd85b, github.com/apache/spark/pull/7261 |
| |
| [HOTFIX] Rename release-profile to release |
| Patrick Wendell <patrick@databricks.com> |
| 2015-07-06 22:14:24 -0700 |
| Commit: 1cb2629 |
| |
| [SPARK-8759][SQL] add default eval to binary and unary expression according to default behavior of nullable |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-06 22:13:50 -0700 |
| Commit: c46aaf4, github.com/apache/spark/pull/7157 |
| |
| [SPARK-5562] [MLLIB] LDA should handle empty document. |
| Alok Singh <singhal@Aloks-MacBook-Pro.local>, Alok Singh <singhal@aloks-mbp.usca.ibm.com>, Alok Singh <“singhal@us.ibm.com”> |
| 2015-07-06 21:53:55 -0700 |
| Commit: 6718c1e, github.com/apache/spark/pull/7064 |
| |
| [SPARK-6747] [SQL] Throw an AnalysisException when unsupported Java list types used in Hive UDF |
| Takeshi YAMAMURO <linguin.m.s@gmail.com> |
| 2015-07-06 19:44:31 -0700 |
| Commit: 1821fc1, github.com/apache/spark/pull/7248 |
| |
| Revert "[SPARK-8781] Fix variables in published pom.xml are not resolved" |
| Andrew Or <andrew@databricks.com> |
| 2015-07-06 19:27:04 -0700 |
| Commit: 929dfa2 |
| |
| [SPARK-8819] Fix build for maven 3.3.x |
| Andrew Or <andrew@databricks.com> |
| 2015-07-06 19:22:30 -0700 |
| Commit: 9eae5fa, github.com/apache/spark/pull/7219 |
| |
| [SPARK-8463][SQL] Use DriverRegistry to load jdbc driver at writing path |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-06 17:16:44 -0700 |
| Commit: d4d6d31, github.com/apache/spark/pull/6900 |
| |
| [SPARK-8072] [SQL] Better AnalysisException for writing DataFrame with identically named columns |
| animesh <animesh@apache.spark> |
| 2015-07-06 16:39:49 -0700 |
| Commit: 09a0641, github.com/apache/spark/pull/7013 |
| |
| [SPARK-8588] [SQL] Regression test |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-06 16:26:31 -0700 |
| Commit: 7b467cc, github.com/apache/spark/pull/7103 |
| |
| [SPARK-8765] [MLLIB] Fix PySpark PowerIterationClustering test issue |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-06 16:15:12 -0700 |
| Commit: 0effe18, github.com/apache/spark/pull/7177 |
| |
| Revert "[SPARK-7212] [MLLIB] Add sequence learning flag" |
| Xiangrui Meng <meng@databricks.com> |
| 2015-07-06 16:11:22 -0700 |
| Commit: 96c5eee, github.com/apache/spark/pull/7240 |
| |
| [SPARK-6707] [CORE] [MESOS] Mesos Scheduler should allow the user to specify constraints based on slave attributes |
| Ankur Chauhan <achauhan@brightcove.com> |
| 2015-07-06 16:04:57 -0700 |
| Commit: 1165b17, github.com/apache/spark/pull/5563 |
| |
| [SPARK-8656] [WEBUI] Fix the webUI and JSON API number is not synced |
| Wisely Chen <wiselychen@appier.com> |
| 2015-07-06 16:04:01 -0700 |
| Commit: 9ff2033, github.com/apache/spark/pull/7038 |
| |
| [MINOR] [SQL] remove unused code in Exchange |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-06 15:54:43 -0700 |
| Commit: 132e7fc, github.com/apache/spark/pull/7234 |
| |
| [SPARK-4485] [SQL] 1) Add broadcast hash outer join, (2) Fix SparkPlanTest |
| kai <kaizeng@eecs.berkeley.edu> |
| 2015-07-06 14:33:30 -0700 |
| Commit: 2471c0b, github.com/apache/spark/pull/7162 |
| |
| [SPARK-8784] [SQL] Add Python API for hex and unhex |
| Davies Liu <davies@databricks.com> |
| 2015-07-06 13:31:31 -0700 |
| Commit: 37e4d92, github.com/apache/spark/pull/7223 |
| |
| Small update in the readme file |
| Dirceu Semighini Filho <dirceu.semighini@gmail.com> |
| 2015-07-06 13:28:07 -0700 |
| Commit: 57c72fc, github.com/apache/spark/pull/7242 |
| |
| [SPARK-8837][SPARK-7114][SQL] support using keyword in column name |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-06 13:26:46 -0700 |
| Commit: 0e19464, github.com/apache/spark/pull/7237 |
| |
| [SPARK-8124] [SPARKR] Created more examples on SparkR DataFrames |
| Daniel Emaasit (PhD Student) <daniel.emaasit@gmail.com> |
| 2015-07-06 10:36:02 -0700 |
| Commit: 293225e, github.com/apache/spark/pull/6668 |
| |
| [SPARK-8841] [SQL] Fix partition pruning percentage log message |
| Steve Lindemann <steve.lindemann@engineersgatelp.com> |
| 2015-07-06 10:17:05 -0700 |
| Commit: 39e4e7e, github.com/apache/spark/pull/7227 |
| |
| [SPARK-8831][SQL] Support AbstractDataType in TypeCollection. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-05 23:54:25 -0700 |
| Commit: 86768b7, github.com/apache/spark/pull/7232 |
| |
| [SQL][Minor] Update the DataFrame API for encode/decode |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-05 21:50:52 -0700 |
| Commit: 6d0411b, github.com/apache/spark/pull/7230 |
| |
| [SPARK-8549] [SPARKR] Fix the line length of SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-05 20:50:02 -0700 |
| Commit: a0cb111, github.com/apache/spark/pull/7204 |
| |
| [SPARK-7137] [ML] Update SchemaUtils checkInputColumn to print more info if needed |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-07-05 12:58:03 -0700 |
| Commit: f9c448d, github.com/apache/spark/pull/5992 |
| |
| [MINOR] [SQL] Minor fix for CatalystSchemaConverter |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-07-04 22:52:50 -0700 |
| Commit: 2b820f2, github.com/apache/spark/pull/7224 |
| |
| [SPARK-8822][SQL] clean up type checking in math.scala. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-04 11:55:20 -0700 |
| Commit: c991ef5, github.com/apache/spark/pull/7220 |
| |
| [SQL] More unit tests for implicit type cast & add simpleString to AbstractDataType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-04 11:55:04 -0700 |
| Commit: 347cab8, github.com/apache/spark/pull/7221 |
| |
| Fixed minor style issue with the previous merge. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-04 01:11:35 -0700 |
| Commit: 48f7aed |
| |
| [SPARK-8270][SQL] levenshtein distance |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-04 01:10:52 -0700 |
| Commit: 6b3574e, github.com/apache/spark/pull/7214 |
| |
| [SPARK-8238][SPARK-8239][SPARK-8242][SPARK-8243][SPARK-8268][SQL]Add ascii/base64/unbase64/encode/decode functions |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-07-03 23:45:21 -0700 |
| Commit: f35b0c3, github.com/apache/spark/pull/6843 |
| |
| [SPARK-8777] [SQL] Add random data generator test utilities to Spark SQL |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-03 23:05:17 -0700 |
| Commit: f32487b, github.com/apache/spark/pull/7176 |
| |
| [SPARK-8192] [SPARK-8193] [SQL] udf current_date, current_timestamp |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-07-03 22:19:43 -0700 |
| Commit: 9fb6b83, github.com/apache/spark/pull/6985 |
| |
| [SPARK-8572] [SQL] Type coercion for ScalaUDFs |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-07-03 22:14:21 -0700 |
| Commit: 4a22bce, github.com/apache/spark/pull/7203 |
| |
| [SPARK-8810] [SQL] Added several UDF unit tests for Spark SQL |
| Spiro Michaylov <spiro@michaylov.com> |
| 2015-07-03 20:15:58 -0700 |
| Commit: e92c24d, github.com/apache/spark/pull/7207 |
| |
| [SPARK-7401] [MLLIB] [PYSPARK] Vectorize dot product and sq_dist between SparseVector and DenseVector |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-03 15:49:32 -0700 |
| Commit: f0fac2a, github.com/apache/spark/pull/5946 |
| |
| [SPARK-8226] [SQL] Add function shiftrightunsigned |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-03 15:39:16 -0700 |
| Commit: ab535b9, github.com/apache/spark/pull/7035 |
| |
| [SPARK-8809][SQL] Remove ConvertNaNs analyzer rule. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-03 00:25:02 -0700 |
| Commit: 2848f4d, github.com/apache/spark/pull/7206 |
| |
| [SPARK-8803] handle special characters in elements in crosstab |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-07-02 22:10:24 -0700 |
| Commit: 9b23e92, github.com/apache/spark/pull/7201 |
| |
| [SPARK-8776] Increase the default MaxPermSize |
| Yin Huai <yhuai@databricks.com> |
| 2015-07-02 22:09:07 -0700 |
| Commit: f743c79, github.com/apache/spark/pull/7196 |
| |
| [SPARK-8801][SQL] Support TypeCollection in ExpectsInputTypes |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-02 21:45:25 -0700 |
| Commit: a59d14f, github.com/apache/spark/pull/7202 |
| |
| [SPARK-8501] [SQL] Avoids reading schema from empty ORC files |
| Cheng Lian <lian@databricks.com> |
| 2015-07-02 21:30:57 -0700 |
| Commit: 20a4d7d, github.com/apache/spark/pull/7199 |
| |
| Minor style fix for the previous commit. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-02 20:47:04 -0700 |
| Commit: dfd8bac |
| |
| [SPARK-8213][SQL]Add function factorial |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-02 20:37:31 -0700 |
| Commit: 1a7a7d7, github.com/apache/spark/pull/6822 |
| |
| [SPARK-6980] [CORE] Akka timeout exceptions indicate which conf controls them (RPC Layer) |
| Bryan Cutler <bjcutler@us.ibm.com>, Harsh Gupta <harsh@Harshs-MacBook-Pro.local>, BryanCutler <cutlerb@gmail.com> |
| 2015-07-02 21:38:21 -0500 |
| Commit: aa7bbc1, github.com/apache/spark/pull/6205 |
| |
| [SPARK-8782] [SQL] Fix code generation for ORDER BY NULL |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-02 18:07:09 -0700 |
| Commit: d983819, github.com/apache/spark/pull/7179 |
| |
| Revert "[SPARK-8784] [SQL] Add Python API for hex and unhex" |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-02 16:25:10 -0700 |
| Commit: e589e71 |
| |
| [SPARK-7104] [MLLIB] Support model save/load in Python's Word2Vec |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-07-02 15:55:16 -0700 |
| Commit: 488bad3, github.com/apache/spark/pull/6821 |
| |
| [SPARK-8784] [SQL] Add Python API for hex and unhex |
| Davies Liu <davies@databricks.com> |
| 2015-07-02 15:43:02 -0700 |
| Commit: fc7aebd, github.com/apache/spark/pull/7181 |
| |
| [SPARK-3382] [MLLIB] GradientDescent convergence tolerance |
| lewuathe <lewuathe@me.com> |
| 2015-07-02 15:00:13 -0700 |
| Commit: 7d9cc96, github.com/apache/spark/pull/3636 |
| |
| [SPARK-8772][SQL] Implement implicit type cast for expressions that define input types. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-02 14:16:14 -0700 |
| Commit: 52508be, github.com/apache/spark/pull/7175 |
| |
| [SPARK-7835] Refactor HeartbeatReceiverSuite for coverage + cleanup |
| Andrew Or <andrew@databricks.com> |
| 2015-07-02 13:59:56 -0700 |
| Commit: cd20355, github.com/apache/spark/pull/7173 |
| |
| [SPARK-1564] [DOCS] Added Javascript to Javadocs to create badges for tags like :: Experimental :: |
| Deron Eriksson <deron@us.ibm.com> |
| 2015-07-02 13:55:53 -0700 |
| Commit: fcbcba6, github.com/apache/spark/pull/7169 |
| |
| [SPARK-8781] Fix variables in published pom.xml are not resolved |
| Andrew Or <andrew@databricks.com> |
| 2015-07-02 13:49:45 -0700 |
| Commit: 82cf331, github.com/apache/spark/pull/7193 |
| |
| [SPARK-8479] [MLLIB] Add numNonzeros and numActives to linalg.Matrices |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-07-02 11:28:14 -0700 |
| Commit: 34d448d, github.com/apache/spark/pull/6904 |
| |
| [SPARK-8581] [SPARK-8584] Simplify checkpointing code + better error message |
| Andrew Or <andrew@databricks.com> |
| 2015-07-02 10:57:02 -0700 |
| Commit: 2e2f326, github.com/apache/spark/pull/6968 |
| |
| [SPARK-8708] [MLLIB] Paritition ALS ratings based on both users and products |
| Liang-Chi Hsieh <viirya@gmail.com>, Liang-Chi Hsieh <viirya@appier.com> |
| 2015-07-02 10:18:23 -0700 |
| Commit: 0e553a3, github.com/apache/spark/pull/7121 |
| |
| [SPARK-8407] [SQL] complex type constructors: struct and named_struct |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-07-02 10:12:25 -0700 |
| Commit: 52302a8, github.com/apache/spark/pull/6874 |
| |
| [SPARK-8747] [SQL] fix EqualNullSafe for binary type |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-02 10:06:38 -0700 |
| Commit: afa021e, github.com/apache/spark/pull/7143 |
| |
| [SPARK-8223] [SPARK-8224] [SQL] shift left and shift right |
| Tarek Auel <tarek.auel@googlemail.com> |
| 2015-07-02 10:02:19 -0700 |
| Commit: 5b33381, github.com/apache/spark/pull/7178 |
| |
| [SPARK-8758] [MLLIB] Add Python user guide for PowerIterationClustering |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-07-02 09:59:54 -0700 |
| Commit: 0a468a4, github.com/apache/spark/pull/7155 |
| |
| [SPARK-8647] [MLLIB] Potential issue with constant hashCode |
| Alok Singh <singhal@Aloks-MacBook-Pro.local> |
| 2015-07-02 09:58:57 -0700 |
| Commit: 99c40cd, github.com/apache/spark/pull/7146 |
| |
| [SPARK-8690] [SQL] Add a setting to disable SparkSQL parquet schema merge by using datasource API |
| Wisely Chen <wiselychen@appier.com> |
| 2015-07-02 09:58:12 -0700 |
| Commit: 246265f, github.com/apache/spark/pull/7070 |
| |
| [SPARK-8746] [SQL] update download link for Hive 0.13.1 |
| Christian Kadner <ckadner@us.ibm.com> |
| 2015-07-02 13:45:19 +0100 |
| Commit: 1bbdf9e, github.com/apache/spark/pull/7144 |
| |
| [SPARK-8787] [SQL] Changed parameter order of @deprecated in package object sql |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-07-02 13:42:48 +0100 |
| Commit: c572e25, github.com/apache/spark/pull/7183 |
| |
| [DOCS] Fix minor wrong lambda expression example. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-07-02 21:16:35 +0900 |
| Commit: 4158836, github.com/apache/spark/pull/7187 |
| |
| [SPARK-8687] [YARN] Fix bug: Executor can't fetch the new set configuration in yarn-client |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-07-01 23:14:13 -0700 |
| Commit: 1b0c8e6, github.com/apache/spark/pull/7066 |
| |
| [SPARK-3071] Increase default driver memory |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-07-01 23:11:02 -0700 |
| Commit: 3697232, github.com/apache/spark/pull/7132 |
| |
| [SPARK-8740] [PROJECT INFRA] Support GitHub OAuth tokens in dev/merge_spark_pr.py |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-07-01 23:06:52 -0700 |
| Commit: 377ff4c, github.com/apache/spark/pull/7136 |
| |
| [SPARK-8769] [TRIVIAL] [DOCS] toLocalIterator should mention it results in many jobs |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-01 23:05:45 -0700 |
| Commit: 15d41cc, github.com/apache/spark/pull/7171 |
| |
| [SPARK-8771] [TRIVIAL] Add a version to the deprecated annotation for the actorSystem |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-07-01 23:04:05 -0700 |
| Commit: d14338e, github.com/apache/spark/pull/7172 |
| |
| [SPARK-8688] [YARN] Bug fix: disable the cache fs to gain the HDFS connection. |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-07-01 23:01:44 -0700 |
| Commit: 646366b, github.com/apache/spark/pull/7069 |
| |
| [SPARK-8754] [YARN] YarnClientSchedulerBackend doesn't stop gracefully in failure conditions |
| Devaraj K <devaraj@apache.org> |
| 2015-07-01 22:59:04 -0700 |
| Commit: 792fcd8, github.com/apache/spark/pull/7153 |
| |
| [SPARK-8227] [SQL] Add function unhex |
| zhichao.li <zhichao.li@intel.com> |
| 2015-07-01 22:19:51 -0700 |
| Commit: b285ac5, github.com/apache/spark/pull/7113 |
| |
| [SPARK-8660] [MLLIB] removed > symbols from comments in LogisticRegressionSuite.scala for ease of copypaste |
| Rosstin <asterazul@gmail.com> |
| 2015-07-01 21:42:06 -0700 |
| Commit: 4e4f74b, github.com/apache/spark/pull/7167 |
| |
| [SPARK-8770][SQL] Create BinaryOperator abstract class. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 21:14:13 -0700 |
| Commit: 9fd13d5, github.com/apache/spark/pull/7174 |
| |
| Revert "[SPARK-8770][SQL] Create BinaryOperator abstract class." |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 16:59:39 -0700 |
| Commit: 3a342de |
| |
| [SPARK-8770][SQL] Create BinaryOperator abstract class. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 16:56:48 -0700 |
| Commit: 2727789, github.com/apache/spark/pull/7170 |
| |
| [SPARK-8766] support non-ascii character in column names |
| Davies Liu <davies@databricks.com> |
| 2015-07-01 16:43:18 -0700 |
| Commit: f958f27, github.com/apache/spark/pull/7165 |
| |
| [SPARK-3444] [CORE] Restore INFO level after log4j test. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-07-01 20:40:47 +0100 |
| Commit: 1ce6428, github.com/apache/spark/pull/7140 |
| |
| [QUICKFIX] [SQL] fix copy of generated row |
| Davies Liu <davies@databricks.com> |
| 2015-07-01 12:39:57 -0700 |
| Commit: 3083e17, github.com/apache/spark/pull/7163 |
| |
| [SPARK-7820] [BUILD] Fix Java8-tests suite compile and test error under sbt |
| jerryshao <saisai.shao@intel.com> |
| 2015-07-01 12:33:24 -0700 |
| Commit: 9f7db34, github.com/apache/spark/pull/7120 |
| |
| [SPARK-8378] [STREAMING] Add the Python API for Flume |
| zsxwing <zsxwing@gmail.com> |
| 2015-07-01 11:59:24 -0700 |
| Commit: 75b9fe4, github.com/apache/spark/pull/6830 |
| |
| [SPARK-8765] [MLLIB] [PYTHON] removed flaky python PIC test |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-07-01 11:57:52 -0700 |
| Commit: b8faa32, github.com/apache/spark/pull/7164 |
| |
| [SPARK-8308] [MLLIB] add missing save load for python example |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-07-01 11:17:56 -0700 |
| Commit: 2012913, github.com/apache/spark/pull/6760 |
| |
| [SPARK-6263] [MLLIB] Python MLlib API missing items: Utils |
| lewuathe <lewuathe@me.com> |
| 2015-07-01 11:14:07 -0700 |
| Commit: 184de91, github.com/apache/spark/pull/5707 |
| |
| [SPARK-8621] [SQL] support empty string as column name |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-01 10:31:35 -0700 |
| Commit: 31b4a3d, github.com/apache/spark/pull/7149 |
| |
| [SPARK-8752][SQL] Add ExpectsInputTypes trait for defining expected input types. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 10:30:54 -0700 |
| Commit: 4137f76, github.com/apache/spark/pull/7151 |
| |
| [SPARK-7714] [SPARKR] SparkR tests should use more specific expectations than expect_true |
| Sun Rui <rui.sun@intel.com> |
| 2015-07-01 09:50:12 -0700 |
| Commit: 69c5dee, github.com/apache/spark/pull/7152 |
| |
| [SPARK-8763] [PYSPARK] executing run-tests.py with Python 2.6 fails with absence of subprocess.check_output function |
| cocoatomo <cocoatomo77@gmail.com> |
| 2015-07-01 09:37:09 -0700 |
| Commit: fdcad6e, github.com/apache/spark/pull/7161 |
| |
| [SPARK-8750][SQL] Remove the closure in functions.callUdf. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 01:08:20 -0700 |
| Commit: 9765241, github.com/apache/spark/pull/7148 |
| |
| [SQL] [MINOR] remove internalRowRDD in DataFrame |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-07-01 01:02:33 -0700 |
| Commit: 0eee061, github.com/apache/spark/pull/7116 |
| |
| [SPARK-8749][SQL] Remove HiveTypeCoercion trait. |
| Reynold Xin <rxin@databricks.com> |
| 2015-07-01 00:08:16 -0700 |
| Commit: fc3a6fe, github.com/apache/spark/pull/7147 |
| |
| [SPARK-8748][SQL] Move castability test out from Cast case class into Cast object. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-30 23:04:54 -0700 |
| Commit: 365c140, github.com/apache/spark/pull/7145 |
| |
| [SPARK-6602][Core]Remove unnecessary synchronized |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-30 21:57:07 -0700 |
| Commit: 64c1461, github.com/apache/spark/pull/7141 |
| |
| [SPARK-8535] [PYSPARK] PySpark : Can't create DataFrame from Pandas dataframe with no explicit column name |
| x1- <viva008@gmail.com> |
| 2015-06-30 20:35:46 -0700 |
| Commit: b6e76ed, github.com/apache/spark/pull/7124 |
| |
| [SPARK-8471] [ML] Rename DiscreteCosineTransformer to DCT |
| Feynman Liang <fliang@databricks.com> |
| 2015-06-30 20:19:43 -0700 |
| Commit: f457569, github.com/apache/spark/pull/7138 |
| |
| [SPARK-6602][Core] Update Master, Worker, Client, AppClient and related classes to use RpcEndpoint |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-30 17:39:55 -0700 |
| Commit: 3bee0f1, github.com/apache/spark/pull/5392 |
| |
| [SPARK-8727] [SQL] Missing python api; md5, log2 |
| Tarek Auel <tarek.auel@gmail.com>, Tarek Auel <tarek.auel@googlemail.com> |
| 2015-06-30 16:59:44 -0700 |
| Commit: ccdb052, github.com/apache/spark/pull/7114 |
| |
| [SPARK-8741] [SQL] Remove e and pi from DataFrame functions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-30 16:54:51 -0700 |
| Commit: 8133125, github.com/apache/spark/pull/7137 |
| |
| [SPARK-7739] [MLLIB] Improve ChiSqSelector example code in user guide |
| sethah <seth.hendrickson16@gmail.com> |
| 2015-06-30 16:28:25 -0700 |
| Commit: 8d23587, github.com/apache/spark/pull/7029 |
| |
| [SPARK-8738] [SQL] [PYSPARK] capture SQL AnalysisException in Python API |
| Davies Liu <davies@databricks.com> |
| 2015-06-30 16:17:46 -0700 |
| Commit: 58ee2a2, github.com/apache/spark/pull/7135 |
| |
| [SPARK-8739] [WEB UI] [WINDOWS] An illegal character `\r` can be contained in StagePage. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-06-30 14:09:29 -0700 |
| Commit: d2495f7, github.com/apache/spark/pull/7133 |
| |
| [SPARK-8563] [MLLIB] Fixed a bug so that IndexedRowMatrix.computeSVD().U.numCols = k |
| lee19 <lee19@live.co.kr> |
| 2015-06-30 14:08:00 -0700 |
| Commit: e725262, github.com/apache/spark/pull/6953 |
| |
| [SPARK-8705] [WEBUI] Don't display rects when totalExecutionTime is 0 |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-30 14:06:50 -0700 |
| Commit: 8c89896, github.com/apache/spark/pull/7088 |
| |
| [SPARK-8736] [ML] GBTRegressor should not threshold prediction |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-06-30 14:02:50 -0700 |
| Commit: 3ba23ff, github.com/apache/spark/pull/7134 |
| |
| [SPARK-8372] Do not show applications that haven't recorded their app ID yet. |
| Marcelo Vanzin <vanzin@cloudera.com>, Carson Wang <carson.wang@intel.com> |
| 2015-06-30 14:01:52 -0700 |
| Commit: 4bb8375, github.com/apache/spark/pull/7097 |
| |
| [SPARK-2645] [CORE] Allow SparkEnv.stop() to be called multiple times without side effects. |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-06-30 14:00:35 -0700 |
| Commit: 7dda084, github.com/apache/spark/pull/6973 |
| |
| [SPARK-8560] [UI] The Executors page will show negative values if tasks are resubmitted |
| xutingjun <xutingjun@huawei.com> |
| 2015-06-30 13:56:59 -0700 |
| Commit: 79f0b37, github.com/apache/spark/pull/6950 |
| |
| [SPARK-7514] [MLLIB] Add MinMaxScaler to feature transformation |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-06-30 12:44:43 -0700 |
| Commit: 61d7b53, github.com/apache/spark/pull/6039 |
| |
| [SPARK-8471] [ML] Discrete Cosine Transform Feature Transformer |
| Feynman Liang <fliang@databricks.com> |
| 2015-06-30 12:31:33 -0700 |
| Commit: 74cc16d, github.com/apache/spark/pull/6894 |
| |
| [SPARK-8628] [SQL] Race condition in AbstractSparkSQLParser.parse |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-06-30 12:24:47 -0700 |
| Commit: b8e5bb6, github.com/apache/spark/pull/7015 |
| |
| [SPARK-8664] [ML] Add PCA transformer |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-30 12:23:48 -0700 |
| Commit: c1befd7, github.com/apache/spark/pull/7065 |
| |
| [SPARK-6785] [SQL] fix DateTimeUtils for dates before 1970 |
| Christian Kadner <ckadner@us.ibm.com> |
| 2015-06-30 12:22:34 -0700 |
| Commit: 1e1f339, github.com/apache/spark/pull/6983 |
| |
| [SPARK-8619] [STREAMING] Don't recover keytab and principal configuration within Streaming checkpoint |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-06-30 11:46:22 -0700 |
| Commit: d16a944, github.com/apache/spark/pull/7008 |
| |
| [SPARK-8630] [STREAMING] Prevent from checkpointing QueueInputDStream |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-30 11:14:38 -0700 |
| Commit: 5726440, github.com/apache/spark/pull/7016 |
| |
| [SPARK-7988] [STREAMING] Round-robin scheduling of receivers by default |
| nishkamravi2 <nishkamravi@gmail.com>, Nishkam Ravi <nravi@cloudera.com> |
| 2015-06-30 11:12:15 -0700 |
| Commit: ca7e460, github.com/apache/spark/pull/6607 |
| |
| [SPARK-8615] [DOCUMENTATION] Fixed Sample deprecated code |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-06-30 10:50:45 -0700 |
| Commit: 9213f73, github.com/apache/spark/pull/7039 |
| |
| [SPARK-8713] Make codegen thread safe |
| Davies Liu <davies@databricks.com> |
| 2015-06-30 10:48:49 -0700 |
| Commit: fbb267e, github.com/apache/spark/pull/7101 |
| |
| [SPARK-8679] [PYSPARK] [MLLIB] Default values in Pipeline API should be immutable |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-30 10:27:29 -0700 |
| Commit: 5fa0863, github.com/apache/spark/pull/7058 |
| |
| [SPARK-4127] [MLLIB] [PYSPARK] Python bindings for StreamingLinearRegressionWithSGD |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-30 10:25:59 -0700 |
| Commit: 4528166, github.com/apache/spark/pull/6744 |
| |
| [SPARK-8437] [DOCS] Corrected: Using directory path without wildcard for filename slow for large number of files with wholeTextFiles and binaryFiles |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-30 10:07:26 -0700 |
| Commit: ada384b, github.com/apache/spark/pull/7126 |
| |
| [SPARK-8592] [CORE] CoarseGrainedExecutorBackend: Cannot register with driver => NPE |
| xuchenCN <chenxu198511@gmail.com> |
| 2015-06-30 10:05:51 -0700 |
| Commit: 689da28, github.com/apache/spark/pull/7110 |
| |
| [SPARK-8236] [SQL] misc functions: crc32 |
| Shilei <shilei.qian@intel.com> |
| 2015-06-30 09:49:58 -0700 |
| Commit: 722aa5f, github.com/apache/spark/pull/7108 |
| |
| [SPARK-8680] [SQL] Slightly improve PropagateTypes |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-30 08:17:24 -0700 |
| Commit: a48e619, github.com/apache/spark/pull/7087 |
| |
| [SPARK-8723] [SQL] improve divide and remainder code gen |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-30 08:08:15 -0700 |
| Commit: 865a834, github.com/apache/spark/pull/7111 |
| |
| [SPARK-8590] [SQL] add code gen for ExtractValue |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-30 07:58:49 -0700 |
| Commit: 08fab48, github.com/apache/spark/pull/6982 |
| |
| [SPARK-7756] [CORE] More robust SSL options processing. |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-06-30 13:49:52 +0100 |
| Commit: 2ed0c0a, github.com/apache/spark/pull/7043 |
| |
| [SPARK-8551] [ML] Elastic net python code example |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-06-29 23:50:34 -0700 |
| Commit: 5452457, github.com/apache/spark/pull/6946 |
| |
| [SPARK-8434][SQL]Add a "pretty" parameter to the "show" method to display long strings |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-29 23:44:11 -0700 |
| Commit: 12671dd, github.com/apache/spark/pull/6877 |
| |
| [SPARK-5161] [HOTFIX] Fix bug in Python test failure reporting |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-29 23:08:51 -0700 |
| Commit: 6c5a6db, github.com/apache/spark/pull/7112 |
| |
| [SPARK-8650] [SQL] Use the user-specified app name priority in SparkSQLCLIDriver or HiveThriftServer2 |
| Yadong Qi <qiyadong2010@gmail.com> |
| 2015-06-29 22:34:38 -0700 |
| Commit: e6c3f74, github.com/apache/spark/pull/7030 |
| |
| [SPARK-8721][SQL] Rename ExpectsInputTypes => AutoCastInputTypes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-29 22:32:43 -0700 |
| Commit: f79410c, github.com/apache/spark/pull/7109 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-29 21:41:59 -0700 |
| Commit: ea775b0, github.com/apache/spark/pull/1767 |
| |
| [SPARK-5161] Parallelize Python test execution |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-29 21:32:40 -0700 |
| Commit: 7bbbe38, github.com/apache/spark/pull/7031 |
| |
| [SPARK-7667] [MLLIB] MLlib Python API consistency check |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-29 18:50:23 -0700 |
| Commit: f9b6bf2, github.com/apache/spark/pull/6856 |
| |
| [SPARK-8669] [SQL] Fix crash with BINARY (ENUM) fields with Parquet 1.7 |
| Steven She <steven@canopylabs.com> |
| 2015-06-29 18:50:09 -0700 |
| Commit: 4915e9e, github.com/apache/spark/pull/7048 |
| |
| [SPARK-8715] ArrayOutOfBoundsException fixed for DataFrameStatSuite.crosstab |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-06-29 18:48:28 -0700 |
| Commit: ecacb1e, github.com/apache/spark/pull/7100 |
| |
| [SPARK-8456] [ML] Ngram featurizer python |
| Feynman Liang <fliang@databricks.com> |
| 2015-06-29 18:40:30 -0700 |
| Commit: 620605a, github.com/apache/spark/pull/6960 |
| |
| Revert "[SPARK-8437] [DOCS] Using directory path without wildcard for filename slow for large number of files with wholeTextFiles and binaryFiles" |
| Andrew Or <andrew@databricks.com> |
| 2015-06-29 18:32:31 -0700 |
| Commit: 4c1808b |
| |
| [SPARK-8019] [SPARKR] Support SparkR spawning worker R processes with a command other than Rscript |
| Michael Sannella x268 <msannell@tibco.com> |
| 2015-06-29 17:28:28 -0700 |
| Commit: 4a9e03f, github.com/apache/spark/pull/6557 |
| |
| [SPARK-8410] [SPARK-8475] remove previous ivy resolution when using spark-submit |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-06-29 17:27:02 -0700 |
| Commit: d7f796d, github.com/apache/spark/pull/7089 |
| |
| [SPARK-8437] [DOCS] Using directory path without wildcard for filename slow for large number of files with wholeTextFiles and binaryFiles |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-29 17:21:35 -0700 |
| Commit: 5d30eae, github.com/apache/spark/pull/7036 |
| |
| [SPARK-7287] [SPARK-8567] [TEST] Add sc.stop to applications in SparkSubmitSuite |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-29 17:20:05 -0700 |
| Commit: fbf7573, github.com/apache/spark/pull/7027 |
| |
| [SPARK-8634] [STREAMING] [TESTS] Fix flaky test StreamingListenerSuite "receiver info reporting" |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-29 17:19:05 -0700 |
| Commit: cec9852, github.com/apache/spark/pull/7017 |
| |
| [SPARK-8589] [SQL] cleanup DateTimeUtils |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-29 16:34:50 -0700 |
| Commit: 881662e, github.com/apache/spark/pull/6980 |
| |
| [SPARK-8710] [SQL] Change ScalaReflection.mirror from a val to a def. |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-29 16:26:05 -0700 |
| Commit: 4b497a7, github.com/apache/spark/pull/7094 |
| |
| [SPARK-8661][ML] for LinearRegressionSuite.scala, changed javadoc-style comments to regular multiline comments, to make copy-pasting R code more simple |
| Rosstin <asterazul@gmail.com> |
| 2015-06-29 16:09:29 -0700 |
| Commit: 4e880cf, github.com/apache/spark/pull/7098 |
| |
| [SPARK-8579] [SQL] support arbitrary object in UnsafeRow |
| Davies Liu <davies@databricks.com> |
| 2015-06-29 15:59:20 -0700 |
| Commit: ed359de, github.com/apache/spark/pull/6959 |
| |
| [SPARK-8478] [SQL] Harmonize UDF-related code to use uniformly UDF instead of Udf |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-06-29 15:27:13 -0700 |
| Commit: 931da5c, github.com/apache/spark/pull/6920 |
| |
| [SPARK-8660][ML] Convert JavaDoc style comments in LogisticRegressionSuite.scala to regular multiline comments, to make copy-pasting R commands easier |
| Rosstin <asterazul@gmail.com> |
| 2015-06-29 14:45:08 -0700 |
| Commit: c8ae887, github.com/apache/spark/pull/7096 |
| |
| [SPARK-7810] [PYSPARK] solve python rdd socket connection problem |
| Ai He <ai.he@ussuning.com>, AiHe <ai.he@ussuning.com> |
| 2015-06-29 14:36:26 -0700 |
| Commit: ecd3aac, github.com/apache/spark/pull/6338 |
| |
| [SPARK-8056][SQL] Design an easier way to construct schema for both Scala and Python |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-06-29 14:15:15 -0700 |
| Commit: f6fc254, github.com/apache/spark/pull/6686 |
| |
| [SPARK-8709] Exclude hadoop-client's mockito-all dependency |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-29 14:07:55 -0700 |
| Commit: 27ef854, github.com/apache/spark/pull/7090 |
| |
| [SPARK-8070] [SQL] [PYSPARK] avoid spark jobs in createDataFrame |
| Davies Liu <davies@databricks.com> |
| 2015-06-29 13:20:55 -0700 |
| Commit: afae976, github.com/apache/spark/pull/6606 |
| |
| [SPARK-8681] fixed wrong ordering of columns in crosstab |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-06-29 13:15:04 -0700 |
| Commit: be7ef06, github.com/apache/spark/pull/7060 |
| |
| [SPARK-7862] [SQL] Disable the error message redirect to stderr |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-29 12:46:33 -0700 |
| Commit: c6ba2ea, github.com/apache/spark/pull/6882 |
| |
| [SPARK-8214] [SQL] Add function hex |
| zhichao.li <zhichao.li@intel.com> |
| 2015-06-29 12:25:16 -0700 |
| Commit: 637b4ee, github.com/apache/spark/pull/6976 |
| |
| [SQL][DOCS] Remove wrong example from DataFrame.scala |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-06-29 12:16:12 -0700 |
| Commit: 94e040d, github.com/apache/spark/pull/6977 |
| |
| [SPARK-8528] Expose SparkContext.applicationId in PySpark |
| Vladimir Vladimirov <vladimir.vladimirov@magnetic.com> |
| 2015-06-29 12:03:41 -0700 |
| Commit: 492dca3, github.com/apache/spark/pull/6936 |
| |
| [SPARK-8235] [SQL] misc function sha / sha1 |
| Tarek Auel <tarek.auel@gmail.com>, Tarek Auel <tarek.auel@googlemail.com> |
| 2015-06-29 11:57:19 -0700 |
| Commit: a5c2961, github.com/apache/spark/pull/6963 |
| |
| [SPARK-8066, SPARK-8067] [hive] Add support for Hive 1.0, 1.1 and 1.2. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-29 11:53:17 -0700 |
| Commit: 3664ee2, github.com/apache/spark/pull/7026 |
| |
| [SPARK-8692] [SQL] re-order the case statements that handling catalyst data types |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-29 11:41:26 -0700 |
| Commit: ed413bc, github.com/apache/spark/pull/7073 |
| |
| Revert "[SPARK-8372] History server shows incorrect information for application not started" |
| Andrew Or <andrew@databricks.com> |
| 2015-06-29 10:52:05 -0700 |
| Commit: ea88b1a |
| |
| [SPARK-8554] Add the SparkR document files to `.rat-excludes` for `./dev/check-license` |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-29 09:22:55 -0700 |
| Commit: 715f084, github.com/apache/spark/pull/6947 |
| |
| [SPARK-8693] [PROJECT INFRA] profiles and goals are not printed in a nice way |
| Brennon York <brennon.york@capitalone.com> |
| 2015-06-29 08:55:06 -0700 |
| Commit: 5c796d5, github.com/apache/spark/pull/7085 |
| |
| [SPARK-8702] [WEBUI] Avoid massive concatenation of strings in JavaScript |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-30 00:46:55 +0900 |
| Commit: 630bd5f, github.com/apache/spark/pull/7082 |
| |
| [SPARK-8698] partitionBy in Python DataFrame reader/writer interface should not default to empty tuple. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-29 00:22:44 -0700 |
| Commit: 660c6ce, github.com/apache/spark/pull/7079 |
| |
| [SPARK-8355] [SQL] Python DataFrameReader/Writer should mirror Scala |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-06-29 00:13:39 -0700 |
| Commit: ac2e17b, github.com/apache/spark/pull/7078 |
| |
| [SPARK-8575] [SQL] Deprecate callUDF in favor of udf |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-06-28 22:43:47 -0700 |
| Commit: 0b10662, github.com/apache/spark/pull/6993 |
| |
| [SPARK-5962] [MLLIB] Python support for Power Iteration Clustering |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-28 22:38:04 -0700 |
| Commit: dfde31d, github.com/apache/spark/pull/6992 |
| |
| [SPARK-7212] [MLLIB] Add sequence learning flag |
| Feynman Liang <fliang@databricks.com> |
| 2015-06-28 22:26:07 -0700 |
| Commit: 25f574e, github.com/apache/spark/pull/6997 |
| |
| [SPARK-7845] [BUILD] Bumping default Hadoop version used in profile hadoop-1 to 1.2.1 |
| Cheng Lian <lian@databricks.com> |
| 2015-06-28 19:34:59 -0700 |
| Commit: 00a9d22, github.com/apache/spark/pull/7062 |
| |
| [SPARK-8677] [SQL] Fix non-terminating decimal expansion for decimal divide operation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-28 14:48:44 -0700 |
| Commit: 24fda73, github.com/apache/spark/pull/7056 |
| |
| [SPARK-8596] [EC2] Added port for RStudio |
| Vincent D. Warmerdam <vincentwarmerdam@gmail.com>, vincent <vincentwarmerdam@gmail.com> |
| 2015-06-28 13:33:33 -0700 |
| Commit: 9ce78b4, github.com/apache/spark/pull/7068 |
| |
| [SPARK-8686] [SQL] DataFrame should support `where` with expression represented by String |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-06-28 08:29:07 -0700 |
| Commit: ec78438, github.com/apache/spark/pull/7063 |
| |
| [SPARK-8610] [SQL] Separate Row and InternalRow (part 2) |
| Davies Liu <davies@databricks.com> |
| 2015-06-28 08:03:58 -0700 |
| Commit: 77da5be, github.com/apache/spark/pull/7003 |
| |
| [SPARK-8649] [BUILD] Mapr repository is not defined properly |
| Thomas Szymanski <develop@tszymanski.com> |
| 2015-06-28 01:06:49 -0700 |
| Commit: 52d1281, github.com/apache/spark/pull/7054 |
| |
| [SPARK-8683] [BUILD] Depend on mockito-core instead of mockito-all |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-27 23:27:52 -0700 |
| Commit: f510045, github.com/apache/spark/pull/7061 |
| |
| [HOTFIX] Fix pull request builder bug in #6967 |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-27 23:07:20 -0700 |
| Commit: 42db3a1 |
| |
| [SPARK-8583] [SPARK-5482] [BUILD] Refactor python/run-tests to integrate with dev/run-tests module system |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-27 20:24:34 -0700 |
| Commit: 40648c5, github.com/apache/spark/pull/6967 |
| |
| [SPARK-8606] Prevent exceptions in RDD.getPreferredLocations() from crashing DAGScheduler |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-27 14:40:45 -0700 |
| Commit: 0b5abbf, github.com/apache/spark/pull/7023 |
| |
| [SPARK-8623] Hadoop RDDs fail to properly serialize configuration |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-06-27 14:33:31 -0700 |
| Commit: 4153776, github.com/apache/spark/pull/7050 |
| |
| [SPARK-3629] [YARN] [DOCS]: Improvement of the "Running Spark on YARN" document |
| Neelesh Srinivas Salian <nsalian@cloudera.com> |
| 2015-06-27 09:07:10 +0300 |
| Commit: d48e789, github.com/apache/spark/pull/6924 |
| |
| [SPARK-8639] [DOCS] Fixed Minor Typos in Documentation |
| Rosstin <asterazul@gmail.com> |
| 2015-06-27 08:47:00 +0300 |
| Commit: b5a6663, github.com/apache/spark/pull/7046 |
| |
| [SPARK-8607] SparkR -- jars not being added to application classpath correctly |
| cafreeman <cfreeman@alteryx.com> |
| 2015-06-26 17:06:02 -0700 |
| Commit: 9d11817, github.com/apache/spark/pull/7001 |
| |
| [SPARK-8662] SparkR Update SparkSQL Test |
| cafreeman <cfreeman@alteryx.com> |
| 2015-06-26 10:07:35 -0700 |
| Commit: a56516f, github.com/apache/spark/pull/7045 |
| |
| [SPARK-8652] [PYSPARK] Check return value for all uses of doctest.testmod() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-26 08:12:22 -0700 |
| Commit: 41afa16, github.com/apache/spark/pull/7032 |
| |
| [SPARK-8302] Support heterogeneous cluster install paths on YARN. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-26 08:45:22 -0500 |
| Commit: 37bf76a, github.com/apache/spark/pull/6752 |
| |
| [SPARK-8613] [ML] [TRIVIAL] add param to disable linear feature scaling |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-06-26 01:19:05 -0700 |
| Commit: c9e05a3, github.com/apache/spark/pull/7024 |
| |
| [SPARK-8344] Add message processing time metric to DAGScheduler |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-26 00:12:05 -0700 |
| Commit: 9fed6ab, github.com/apache/spark/pull/7002 |
| |
| [SPARK-8635] [SQL] improve performance of CatalystTypeConverters |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-25 22:44:26 -0700 |
| Commit: 1a79f0e, github.com/apache/spark/pull/7018 |
| |
| [SPARK-8620] [SQL] cleanup CodeGenContext |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-25 22:16:53 -0700 |
| Commit: 4036011, github.com/apache/spark/pull/7010 |
| |
| [SPARK-8237] [SQL] Add misc function sha2 |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-25 22:07:37 -0700 |
| Commit: 47c874b, github.com/apache/spark/pull/6934 |
| |
| [SPARK-8637] [SPARKR] [HOTFIX] Fix packages argument, sparkSubmitBinName |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-25 10:56:00 -0700 |
| Commit: c392a9e, github.com/apache/spark/pull/7022 |
| |
| [MINOR] [MLLIB] rename some functions of PythonMLLibAPI |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-25 08:13:17 -0700 |
| Commit: 2519dcc, github.com/apache/spark/pull/7011 |
| |
| [SPARK-8567] [SQL] Add logs to record the progress of HiveSparkSubmitSuite. |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-25 06:52:03 -0700 |
| Commit: f9b397f, github.com/apache/spark/pull/7009 |
| |
| [SPARK-8574] org/apache/spark/unsafe doesn't honor the java source/ta… |
| Tom Graves <tgraves@yahoo-inc.com>, Thomas Graves <tgraves@staydecay.corp.gq1.yahoo.com> |
| 2015-06-25 08:27:08 -0500 |
| Commit: e988adb, github.com/apache/spark/pull/6989 |
| |
| [SPARK-5768] [WEB UI] Fix for incorrect memory in Spark UI |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-06-25 20:21:34 +0900 |
| Commit: 085a721, github.com/apache/spark/pull/6972 |
| |
| [SPARK-8604] [SQL] HadoopFsRelation subclasses should set their output format class |
| Cheng Lian <lian@databricks.com> |
| 2015-06-25 00:06:23 -0700 |
| Commit: c337844, github.com/apache/spark/pull/6998 |
| |
| [SPARK-7884] Move block deserialization from BlockStoreShuffleFetcher to ShuffleReader |
| Matt Massie <massie@cs.berkeley.edu>, Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-06-24 22:09:31 -0700 |
| Commit: 7bac2fe, github.com/apache/spark/pull/6423 |
| |
| Two minor SQL cleanups (compiler warning & indent). |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-24 19:34:07 -0700 |
| Commit: 82f80c1, github.com/apache/spark/pull/7000 |
| |
| [SPARK-8075] [SQL] apply type check interface to more expressions |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-24 16:26:00 -0700 |
| Commit: b71d325, github.com/apache/spark/pull/6723 |
| |
| [SPARK-8567] [SQL] Increase the timeout of HiveSparkSubmitSuite |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-24 15:52:58 -0700 |
| Commit: 7daa702, github.com/apache/spark/pull/6957 |
| |
| [SPARK-8558] [BUILD] Script /dev/run-tests fails when _JAVA_OPTIONS env var set |
| fe2s <aka.fe2s@gmail.com>, Oleksiy Dyagilev <oleksiy_dyagilev@epam.com> |
| 2015-06-24 15:12:23 -0700 |
| Commit: dca21a8, github.com/apache/spark/pull/6956 |
| |
| [SPARK-6777] [SQL] Implements backwards compatibility rules in CatalystSchemaConverter |
| Cheng Lian <lian@databricks.com> |
| 2015-06-24 15:03:43 -0700 |
| Commit: 8ab5076, github.com/apache/spark/pull/6617 |
| |
| [SPARK-7633] [MLLIB] [PYSPARK] Python bindings for StreamingLogisticRegressionwithSGD |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-24 14:58:43 -0700 |
| Commit: fb32c38, github.com/apache/spark/pull/6849 |
| |
| [SPARK-7289] handle project -> limit -> sort efficiently |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-24 13:28:50 -0700 |
| Commit: f04b567, github.com/apache/spark/pull/6780 |
| |
| [SPARK-7088] [SQL] Fix analysis for 3rd party logical plan. |
| Santiago M. Mola <smola@stratio.com> |
| 2015-06-24 12:29:07 -0700 |
| Commit: b84d4b4, github.com/apache/spark/pull/6853 |
| |
| [SPARK-8506] Add packages to R context created through init. |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-06-24 11:55:20 -0700 |
| Commit: 43e6619, github.com/apache/spark/pull/6928 |
| |
| [SPARK-8399] [STREAMING] [WEB UI] Overlap between histograms and axis' name in Spark Streaming UI |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-06-24 11:53:03 -0700 |
| Commit: 1173483, github.com/apache/spark/pull/6845 |
| |
| [SPARK-8576] Add spark-ec2 options to set IAM roles and instance-initiated shutdown behavior |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-06-24 11:20:51 -0700 |
| Commit: 31f48e5, github.com/apache/spark/pull/6962 |
| |
| [SPARK-8578] [SQL] Should ignore user defined output committer when appending data |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-24 09:50:03 -0700 |
| Commit: bba6699, github.com/apache/spark/pull/6964 |
| |
| [SPARK-8567] [SQL] Debugging flaky HiveSparkSubmitSuite |
| Cheng Lian <lian@databricks.com> |
| 2015-06-24 09:49:20 -0700 |
| Commit: 9d36ec2, github.com/apache/spark/pull/6978 |
| |
| [SPARK-8138] [SQL] Improves error message when conflicting partition columns are found |
| Cheng Lian <lian@databricks.com> |
| 2015-06-24 02:17:12 -0700 |
| Commit: cc465fd, github.com/apache/spark/pull/6610 |
| |
| [SPARK-8371] [SQL] improve unit test for MaxOf and MinOf and fix bugs |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-23 23:11:42 -0700 |
| Commit: 09fcf96, github.com/apache/spark/pull/6825 |
| |
| [HOTFIX] [BUILD] Fix MiMa checks in master branch; enable MiMa for launcher project |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-23 23:03:59 -0700 |
| Commit: 13ae806, github.com/apache/spark/pull/6974 |
| |
| [SPARK-6749] [SQL] Make metastore client robust to underlying socket connection loss |
| Eric Liang <ekl@databricks.com> |
| 2015-06-23 22:27:17 -0700 |
| Commit: 50c3a86, github.com/apache/spark/pull/6912 |
| |
| Revert "[SPARK-7157][SQL] add sampleBy to DataFrame" |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-23 19:30:25 -0700 |
| Commit: a458efc |
| |
| [SPARK-7157][SQL] add sampleBy to DataFrame |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-23 17:46:29 -0700 |
| Commit: 0401cba, github.com/apache/spark/pull/6769 |
| |
| [SPARK-8139] [SQL] Updates docs and comments of data sources and Parquet output committer options |
| Cheng Lian <lian@databricks.com> |
| 2015-06-23 17:24:26 -0700 |
| Commit: 111d6b9, github.com/apache/spark/pull/6683 |
| |
| [SPARK-8573] [SPARK-8568] [SQL] [PYSPARK] raise Exception if column is used in boolean expression |
| Davies Liu <davies@databricks.com> |
| 2015-06-23 15:51:16 -0700 |
| Commit: 7fb5ae5, github.com/apache/spark/pull/6961 |
| |
| [DOC] [SQL] Adds Hive metastore Parquet table conversion section |
| Cheng Lian <lian@databricks.com> |
| 2015-06-23 14:19:21 -0700 |
| Commit: d96d7b5, github.com/apache/spark/pull/5348 |
| |
| [SPARK-8525] [MLLIB] fix LabeledPoint parser when there is a whitespace between label and features vector |
| Oleksiy Dyagilev <oleksiy_dyagilev@epam.com> |
| 2015-06-23 13:12:19 -0700 |
| Commit: a803118, github.com/apache/spark/pull/6954 |
| |
| [SPARK-8111] [SPARKR] SparkR shell should display Spark logo and version banner on startup. |
| Alok Singh <singhal@Aloks-MacBook-Pro.local>, Alok Singh <singhal@aloks-mbp.usca.ibm.com> |
| 2015-06-23 12:47:55 -0700 |
| Commit: f2fb028, github.com/apache/spark/pull/6944 |
| |
| [SPARK-8265] [MLLIB] [PYSPARK] Add LinearDataGenerator to pyspark.mllib.utils |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-23 12:43:32 -0700 |
| Commit: f2022fa, github.com/apache/spark/pull/6715 |
| |
| [SPARK-7888] Be able to disable intercept in linear regression in ml package |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-06-23 12:42:17 -0700 |
| Commit: 2b1111d, github.com/apache/spark/pull/6927 |
| |
| [SPARK-8432] [SQL] fix hashCode() and equals() of BinaryType in Row |
| Davies Liu <davies@databricks.com> |
| 2015-06-23 11:55:47 -0700 |
| Commit: 6f4cadf, github.com/apache/spark/pull/6876 |
| |
| [SPARK-7235] [SQL] Refactor the grouping sets |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-23 10:52:17 -0700 |
| Commit: 7b1450b, github.com/apache/spark/pull/5780 |
| |
| [SQL] [DOCS] updated the documentation for explode |
| lockwobr <lockwobr@gmail.com> |
| 2015-06-24 02:48:56 +0900 |
| Commit: 4f7fbef, github.com/apache/spark/pull/6943 |
| |
| [SPARK-8498] [TUNGSTEN] fix NPE in error-handling path in unsafe shuffle writer |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-06-23 09:08:11 -0700 |
| Commit: 0f92be5, github.com/apache/spark/pull/6918 |
| |
| [SPARK-8300] DataFrame hint for broadcast join. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-23 01:50:31 -0700 |
| Commit: 6ceb169, github.com/apache/spark/pull/6751 |
| |
| [SPARK-8541] [PYSPARK] test the absolute error in approx doctests |
| Scott Taylor <github@megatron.me.uk> |
| 2015-06-22 23:37:56 -0700 |
| Commit: f0dcbe8, github.com/apache/spark/pull/6942 |
| |
| [SPARK-8483] [STREAMING] Remove commons-lang3 dependency from Flume Si… |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-22 23:34:17 -0700 |
| Commit: 9b618fb, github.com/apache/spark/pull/6910 |
| |
| [SPARK-8359] [SQL] Fix incorrect decimal precision after multiplication |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-22 23:11:56 -0700 |
| Commit: 31bd306, github.com/apache/spark/pull/6814 |
| |
| [SPARK-8431] [SPARKR] Add in operator to DataFrame Column in SparkR |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-22 23:04:36 -0700 |
| Commit: d4f6335, github.com/apache/spark/pull/6941 |
| |
| [SPARK-7781] [MLLIB] gradient boosted trees.train regressor missing max bins |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-06-22 22:40:19 -0700 |
| Commit: 164fe2a, github.com/apache/spark/pull/6331 |
| |
| [SPARK-8548] [SPARKR] Remove the trailing whitespaces from the SparkR files |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-22 20:55:38 -0700 |
| Commit: 44fa7df, github.com/apache/spark/pull/6945 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-22 20:25:32 -0700 |
| Commit: c4d2343, github.com/apache/spark/pull/2849 |
| |
| [SPARK-7859] [SQL] Collect_set() behavior differences which fails the unit test under jdk8 |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-22 20:04:49 -0700 |
| Commit: 13321e6, github.com/apache/spark/pull/6402 |
| |
| [SPARK-8307] [SQL] improve timestamp from parquet |
| Davies Liu <davies@databricks.com> |
| 2015-06-22 18:03:59 -0700 |
| Commit: 6b7f2ce, github.com/apache/spark/pull/6759 |
| |
| [SPARK-7153] [SQL] support all integral type ordinal in GetArrayItem |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-22 17:37:35 -0700 |
| Commit: 860a49e, github.com/apache/spark/pull/5706 |
| |
| [HOTFIX] [TESTS] Typo mqqt -> mqtt |
| Andrew Or <andrew@databricks.com> |
| 2015-06-22 16:16:26 -0700 |
| Commit: 1dfb0f7 |
| |
| [SPARK-8492] [SQL] support binaryType in UnsafeRow |
| Davies Liu <davies@databricks.com> |
| 2015-06-22 15:22:17 -0700 |
| Commit: 96aa013, github.com/apache/spark/pull/6911 |
| |
| [SPARK-8356] [SQL] Reconcile callUDF and callUdf |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-06-22 15:06:47 -0700 |
| Commit: 50d3242, github.com/apache/spark/pull/6902 |
| |
| [SPARK-8537] [SPARKR] Add a validation rule about the curly braces in SparkR to `.lintr` |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-22 14:35:38 -0700 |
| Commit: b1f3a48, github.com/apache/spark/pull/6940 |
| |
| [SPARK-8455] [ML] Implement n-gram feature transformer |
| Feynman Liang <fliang@databricks.com> |
| 2015-06-22 14:15:35 -0700 |
| Commit: afe35f0, github.com/apache/spark/pull/6887 |
| |
| [SPARK-8532] [SQL] In Python's DataFrameWriter, save/saveAsTable/json/parquet/jdbc always override mode |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-22 13:51:23 -0700 |
| Commit: 5ab9fcf, github.com/apache/spark/pull/6937 |
| |
| [SPARK-8104] [SQL] auto alias expressions in analyzer |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-22 12:13:00 -0700 |
| Commit: da7bbb9, github.com/apache/spark/pull/6647 |
| |
| [SPARK-8511] [PYSPARK] Modify a test to remove a saved model in `regression.py` |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-22 11:53:11 -0700 |
| Commit: 5d89d9f, github.com/apache/spark/pull/6926 |
| |
| [SPARK-8482] Added M4 instances to the list. |
| Pradeep Chhetri <pradeep.chhetri89@gmail.com> |
| 2015-06-22 11:45:31 -0700 |
| Commit: ba8a453, github.com/apache/spark/pull/6899 |
| |
| [SPARK-8429] [EC2] Add ability to set additional tags |
| Stefano Parmesan <s.parmesan@gmail.com> |
| 2015-06-22 11:43:10 -0700 |
| Commit: 42a1f71, github.com/apache/spark/pull/6857 |
| |
| [SPARK-8406] [SQL] Adding UUID to output file name to avoid accidental overwriting |
| Cheng Lian <lian@databricks.com> |
| 2015-06-22 10:03:57 -0700 |
| Commit: 0818fde, github.com/apache/spark/pull/6864 |
| |
| [SPARK-7426] [MLLIB] [ML] Updated Attribute.fromStructField to allow any NumericType. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-06-21 18:25:36 -0700 |
| Commit: 47c1d56, github.com/apache/spark/pull/6540 |
| |
| [SPARK-7715] [MLLIB] [ML] [DOC] Updated MLlib programming guide for release 1.4 |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-06-21 16:25:25 -0700 |
| Commit: a189442, github.com/apache/spark/pull/6897 |
| |
| [SPARK-8508] [SQL] Ignores a test case to cleanup unnecessary testing output until #6882 is merged |
| Cheng Lian <lian@databricks.com> |
| 2015-06-21 13:20:28 -0700 |
| Commit: 83cdfd8, github.com/apache/spark/pull/6925 |
| |
| [SPARK-7604] [MLLIB] Python API for PCA and PCAModel |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-21 12:04:20 -0700 |
| Commit: 32e3cda, github.com/apache/spark/pull/6315 |
| |
| [SPARK-8379] [SQL] avoid speculative tasks write to the same file |
| jeanlyn <jeanlyn92@gmail.com> |
| 2015-06-21 00:13:40 -0700 |
| Commit: a1e3649, github.com/apache/spark/pull/6833 |
| |
| [SPARK-8301] [SQL] Improve UTF8String substring/startsWith/endsWith/contains performance |
| Tarek Auel <tarek.auel@googlemail.com>, Tarek Auel <tarek.auel@gmail.com> |
| 2015-06-20 20:03:59 -0700 |
| Commit: 41ab285, github.com/apache/spark/pull/6804 |
| |
| [SPARK-8495] [SPARKR] Add a `.lintr` file to validate the SparkR files and the `lint-r` script |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-20 16:10:14 -0700 |
| Commit: 004f573, github.com/apache/spark/pull/6922 |
| |
| [SPARK-8422] [BUILD] [PROJECT INFRA] Add a module abstraction to dev/run-tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-20 16:04:45 -0700 |
| Commit: 7a3c424, github.com/apache/spark/pull/6866 |
| |
| [SPARK-8468] [ML] Take the negative of some metrics in RegressionEvaluator to get correct cross validation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-20 13:01:59 -0700 |
| Commit: 0b89951, github.com/apache/spark/pull/6905 |
| |
| [SPARK-8127] [STREAMING] [KAFKA] KafkaRDD optimize count() take() isEmpty() |
| cody koeninger <cody@koeninger.org> |
| 2015-06-19 18:54:07 -0700 |
| Commit: 1b6fe9b, github.com/apache/spark/pull/6632 |
| |
| [HOTFIX] [SPARK-8489] Correct JIRA number in previous commit |
| Andrew Or <andrew@databricks.com> |
| 2015-06-19 17:39:26 -0700 |
| Commit: bec40e5 |
| |
| [SPARK-8498] [SQL] Add regression test for SPARK-8470 |
| Andrew Or <andrew@databricks.com> |
| 2015-06-19 17:34:09 -0700 |
| Commit: 093c348, github.com/apache/spark/pull/6909 |
| |
| [SPARK-8390] [STREAMING] [KAFKA] fix docs related to HasOffsetRanges |
| cody koeninger <cody@koeninger.org> |
| 2015-06-19 17:16:56 -0700 |
| Commit: b305e37, github.com/apache/spark/pull/6863 |
| |
| [SPARK-8420] [SQL] Fix comparision of timestamps/dates with strings |
| Michael Armbrust <michael@databricks.com> |
| 2015-06-19 16:54:51 -0700 |
| Commit: a333a72, github.com/apache/spark/pull/6888 |
| |
| [SPARK-8093] [SQL] Remove empty structs inferred from JSON documents |
| Nathan Howell <nhowell@godaddy.com> |
| 2015-06-19 16:19:28 -0700 |
| Commit: 9814b97, github.com/apache/spark/pull/6799 |
| |
| [SPARK-8452] [SPARKR] expose jobGroup API in SparkR |
| Hossein <hossein@databricks.com> |
| 2015-06-19 15:47:22 -0700 |
| Commit: 1fa29c2, github.com/apache/spark/pull/6889 |
| |
| [SPARK-4118] [MLLIB] [PYSPARK] Python bindings for StreamingKMeans |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-19 12:23:15 -0700 |
| Commit: 54976e5, github.com/apache/spark/pull/6499 |
| |
| [SPARK-8461] [SQL] fix codegen with REPL class loader |
| Davies Liu <davies@databricks.com> |
| 2015-06-19 11:40:04 -0700 |
| Commit: e41e2fd, github.com/apache/spark/pull/6898 |
| |
| [HOTFIX] Fix scala style in DFSReadWriteTest that causes tests failed |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-19 11:36:59 -0700 |
| Commit: 4a462c2, github.com/apache/spark/pull/6907 |
| |
| [SPARK-8368] [SPARK-8058] [SQL] HiveContext may override the context class loader of the current thread |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-19 11:11:58 -0700 |
| Commit: c5876e5, github.com/apache/spark/pull/6891 |
| |
| [SPARK-5836] [DOCS] [STREAMING] Clarify what may cause long-running Spark apps to preserve shuffle files |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-19 11:03:04 -0700 |
| Commit: 4be53d0, github.com/apache/spark/pull/6901 |
| |
| [SPARK-8451] [SPARK-7287] SparkSubmitSuite should check exit code |
| Andrew Or <andrew@databricks.com> |
| 2015-06-19 10:56:19 -0700 |
| Commit: 68a2dca, github.com/apache/spark/pull/6886 |
| |
| [SPARK-7180] [SPARK-8090] [SPARK-8091] Fix a number of SerializationDebugger bugs and limitations |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-06-19 10:52:30 -0700 |
| Commit: 866816e, github.com/apache/spark/pull/6625 |
| |
| Add example that reads a local file, writes to a DFS path provided by th... |
| RJ Nowling <rnowling@gmail.com> |
| 2015-06-19 10:50:44 -0700 |
| Commit: a985803, github.com/apache/spark/pull/3347 |
| |
| [SPARK-8234][SQL] misc function: md5 |
| Shilei <shilei.qian@intel.com> |
| 2015-06-19 10:49:27 -0700 |
| Commit: 0c32fc1, github.com/apache/spark/pull/6779 |
| |
| [SPARK-8476] [CORE] Setters inc/decDiskBytesSpilled in TaskMetrics should also be private. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2015-06-19 10:48:16 -0700 |
| Commit: fe08561, github.com/apache/spark/pull/6896 |
| |
| [SPARK-8430] ExternalShuffleBlockResolver of shuffle service should support UnsafeShuffleManager |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-06-19 10:47:07 -0700 |
| Commit: 9baf093, github.com/apache/spark/pull/6873 |
| |
| [SPARK-8207] [SQL] Add math function bin |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-19 10:09:31 -0700 |
| Commit: 2c59d5c, github.com/apache/spark/pull/6721 |
| |
| [SPARK-8151] [MLLIB] pipeline components should correctly implement copy |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-19 09:46:51 -0700 |
| Commit: 43c7ec6, github.com/apache/spark/pull/6622 |
| |
| [SPARK-8389] [STREAMING] [KAFKA] Example of getting offset ranges out o… |
| cody koeninger <cody@koeninger.org> |
| 2015-06-19 14:51:19 +0200 |
| Commit: 47af7c1, github.com/apache/spark/pull/6846 |
| |
| [SPARK-7265] Improving documentation for Spark SQL Hive support |
| Jihong MA <linlin200605@gmail.com> |
| 2015-06-19 14:05:11 +0200 |
| Commit: ebd363a, github.com/apache/spark/pull/5933 |
| |
| [SPARK-7913] [CORE] Make AppendOnlyMap use the same growth strategy of OpenHashSet and consistent exception message |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-19 11:58:07 +0200 |
| Commit: 93360dc, github.com/apache/spark/pull/6879 |
| |
| [SPARK-8387] [FOLLOWUP] [WEBUI] Update driver log URL to show only 4096 bytes |
| Carson Wang <carson.wang@intel.com> |
| 2015-06-19 09:57:12 +0200 |
| Commit: 54557f3, github.com/apache/spark/pull/6878 |
| |
| [SPARK-8339] [PYSPARK] integer division for python 3 |
| Kevin Conor <kevin@discoverybayconsulting.com> |
| 2015-06-19 00:12:20 -0700 |
| Commit: fdf63f1, github.com/apache/spark/pull/6794 |
| |
| [SPARK-8444] [STREAMING] Adding Python streaming example for queueStream |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-06-19 00:07:53 -0700 |
| Commit: a2016b4, github.com/apache/spark/pull/6884 |
| |
| [SPARK-8348][SQL] Add in operator to DataFrame Column |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-06-18 23:13:05 -0700 |
| Commit: 754929b, github.com/apache/spark/pull/6824 |
| |
| [SPARK-8458] [SQL] Don't strip scheme part of output path when writing ORC files |
| Cheng Lian <lian@databricks.com> |
| 2015-06-18 22:01:52 -0700 |
| Commit: a71cbbd, github.com/apache/spark/pull/6892 |
| |
| [SPARK-8080] [STREAMING] Receiver.store with Iterator does not give correct count at Spark UI |
| Dibyendu Bhattacharya <dibyendu.bhattacharya1@pearson.com>, U-PEROOT\UBHATD1 <UBHATD1@PIN-L-PI046.PEROOT.com> |
| 2015-06-18 19:58:47 -0700 |
| Commit: 3eaed87, github.com/apache/spark/pull/6707 |
| |
| [SPARK-8462] [DOCS] Documentation fixes for Spark SQL |
| Lars Francke <lars.francke@gmail.com> |
| 2015-06-18 19:40:32 -0700 |
| Commit: 4ce3bab, github.com/apache/spark/pull/6890 |
| |
| [SPARK-8135] Don't load defaults when reconstituting Hadoop Configurations |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-06-18 19:36:05 -0700 |
| Commit: 43f50de, github.com/apache/spark/pull/6679 |
| |
| [SPARK-8218][SQL] Binary log math function update. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-18 18:41:15 -0700 |
| Commit: dc41313, github.com/apache/spark/pull/6871 |
| |
| [SPARK-8446] [SQL] Add helper functions for testing SparkPlan physical operators |
| Josh Rosen <joshrosen@databricks.com>, Josh Rosen <rosenville@gmail.com>, Michael Armbrust <michael@databricks.com> |
| 2015-06-18 16:45:14 -0700 |
| Commit: 207a98c, github.com/apache/spark/pull/6885 |
| |
| [SPARK-8376] [DOCS] Add common lang3 to the Spark Flume Sink doc |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-18 16:00:27 -0700 |
| Commit: 24e5379, github.com/apache/spark/pull/6829 |
| |
| [SPARK-8353] [DOCS] Show anchor links when hovering over documentation headers |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-18 15:10:09 -0700 |
| Commit: 44c931f, github.com/apache/spark/pull/6808 |
| |
| [SPARK-8202] [PYSPARK] fix infinite loop during external sort in PySpark |
| Davies Liu <davies@databricks.com> |
| 2015-06-18 13:45:58 -0700 |
| Commit: 9b20027, github.com/apache/spark/pull/6714 |
| |
| [SPARK-8363][SQL] Move sqrt to math and extend UnaryMathExpression |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-18 13:00:31 -0700 |
| Commit: 3164112, github.com/apache/spark/pull/6823 |
| |
| [SPARK-8320] [STREAMING] Add example in streaming programming guide that shows union of multiple input streams |
| Neelesh Srinivas Salian <nsalian@cloudera.com> |
| 2015-06-18 09:44:36 -0700 |
| Commit: ddc5baf, github.com/apache/spark/pull/6862 |
| |
| [SPARK-8283][SQL] Resolve udf_struct test failure in HiveCompatibilitySuite |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-06-17 23:46:57 -0700 |
| Commit: e86fbdb, github.com/apache/spark/pull/6828 |
| |
| [SPARK-8218][SQL] Add binary log math function |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-17 23:31:30 -0700 |
| Commit: fee3438, github.com/apache/spark/pull/6725 |
| |
| [SPARK-7961][SQL]Refactor SQLConf to display better error message |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-17 23:22:54 -0700 |
| Commit: 78a430e, github.com/apache/spark/pull/6747 |
| |
| [SPARK-8381][SQL]reuse typeConvert when convert Seq[Row] to catalyst type |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-06-17 22:52:47 -0700 |
| Commit: 9db73ec, github.com/apache/spark/pull/6831 |
| |
| [SPARK-8095] Resolve dependencies of --packages in local ivy cache |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-06-17 22:33:37 -0700 |
| Commit: 3b61077, github.com/apache/spark/pull/6788 |
| |
| [SPARK-8392] RDDOperationGraph: getting cached nodes is slow |
| xutingjun <xutingjun@huawei.com> |
| 2015-06-17 22:31:01 -0700 |
| Commit: e2cdb05, github.com/apache/spark/pull/6839 |
| |
| [SPARK-7605] [MLLIB] [PYSPARK] Python API for ElementwiseProduct |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-17 22:08:38 -0700 |
| Commit: 22732e1, github.com/apache/spark/pull/6346 |
| |
| [SPARK-8373] [PYSPARK] Remove PythonRDD.emptyRDD |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-17 22:07:16 -0700 |
| Commit: 4817ccd, github.com/apache/spark/pull/6867 |
| |
| [HOTFIX] [PROJECT-INFRA] Fix bug in dev/run-tests for MLlib-only PRs |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-17 19:02:25 -0700 |
| Commit: 165f52f |
| |
| [SPARK-8397] [SQL] Allow custom configuration for TestHive |
| Punya Biswal <pbiswal@palantir.com> |
| 2015-06-17 15:29:39 -0700 |
| Commit: d1069cb, github.com/apache/spark/pull/6844 |
| |
| [SPARK-8404] [STREAMING] [TESTS] Use thread-safe collections to make the tests more reliable |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-17 15:00:03 -0700 |
| Commit: a06d9c8, github.com/apache/spark/pull/6852 |
| |
| [SPARK-8306] [SQL] AddJar command needs to set the new class loader to the HiveConf inside executionHive.state. |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-17 14:52:43 -0700 |
| Commit: 302556f, github.com/apache/spark/pull/6758 |
| |
| [SPARK-7067] [SQL] fix bug when use complex nested fields in ORDER BY |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-17 14:46:00 -0700 |
| Commit: 7f05b1f, github.com/apache/spark/pull/5659 |
| |
| [SPARK-7913] [CORE] Increase the maximum capacity of PartitionedPairBuffer, PartitionedSerializedPairBuffer and AppendOnlyMap |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-17 14:03:15 -0700 |
| Commit: a411a40, github.com/apache/spark/pull/6456 |
| |
| [SPARK-8373] [PYSPARK] Add emptyRDD to pyspark and fix the issue when calling sum on an empty RDD |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-17 13:59:39 -0700 |
| Commit: 0fc4b96, github.com/apache/spark/pull/6826 |
| |
| [SPARK-8372] History server shows incorrect information for application not started |
| Carson Wang <carson.wang@intel.com> |
| 2015-06-17 13:41:36 -0700 |
| Commit: 2837e06, github.com/apache/spark/pull/6827 |
| |
| [SPARK-8161] Set externalBlockStoreInitialized to be true, after ExternalBlockStore is initialized |
| Mingfei <mingfei.shi@intel.com> |
| 2015-06-17 13:40:07 -0700 |
| Commit: 7ad8c5d, github.com/apache/spark/pull/6702 |
| |
| [SPARK-8010] [SQL] Promote types to StringType as implicit conversion in non-binary expression of HiveTypeCoercion |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-06-17 13:37:59 -0700 |
| Commit: 98ee351, github.com/apache/spark/pull/6551 |
| |
| [SPARK-6782] add sbt-revolver plugin |
| Imran Rashid <irashid@cloudera.com> |
| 2015-06-17 13:34:26 -0700 |
| Commit: a465944, github.com/apache/spark/pull/5426 |
| |
| [SPARK-8395] [DOCS] start-slave.sh docs incorrect |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-17 13:31:10 -0700 |
| Commit: f005be0, github.com/apache/spark/pull/6855 |
| |
| [SPARK-8077] [SQL] Optimization for TreeNodes with large numbers of children |
| Michael Davies <Michael.BellDavies@gmail.com> |
| 2015-06-17 12:56:55 -0700 |
| Commit: 0c1b2df, github.com/apache/spark/pull/6673 |
| |
| [SPARK-7017] [BUILD] [PROJECT INFRA] Refactor dev/run-tests into Python |
| Brennon York <brennon.york@capitalone.com> |
| 2015-06-17 12:00:34 -0700 |
| Commit: 50a0496, github.com/apache/spark/pull/5694 |
| |
| [SPARK-6390] [SQL] [MLlib] Port MatrixUDT to PySpark |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-17 11:10:16 -0700 |
| Commit: 6765ef9, github.com/apache/spark/pull/6354 |
| |
| [SPARK-7199] [SQL] Add date and timestamp support to UnsafeRow |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-17 09:00:37 -0700 |
| Commit: 104f30c, github.com/apache/spark/pull/5984 |
| |
| [SPARK-8309] [CORE] Support for more than 12M items in OpenHashMap |
| Vyacheslav Baranov <slavik.baranov@gmail.com> |
| 2015-06-17 09:42:29 +0100 |
| Commit: c13da20, github.com/apache/spark/pull/6763 |
| |
| Closes #6850. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-17 00:28:40 -0700 |
| Commit: e3de14d |
| |
| [SPARK-8220][SQL]Add positive identify function |
| dragonli <lisurprise@gmail.com>, zhichao.li <zhichao.li@intel.com> |
| 2015-06-16 23:44:10 -0700 |
| Commit: bedff7d, github.com/apache/spark/pull/6838 |
| |
| [SPARK-8156] [SQL] create table to specific database by 'use dbname' |
| baishuo <vc_java@hotmail.com> |
| 2015-06-16 16:40:02 -0700 |
| Commit: 0b8c8fd, github.com/apache/spark/pull/6695 |
| |
| [SPARK-7916] [MLLIB] MLlib Python doc parity check for classification and regression |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-06-16 14:30:30 -0700 |
| Commit: ca99875, github.com/apache/spark/pull/6460 |
| |
| [SPARK-8126] [BUILD] Make sure temp dir exists when running tests. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-16 21:10:18 +0100 |
| Commit: cebf241, github.com/apache/spark/pull/6805 |
| |
| [SQL] [DOC] improved a comment |
| Radek Ostrowski <dest.hawaii@gmail.com>, radek <radek@radeks-MacBook-Pro-2.local> |
| 2015-06-16 21:04:26 +0100 |
| Commit: 4bd10fd, github.com/apache/spark/pull/6332 |
| |
| [SPARK-DOCS] [SPARK-SQL] Update sql-programming-guide.md |
| Moussa Taifi <moutai10@gmail.com> |
| 2015-06-16 20:59:22 +0100 |
| Commit: dc455b8, github.com/apache/spark/pull/6847 |
| |
| [SPARK-8387] [WEBUI] Only show 4096 bytes content for executor log instead of show all |
| hushan[胡珊] <hushan@xiaomi.com> |
| 2015-06-16 20:48:33 +0100 |
| Commit: 29c5025, github.com/apache/spark/pull/6834 |
| |
| [SPARK-8129] [CORE] [Sec] Pass auth secrets to executors via env variables |
| Kan Zhang <kzhang@apache.org> |
| 2015-06-16 08:18:26 +0200 |
| Commit: 658814c, github.com/apache/spark/pull/6774 |
| |
| [SPARK-8367] [STREAMING] Add a limit for 'spark.streaming.blockInterval` since a data loss bug. |
| huangzhaowei <carlmartinmax@gmail.com>, huangzhaowei <SaintBacchus@users.noreply.github.com> |
| 2015-06-16 08:16:09 +0200 |
| Commit: ccf010f, github.com/apache/spark/pull/6818 |
| |
| [SPARK-7184] [SQL] enable codegen by default |
| Davies Liu <davies@databricks.com> |
| 2015-06-15 23:03:14 -0700 |
| Commit: bc76a0f, github.com/apache/spark/pull/6726 |
| |
| SPARK-8336 Fix NullPointerException with functions.rand() |
| tedyu <yuzhihong@gmail.com> |
| 2015-06-15 17:00:38 -0700 |
| Commit: 1a62d61, github.com/apache/spark/pull/6793 |
| |
| [SPARK-6583] [SQL] Support aggregate functions in ORDER BY |
| Yadong Qi <qiyadong2010@gmail.com>, Michael Armbrust <michael@databricks.com> |
| 2015-06-15 12:01:52 -0700 |
| Commit: 6ae21a9, github.com/apache/spark/pull/on |
| |
| [SPARK-8350] [R] Log R unit test output to "unit-tests.log" |
| andrewor14 <andrew@databricks.com>, Andrew Or <andrew@databricks.com> |
| 2015-06-15 08:16:22 -0700 |
| Commit: 56d4e8a, github.com/apache/spark/pull/6807 |
| |
| [SPARK-8316] Upgrade to Maven 3.3.3 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-06-15 08:18:01 +0100 |
| Commit: 4c5889e, github.com/apache/spark/pull/6770 |
| |
| [SPARK-8065] [SQL] Add support for Hive 0.14 metastores |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-14 11:49:16 -0700 |
| Commit: 4eb48ed, github.com/apache/spark/pull/6627 |
| |
| fix read/write mixup |
| Peter Hoffmann <ph@peter-hoffmann.com> |
| 2015-06-14 11:41:16 -0700 |
| Commit: f3f2a43, github.com/apache/spark/pull/6815 |
| |
| [SPARK-8362] [SQL] Add unit tests for +, -, *, /, % |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-14 11:23:23 -0700 |
| Commit: 53c16b9, github.com/apache/spark/pull/6813 |
| |
| [SPARK-8358] [SQL] Wait for child resolution when resolving generators |
| Michael Armbrust <michael@databricks.com> |
| 2015-06-14 11:21:42 -0700 |
| Commit: 9073a42, github.com/apache/spark/pull/6811 |
| |
| [SPARK-8354] [SQL] Fix off-by-factor-of-8 error when allocating scratch space in UnsafeFixedWidthAggregationMap |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-14 09:34:35 -0700 |
| Commit: ea7fd2f, github.com/apache/spark/pull/6809 |
| |
| [SPARK-8342][SQL] Fix Decimal setOrNull |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-13 22:42:28 -0700 |
| Commit: cb7ada1, github.com/apache/spark/pull/6797 |
| |
| [Spark-8343] [Streaming] [Docs] Improve Spark Streaming Guides. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-06-13 21:22:46 -0700 |
| Commit: 35d1267, github.com/apache/spark/pull/6801 |
| |
| [SPARK-8349] [SQL] Use expression constructors (rather than apply) in FunctionRegistry |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-13 18:22:17 -0700 |
| Commit: 2d71ba4, github.com/apache/spark/pull/6806 |
| |
| [SPARK-8347][SQL] Add unit tests for abs. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-13 17:10:13 -0700 |
| Commit: a138953, github.com/apache/spark/pull/6803 |
| |
| [SPARK-8052] [SQL] Use java.math.BigDecimal for casting String to Decimal instead of using toDouble |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-13 16:39:52 -0700 |
| Commit: ddec452, github.com/apache/spark/pull/6645 |
| |
| [SPARK-8319] [CORE] [SQL] Update logic related to key orderings in shuffle dependencies |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-13 16:14:24 -0700 |
| Commit: af31335, github.com/apache/spark/pull/6773 |
| |
| [SPARK-8346] [SQL] Use InternalRow instread of catalyst.InternalRow |
| Davies Liu <davies@databricks.com> |
| 2015-06-13 16:13:26 -0700 |
| Commit: ce1041c, github.com/apache/spark/pull/6802 |
| |
| [SPARK-7897] Improbe type for jdbc/"unsigned bigint" |
| Rene Treffer <treffer@measite.de> |
| 2015-06-13 11:58:22 -0700 |
| Commit: d986fb9, github.com/apache/spark/pull/6789 |
| |
| [SPARK-8329][SQL] Allow _ in DataSource options |
| Michael Armbrust <michael@databricks.com> |
| 2015-06-12 23:11:16 -0700 |
| Commit: 4aed66f, github.com/apache/spark/pull/6786 |
| |
| [SPARK-7186] [SQL] Decouple internal Row from external Row |
| Davies Liu <davies@databricks.com> |
| 2015-06-12 23:06:31 -0700 |
| Commit: d46f8e5, github.com/apache/spark/pull/6792 |
| |
| [SPARK-8314][MLlib] improvement in performance of MLUtils.appendBias |
| Roger Menezes <rmenezes@netflix.com> |
| 2015-06-12 18:29:58 -0700 |
| Commit: 6e9c3ff, github.com/apache/spark/pull/6768 |
| |
| [SPARK-7284] [STREAMING] Updated streaming documentation |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-06-12 15:22:59 -0700 |
| Commit: e9471d3, github.com/apache/spark/pull/6781 |
| |
| [SPARK-8330] DAG visualization: trim whitespace from input |
| Andrew Or <andrew@databricks.com> |
| 2015-06-12 11:14:55 -0700 |
| Commit: 8860405, github.com/apache/spark/pull/6787 |
| |
| [SPARK-7993] [SQL] Improved DataFrame.show() output |
| akhilthatipamula <130050068@iitb.ac.in>, zsxwing <zsxwing@gmail.com> |
| 2015-06-12 10:40:28 -0700 |
| Commit: 19834fa, github.com/apache/spark/pull/6633 |
| |
| [SPARK-8322] [EC2] Added spark 1.4.0 into the VALID_SPARK_VERSIONS and… |
| Mark Smith <mark.smith@bronto.com> |
| 2015-06-12 08:19:03 -0700 |
| Commit: 71cc17b, github.com/apache/spark/pull/6776 |
| |
| [SQL] [MINOR] correct semanticEquals logic |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-12 16:38:28 +0800 |
| Commit: c19c785, github.com/apache/spark/pull/6261 |
| |
| [SPARK-6566] [SQL] Related changes for newer parquet version |
| Yash Datta <Yash.Datta@guavus.com> |
| 2015-06-12 13:44:09 +0800 |
| Commit: e428b3a, github.com/apache/spark/pull/5889 |
| |
| [SPARK-7862] [SQL] Fix the deadlock in script transformation for stderr |
| zhichao.li <zhichao.li@intel.com> |
| 2015-06-11 22:28:28 -0700 |
| Commit: 2dd7f93, github.com/apache/spark/pull/6404 |
| |
| [SPARK-8317] [SQL] Do not push sort into shuffle in Exchange operator |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-11 22:15:15 -0700 |
| Commit: b9d177c, github.com/apache/spark/pull/6772 |
| |
| [SPARK-7158] [SQL] Fix bug of cached data cannot be used in collect() after cache() |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-11 18:01:32 -0700 |
| Commit: 767cc94, github.com/apache/spark/pull/5714 |
| |
| [SQL] Miscellaneous SQL/DF expression changes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-11 17:06:21 -0700 |
| Commit: 337c16d, github.com/apache/spark/pull/6754 |
| |
| [SPARK-7824] [SQL] Collapse operator reordering and constant folding into a single batch. |
| Zhongshuai Pei <799203320@qq.com>, DoingDone9 <799203320@qq.com> |
| 2015-06-11 17:01:02 -0700 |
| Commit: 7914c72, github.com/apache/spark/pull/6351 |
| |
| [SPARK-8286] Rewrite UTF8String in Java and move it into unsafe package. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-11 16:07:15 -0700 |
| Commit: 7d669a5, github.com/apache/spark/pull/6738 |
| |
| [SPARK-6511] [docs] Fix example command in hadoop-provided docs. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-11 15:29:03 -0700 |
| Commit: 9cbdf31, github.com/apache/spark/pull/6766 |
| |
| [SPARK-7444] [TESTS] Eliminate noisy css warn/error logs for UISeleniumSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-11 14:21:49 -0700 |
| Commit: 95690a1, github.com/apache/spark/pull/5983 |
| |
| [SPARK-7915] [SQL] Support specifying the column list for target table in CTAS |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-11 14:03:08 -0700 |
| Commit: 040f223, github.com/apache/spark/pull/6458 |
| |
| [SPARK-8310] [EC2] Updates the master branch EC2 versions |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-11 13:18:42 -0700 |
| Commit: c8d551d, github.com/apache/spark/pull/6764 |
| |
| [SPARK-8305] [SPARK-8190] [SQL] improve codegen |
| Davies Liu <davies@databricks.com> |
| 2015-06-11 12:57:33 -0700 |
| Commit: 1191c3e, github.com/apache/spark/pull/6755 |
| |
| [SPARK-6411] [SQL] [PySpark] support date/datetime with timezone in Python |
| Davies Liu <davies@databricks.com> |
| 2015-06-11 01:00:41 -0700 |
| Commit: 424b007, github.com/apache/spark/pull/6250 |
| |
| [SPARK-8289] Specify stack size for consistency with Java tests - resolves test failures |
| Adam Roberts <aroberts@uk.ibm.com>, a-roberts <aroberts@uk.ibm.com> |
| 2015-06-11 08:40:46 +0100 |
| Commit: 6b68366, github.com/apache/spark/pull/6727 |
| |
| [HOTFIX] Fixing errors in name mappings |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-10 22:56:36 -0700 |
| Commit: e84545f |
| |
| [HOTFIX] Adding more contributor name bindings |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-10 21:13:47 -0700 |
| Commit: a777eb0 |
| |
| [SPARK-8217] [SQL] math function log2 |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-06-10 20:22:32 -0700 |
| Commit: 2758ff0, github.com/apache/spark/pull/6718 |
| |
| [SPARK-8248][SQL] string function: length |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-06-10 19:55:10 -0700 |
| Commit: 9fe3adc, github.com/apache/spark/pull/6724 |
| |
| [SPARK-8164] transformExpressions should support nested expression sequence |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-10 18:22:47 -0700 |
| Commit: 4e42842, github.com/apache/spark/pull/6706 |
| |
| [SPARK-8285] [SQL] CombineSum should be calculated as unlimited decimal first |
| navis.ryu <navis@apache.org> |
| 2015-06-10 18:19:12 -0700 |
| Commit: 6a47114, github.com/apache/spark/pull/6736 |
| |
| [SPARK-8189] [SQL] use Long for TimestampType in SQL |
| Davies Liu <davies@databricks.com> |
| 2015-06-10 16:55:39 -0700 |
| Commit: 37719e0, github.com/apache/spark/pull/6733 |
| |
| [SPARK-8200] [MLLIB] Check for empty RDDs in StreamingLinearAlgorithm |
| Paavo <pparkkin@gmail.com> |
| 2015-06-10 23:17:42 +0100 |
| Commit: b928f54, github.com/apache/spark/pull/6713 |
| |
| [SPARK-2774] Set preferred locations for reduce tasks |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-10 15:03:40 -0700 |
| Commit: 96a7c88, github.com/apache/spark/pull/6652 |
| |
| [SPARK-8273] Driver hangs up when yarn shutdown in client mode |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-06-10 13:34:19 -0700 |
| Commit: 5014d0e, github.com/apache/spark/pull/6717 |
| |
| [SPARK-8290] spark class command builder need read SPARK_JAVA_OPTS and SPARK_DRIVER_MEMORY properly |
| WangTaoTheTonic <wangtao111@huawei.com>, Tao Wang <wangtao111@huawei.com> |
| 2015-06-10 13:30:16 -0700 |
| Commit: cb871c4, github.com/apache/spark/pull/6741 |
| |
| [SPARK-7261] [CORE] Change default log level to WARN in the REPL |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-10 13:25:59 -0700 |
| Commit: 80043e9, github.com/apache/spark/pull/6734 |
| |
| [SPARK-7527] [CORE] Fix createNullValue to return the correct null values and REPL mode detection |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-10 13:22:52 -0700 |
| Commit: e90c9d9, github.com/apache/spark/pull/6735 |
| |
| [SPARK-7756] CORE RDDOperationScope fix for IBM Java |
| Adam Roberts <aroberts@uk.ibm.com>, a-roberts <aroberts@uk.ibm.com> |
| 2015-06-10 13:21:01 -0700 |
| Commit: 19e30b4, github.com/apache/spark/pull/6740 |
| |
| [SPARK-8282] [SPARKR] Make number of threads used in RBackend configurable |
| Hossein <hossein@databricks.com> |
| 2015-06-10 13:18:48 -0700 |
| Commit: 30ebf1a, github.com/apache/spark/pull/6730 |
| |
| [SPARK-5479] [YARN] Handle --py-files correctly in YARN. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-10 13:17:29 -0700 |
| Commit: 3811290, github.com/apache/spark/pull/6360 |
| |
| [SQL] [MINOR] Fixes a minor Java example error in SQL programming guide |
| Cheng Lian <lian@databricks.com> |
| 2015-06-10 11:48:14 -0700 |
| Commit: 8f7308f, github.com/apache/spark/pull/6749 |
| |
| [SPARK-7996] Deprecate the developer api SparkEnv.actorSystem |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-06-10 11:21:12 -0700 |
| Commit: 2b550a5, github.com/apache/spark/pull/6731 |
| |
| [SPARK-8215] [SPARK-8212] [SQL] add leaf math expression for e and pi |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-06-10 09:45:45 -0700 |
| Commit: c6ba7cc, github.com/apache/spark/pull/6716 |
| |
| [SPARK-7886] Added unit test for HAVING aggregate pushdown. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-10 18:58:01 +0800 |
| Commit: e90035e, github.com/apache/spark/pull/6739 |
| |
| [SPARK-7886] Use FunctionRegistry for built-in expressions in HiveContext. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-10 00:36:16 -0700 |
| Commit: 57c60c5, github.com/apache/spark/pull/6712 |
| |
| [SPARK-7792] [SQL] HiveContext registerTempTable not thread safe |
| navis.ryu <navis@apache.org> |
| 2015-06-09 19:33:00 -0700 |
| Commit: 778f3ca, github.com/apache/spark/pull/6699 |
| |
| [SPARK-6511] [DOCUMENTATION] Explain how to use Hadoop provided builds |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-09 16:14:21 -0700 |
| Commit: 6e4fb0c, github.com/apache/spark/pull/6729 |
| |
| [MINOR] [UI] DAG visualization: trim whitespace from input |
| Andrew Or <andrew@databricks.com> |
| 2015-06-09 15:44:02 -0700 |
| Commit: 0d5892d, github.com/apache/spark/pull/6732 |
| |
| [SPARK-8274] [DOCUMENTATION-MLLIB] Fix wrong URLs in MLlib Frequent Pattern Mining Documentation |
| FavioVazquez <favio.vazquezp@gmail.com> |
| 2015-06-09 15:02:18 +0100 |
| Commit: 490d5a7, github.com/apache/spark/pull/6722 |
| |
| [SPARK-8140] [MLLIB] Remove construct to get weights in StreamingLinearAlgorithm |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-09 15:00:35 +0100 |
| Commit: 6c1723a, github.com/apache/spark/pull/6720 |
| |
| [STREAMING] [DOC] Remove duplicated description about WAL |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-06-09 12:19:01 +0100 |
| Commit: e6fb6ce, github.com/apache/spark/pull/6719 |
| |
| [SPARK-7886] Add built-in expressions to FunctionRegistry. |
| Reynold Xin <rxin@databricks.com>, Santiago M. Mola <santi@mola.io> |
| 2015-06-09 16:24:38 +0800 |
| Commit: 1b49999, github.com/apache/spark/pull/6710 |
| |
| [SPARK-8101] [CORE] Upgrade netty to avoid memory leak accord to netty #3837 issues |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-09 08:00:04 +0100 |
| Commit: 0902a11, github.com/apache/spark/pull/6701 |
| |
| [SPARK-7990][SQL] Add methods to facilitate equi-join on multiple joining keys |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-08 23:27:05 -0700 |
| Commit: 7658eb2, github.com/apache/spark/pull/6616 |
| |
| [SPARK-6820] [SPARKR] Convert NAs to null type in SparkR DataFrames |
| hqzizania <qian.huang@intel.com> |
| 2015-06-08 21:40:12 -0700 |
| Commit: a5c52c1, github.com/apache/spark/pull/6190 |
| |
| [SPARK-8168] [MLLIB] Add Python friendly constructor to PipelineModel |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-08 21:33:47 -0700 |
| Commit: 82870d5, github.com/apache/spark/pull/6709 |
| |
| [SPARK-8162] [HOTFIX] Fix NPE in spark-shell |
| Andrew Or <andrew@databricks.com> |
| 2015-06-08 18:09:21 -0700 |
| Commit: f3eec92, github.com/apache/spark/pull/6711 |
| |
| [SPARK-8148] Do not use FloatType in partition column inference. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-08 13:15:44 -0700 |
| Commit: 5185389, github.com/apache/spark/pull/6692 |
| |
| [SQL][minor] remove duplicated cases in `DecimalPrecision` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-08 11:52:02 -0700 |
| Commit: fe7669d, github.com/apache/spark/pull/6698 |
| |
| [SPARK-8121] [SQL] Fixes InsertIntoHadoopFsRelation job initialization for Hadoop 1.x |
| Cheng Lian <lian@databricks.com> |
| 2015-06-08 11:34:18 -0700 |
| Commit: bbdfc0a, github.com/apache/spark/pull/6669 |
| |
| [SPARK-8158] [SQL] several fix for HiveShim |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-06-08 11:06:27 -0700 |
| Commit: ed5c2dc, github.com/apache/spark/pull/6700 |
| |
| [MINOR] change new Exception to IllegalArgumentException |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-06-08 09:41:06 -0700 |
| Commit: 49f19b9, github.com/apache/spark/pull/6434 |
| |
| [SMALL FIX] Return null if catch EOFException |
| Mingfei <mingfei.shi@intel.com> |
| 2015-06-08 16:23:43 +0100 |
| Commit: 149d1b2, github.com/apache/spark/pull/6703 |
| |
| [SPARK-8140] [MLLIB] Remove empty model check in StreamingLinearAlgorithm |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-08 15:45:12 +0100 |
| Commit: e3e9c70, github.com/apache/spark/pull/6684 |
| |
| [SPARK-8126] [BUILD] Use custom temp directory during build. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-08 15:37:28 +0100 |
| Commit: a1d9e5c, github.com/apache/spark/pull/6674 |
| |
| [SPARK-7939] [SQL] Add conf to enable/disable partition column type inference |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-08 17:50:38 +0800 |
| Commit: 03ef6be, github.com/apache/spark/pull/6503 |
| |
| [SPARK-7705] [YARN] Cleanup of .sparkStaging directory fails if application is killed |
| linweizhong <linweizhong@huawei.com> |
| 2015-06-08 09:34:16 +0100 |
| Commit: eacd4a9, github.com/apache/spark/pull/6409 |
| |
| [SPARK-4761] [DOC] [SQL] kryo default setting in SQL Thrift server |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-06-08 01:07:50 -0700 |
| Commit: 10fc2f6, github.com/apache/spark/pull/6639 |
| |
| [SPARK-8154][SQL] Remove Term/Code type aliases in code generation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-07 23:16:19 -0700 |
| Commit: 72ba0fc, github.com/apache/spark/pull/6694 |
| |
| [SPARK-8149][SQL] Break ExpressionEvaluationSuite down to multiple files |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-07 18:45:24 -0700 |
| Commit: f74be74, github.com/apache/spark/pull/6693 |
| |
| [SPARK-8117] [SQL] Push codegen implementation into each Expression |
| Davies Liu <davies@databricks.com>, Reynold Xin <rxin@databricks.com> |
| 2015-06-07 14:11:20 -0700 |
| Commit: 5e7b6b6, github.com/apache/spark/pull/6690 |
| |
| [SPARK-2808] [STREAMING] [KAFKA] cleanup tests from |
| cody koeninger <cody@koeninger.org> |
| 2015-06-07 21:42:45 +0100 |
| Commit: b127ff8, github.com/apache/spark/pull/5921 |
| |
| [SPARK-7733] [CORE] [BUILD] Update build, code to use Java 7 for 1.5.0+ |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-07 20:18:13 +0100 |
| Commit: e84815d, github.com/apache/spark/pull/6265 |
| |
| [SPARK-7952][SQL] use internal Decimal instead of java.math.BigDecimal |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-07 11:07:19 -0700 |
| Commit: db81b9d, github.com/apache/spark/pull/6574 |
| |
| [SPARK-8004][SQL] Quote identifier in JDBC data source. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-07 10:52:02 -0700 |
| Commit: d6d601a, github.com/apache/spark/pull/6689 |
| |
| [DOC] [TYPO] Fix typo in standalone deploy scripts description |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-06-07 15:30:37 +0100 |
| Commit: 835f138, github.com/apache/spark/pull/6691 |
| |
| [SPARK-7042] [BUILD] use the standard akka artifacts with hadoop-2.x |
| Konstantin Shaposhnikov <Konstantin.Shaposhnikov@sc.com> |
| 2015-06-07 13:41:00 +0100 |
| Commit: ca8dafc, github.com/apache/spark/pull/6492 |
| |
| [SPARK-8118] [SQL] Mutes noisy Parquet log output reappeared after upgrading Parquet to 1.7.0 |
| Cheng Lian <lian@databricks.com> |
| 2015-06-07 16:59:55 +0800 |
| Commit: 8c321d6, github.com/apache/spark/pull/6670 |
| |
| [SPARK-8146] DataFrame Python API: Alias replace in df.na |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-07 01:21:02 -0700 |
| Commit: 0ac4708, github.com/apache/spark/pull/6688 |
| |
| [SPARK-8141] [SQL] Precompute datatypes for partition columns and reuse it |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-07 15:33:48 +0800 |
| Commit: 26d07f1, github.com/apache/spark/pull/6687 |
| |
| [SPARK-8145] [WEBUI] Trigger a double click on the span to show full job description. |
| 979969786 <q79969786@gmail.com> |
| 2015-06-06 23:15:27 -0700 |
| Commit: 081db94, github.com/apache/spark/pull/6646 |
| |
| [SPARK-8004][SQL] Enclose column names by JDBC Dialect |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-06-06 22:59:31 -0700 |
| Commit: 901a552, github.com/apache/spark/pull/6577 |
| |
| [SPARK-7955] [CORE] Ensure executors with cached RDD blocks are not re... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-06 21:13:26 -0700 |
| Commit: 3285a51, github.com/apache/spark/pull/6508 |
| |
| [SPARK-8136] [YARN] Fix flakiness in YarnClusterSuite. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-06 21:09:56 -0700 |
| Commit: ed2cc3e, github.com/apache/spark/pull/6680 |
| |
| [SPARK-7169] [CORE] Allow metrics system to be configured through SparkConf. |
| Marcelo Vanzin <vanzin@cloudera.com>, Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-06-06 21:08:36 -0700 |
| Commit: 18c4fce, github.com/apache/spark/pull/6560 |
| |
| [SPARK-7639] [PYSPARK] [MLLIB] Python API for KernelDensity |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-06 14:52:14 -0700 |
| Commit: 5aa804f, github.com/apache/spark/pull/6387 |
| |
| [SPARK-8079] [SQL] Makes InsertIntoHadoopFsRelation job/task abortion more robust |
| Cheng Lian <lian@databricks.com> |
| 2015-06-06 17:23:12 +0800 |
| Commit: 16fc496, github.com/apache/spark/pull/6612 |
| |
| [SPARK-6973] remove skipped stage ID from completed set on the allJobsPage |
| Xu Tingjun <xutingjun@huawei.com>, Xutingjun <xutingjun@huawei.com>, meiyoula <1039320815@qq.com> |
| 2015-06-06 09:53:53 +0100 |
| Commit: a8077e5, github.com/apache/spark/pull/5550 |
| |
| [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ round 3. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-05 23:15:10 -0700 |
| Commit: a71be0a, github.com/apache/spark/pull/6677 |
| |
| [SPARK-6964] [SQL] Support Cancellation in the Thrift Server |
| Dong Wang <dong@databricks.com> |
| 2015-06-05 17:41:12 -0700 |
| Commit: eb19d3f, github.com/apache/spark/pull/6207 |
| |
| [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ cont'd. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-05 13:57:21 -0700 |
| Commit: 6ebe419, github.com/apache/spark/pull/6667 |
| |
| [SPARK-7991] [PySpark] Adding support for passing lists to describe. |
| amey <amey@skytree.net> |
| 2015-06-05 13:49:33 -0700 |
| Commit: 356a4a9, github.com/apache/spark/pull/6655 |
| |
| [SPARK-7747] [SQL] [DOCS] spark.sql.planner.externalSort |
| Luca Martinetti <luca@luca.io> |
| 2015-06-05 13:40:11 -0700 |
| Commit: 4060526, github.com/apache/spark/pull/6272 |
| |
| [SPARK-8112] [STREAMING] Fix the negative event count issue |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-05 12:46:02 -0700 |
| Commit: 4f16d3f, github.com/apache/spark/pull/6659 |
| |
| [SPARK-7699] [CORE] Lazy start the scheduler for dynamic allocation |
| jerryshao <saisai.shao@intel.com> |
| 2015-06-05 12:28:37 -0700 |
| Commit: 3f80bc8, github.com/apache/spark/pull/6430 |
| |
| [SPARK-8099] set executor cores into system in yarn-cluster mode |
| Xutingjun <xutingjun@huawei.com>, xutingjun <xutingjun@huawei.com> |
| 2015-06-05 11:41:39 -0700 |
| Commit: 0992a0a, github.com/apache/spark/pull/6643 |
| |
| Revert "[MINOR] [BUILD] Use custom temp directory during build." |
| Andrew Or <andrew@databricks.com> |
| 2015-06-05 10:53:32 -0700 |
| Commit: 4036d05 |
| |
| [SPARK-8085] [SPARKR] Support user-specified schema in read.df |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-05 10:19:03 -0700 |
| Commit: 12f5eae, github.com/apache/spark/pull/6620 |
| |
| [SQL] Simplifies binary node pattern matching |
| Cheng Lian <lian@databricks.com> |
| 2015-06-05 23:06:19 +0800 |
| Commit: bc0d76a, github.com/apache/spark/pull/6537 |
| |
| [SPARK-6324] [CORE] Centralize handling of script usage messages. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-05 14:32:00 +0200 |
| Commit: 700312e, github.com/apache/spark/pull/5841 |
| |
| [STREAMING] Update streaming-kafka-integration.md |
| Akhil Das <akhld@darktech.ca> |
| 2015-06-05 14:23:23 +0200 |
| Commit: 019dc9f, github.com/apache/spark/pull/6666 |
| |
| [MINOR] [BUILD] Use custom temp directory during build. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-05 14:11:38 +0200 |
| Commit: b16b543, github.com/apache/spark/pull/6653 |
| |
| [MINOR] [BUILD] Change link to jenkins builds on github. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-05 10:32:33 +0200 |
| Commit: da20c8c, github.com/apache/spark/pull/6664 |
| |
| [MINOR] remove unused interpolation var in log message |
| Sean Owen <sowen@cloudera.com> |
| 2015-06-05 00:32:46 -0700 |
| Commit: 3a5c4da, github.com/apache/spark/pull/6650 |
| |
| [DOC][Minor]Specify the common sources available for collecting |
| Yijie Shen <henry.yijieshen@gmail.com> |
| 2015-06-05 07:45:25 +0200 |
| Commit: 2777ed3, github.com/apache/spark/pull/6641 |
| |
| [SPARK-8116][PYSPARK] Allow sc.range() to take a single argument. |
| Ted Blackman <ted.blackman@gmail.com> |
| 2015-06-04 22:21:11 -0700 |
| Commit: e505460, github.com/apache/spark/pull/6656 |
| |
| [SPARK-8114][SQL] Remove some wildcard import on TestSQLContext._ |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-04 22:15:58 -0700 |
| Commit: 8f16b94, github.com/apache/spark/pull/6661 |
| |
| [SPARK-8106] [SQL] Set derby.system.durability=test to speed up Hive compatibility tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-04 17:33:24 -0700 |
| Commit: 74dc2a9, github.com/apache/spark/pull/6651 |
| |
| [SPARK-8098] [WEBUI] Show correct length of bytes on log page |
| Carson Wang <carson.wang@intel.com> |
| 2015-06-04 16:24:50 -0700 |
| Commit: 63bc0c4, github.com/apache/spark/pull/6640 |
| |
| [SPARK-7440][SQL] Remove physical Distinct operator in favor of Aggregate |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-04 13:52:53 -0700 |
| Commit: 2bcdf8c, github.com/apache/spark/pull/6637 |
| |
| Fixed style issues for [SPARK-6909][SQL] Remove Hive Shim code. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-04 13:44:47 -0700 |
| Commit: 6593842 |
| |
| [SPARK-6909][SQL] Remove Hive Shim code |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-06-04 13:27:35 -0700 |
| Commit: 0526fea, github.com/apache/spark/pull/6604 |
| |
| [SPARK-8027] [SPARKR] Move man pages creation to install-dev.sh |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-04 12:52:16 -0700 |
| Commit: 3dc0052, github.com/apache/spark/pull/6593 |
| |
| [SPARK-7743] [SQL] Parquet 1.7 |
| Thomas Omans <tomans@cj.com> |
| 2015-06-04 11:32:03 -0700 |
| Commit: cd3176b, github.com/apache/spark/pull/6597 |
| |
| [SPARK-7969] [SQL] Added a DataFrame.drop function that accepts a Column reference. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-06-04 11:30:07 -0700 |
| Commit: df7da07, github.com/apache/spark/pull/6585 |
| |
| [SPARK-7956] [SQL] Use Janino to compile SQL expressions into bytecode |
| Davies Liu <davies@databricks.com> |
| 2015-06-04 10:28:59 -0700 |
| Commit: c8709dc, github.com/apache/spark/pull/6479 |
| |
| Fix maxTaskFailures comment |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2015-06-04 13:46:49 +0200 |
| Commit: 10ba188, github.com/apache/spark/pull/6621 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-03 23:45:06 -0700 |
| Commit: 9982d45, github.com/apache/spark/pull/5976 |
| |
| [BUILD] Fix Maven build for Kinesis |
| Andrew Or <andrew@databricks.com> |
| 2015-06-03 20:45:31 -0700 |
| Commit: 984ad60 |
| |
| [BUILD] Use right branch when checking against Hive |
| Andrew Or <andrew@databricks.com> |
| 2015-06-03 18:08:53 -0700 |
| Commit: 9cf740f, github.com/apache/spark/pull/6629 |
| |
| [BUILD] Increase Jenkins test timeout |
| Andrew Or <andrew@databricks.com> |
| 2015-06-03 17:40:14 -0700 |
| Commit: e35cd36 |
| |
| [SPARK-8084] [SPARKR] Make SparkR scripts fail on error |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-03 17:02:16 -0700 |
| Commit: 0576c3c, github.com/apache/spark/pull/6623 |
| |
| [SPARK-8088] don't attempt to lower number of executors by 0 |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-06-03 16:54:46 -0700 |
| Commit: 51898b5, github.com/apache/spark/pull/6624 |
| |
| [HOTFIX] History Server API docs error fix. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-03 16:53:57 -0700 |
| Commit: 566cb59, github.com/apache/spark/pull/6628 |
| |
| [HOTFIX] [TYPO] Fix typo in #6546 |
| Andrew Or <andrew@databricks.com> |
| 2015-06-03 16:04:02 -0700 |
| Commit: bfbdab1 |
| |
| [SPARK-6164] [ML] CrossValidatorModel should keep stats from fitting |
| leahmcguire <lmcguire@salesforce.com> |
| 2015-06-03 15:46:38 -0700 |
| Commit: d8662cd, github.com/apache/spark/pull/5915 |
| |
| [SPARK-8051] [MLLIB] make StringIndexerModel silent if input column does not exist |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-03 15:16:24 -0700 |
| Commit: 26c9d7a, github.com/apache/spark/pull/6595 |
| |
| [SPARK-3674] [EC2] Clear SPARK_WORKER_INSTANCES when using YARN |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-03 15:14:38 -0700 |
| Commit: d3e026f, github.com/apache/spark/pull/6424 |
| |
| [HOTFIX] Fix Hadoop-1 build caused by #5792. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-03 15:11:02 -0700 |
| Commit: a8f1f15, github.com/apache/spark/pull/6619 |
| |
| [SPARK-7989] [CORE] [TESTS] Fix flaky tests in ExternalShuffleServiceSuite and SparkListenerWithClusterSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-03 15:04:20 -0700 |
| Commit: f271347, github.com/apache/spark/pull/6546 |
| |
| [SPARK-8001] [CORE] Make AsynchronousListenerBus.waitUntilEmpty throw TimeoutException if timeout |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-03 15:03:07 -0700 |
| Commit: 1d8669f, github.com/apache/spark/pull/6550 |
| |
| [SPARK-8059] [YARN] Wake up allocation thread when new requests arrive. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-03 14:59:30 -0700 |
| Commit: aa40c44, github.com/apache/spark/pull/6600 |
| |
| [SPARK-8083] [MESOS] Use the correct base path in mesos driver page. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-06-03 14:57:23 -0700 |
| Commit: bfbf12b, github.com/apache/spark/pull/6615 |
| |
| [MINOR] [UI] Improve confusing message on log page |
| Andrew Or <andrew@databricks.com> |
| 2015-06-03 12:10:12 -0700 |
| Commit: c6a6dd0 |
| |
| [SPARK-8054] [MLLIB] Added several Java-friendly APIs + unit tests |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-06-03 14:34:20 -0700 |
| Commit: 20a26b5, github.com/apache/spark/pull/6562 |
| |
| Update documentation for [SPARK-7980] [SQL] Support SQLContext.range(end) |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-03 14:19:10 -0700 |
| Commit: 2c5a06c |
| |
| [SPARK-8074] Parquet should throw AnalysisException during setup for data type/name related failures. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-03 13:57:57 -0700 |
| Commit: 939e4f3, github.com/apache/spark/pull/6608 |
| |
| [SPARK-8063] [SPARKR] Spark master URL conflict between MASTER env variable and --master command line option. |
| Sun Rui <rui.sun@intel.com> |
| 2015-06-03 11:56:35 -0700 |
| Commit: 708c63b, github.com/apache/spark/pull/6605 |
| |
| [SPARK-7161] [HISTORY SERVER] Provide REST api to download event logs fro... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-06-03 13:43:13 -0500 |
| Commit: d2a86eb, github.com/apache/spark/pull/5792 |
| |
| [SPARK-7980] [SQL] Support SQLContext.range(end) |
| animesh <animesh@apache.spark> |
| 2015-06-03 11:28:18 -0700 |
| Commit: d053a31, github.com/apache/spark/pull/6609 |
| |
| [SPARK-7801] [BUILD] Updating versions to SPARK 1.5.0 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-06-03 10:11:27 -0700 |
| Commit: 2c4d550, github.com/apache/spark/pull/6328 |
| |
| [SPARK-7973] [SQL] Increase the timeout of two CliSuite tests. |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-03 09:26:21 -0700 |
| Commit: f1646e1, github.com/apache/spark/pull/6525 |
| |
| [SPARK-7983] [MLLIB] Add require for one-based indices in loadLibSVMFile |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-06-03 13:15:57 +0200 |
| Commit: 28dbde3, github.com/apache/spark/pull/6538 |
| |
| [SPARK-7562][SPARK-6444][SQL] Improve error reporting for expression data type mismatch |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-06-03 00:47:52 -0700 |
| Commit: d38cf21, github.com/apache/spark/pull/6405 |
| |
| [SPARK-8060] Improve DataFrame Python test coverage and documentation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-03 00:23:34 -0700 |
| Commit: ce320cb, github.com/apache/spark/pull/6601 |
| |
| [SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-06-02 23:24:47 -0700 |
| Commit: 452eb82, github.com/apache/spark/pull/6579 |
| |
| [SPARK-8043] [MLLIB] [DOC] update NaiveBayes and SVM examples in doc |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-06-02 23:15:38 -0700 |
| Commit: 43adbd5, github.com/apache/spark/pull/6584 |
| |
| [MINOR] make the launcher project name consistent with others |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-06-02 22:59:48 -0700 |
| Commit: ccaa823, github.com/apache/spark/pull/6603 |
| |
| [SPARK-8053] [MLLIB] renamed scalingVector to scalingVec |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-06-02 22:56:56 -0700 |
| Commit: 07c16cb, github.com/apache/spark/pull/6596 |
| |
| [SPARK-7691] [SQL] Refactor CatalystTypeConverter to use type-specific row accessors |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-06-02 22:11:03 -0700 |
| Commit: cafd505, github.com/apache/spark/pull/6222 |
| |
| [SPARK-7547] [ML] Scala Example code for ElasticNet |
| DB Tsai <dbt@netflix.com> |
| 2015-06-02 19:12:08 -0700 |
| Commit: a86b3e9, github.com/apache/spark/pull/6576 |
| |
| [SPARK-7387] [ML] [DOC] CrossValidator example code in Python |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-06-02 18:53:04 -0700 |
| Commit: c3f4c32, github.com/apache/spark/pull/6358 |
| |
| [SQL] [TEST] [MINOR] Follow-up of PR #6493, use Guava API to ensure Java 6 friendliness |
| Cheng Lian <lian@databricks.com> |
| 2015-06-02 17:07:13 -0700 |
| Commit: 5cd6a63, github.com/apache/spark/pull/6547 |
| |
| [SPARK-8049] [MLLIB] drop tmp col from OneVsRest output |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-02 16:51:17 -0700 |
| Commit: 89f21f6, github.com/apache/spark/pull/6592 |
| |
| [SPARK-8038] [SQL] [PYSPARK] fix Column.when() and otherwise() |
| Davies Liu <davies@databricks.com> |
| 2015-06-02 13:38:06 -0700 |
| Commit: 605ddbb, github.com/apache/spark/pull/6590 |
| |
| [SPARK-8014] [SQL] Avoid premature metadata discovery when writing a HadoopFsRelation with a save mode other than Append |
| Cheng Lian <lian@databricks.com> |
| 2015-06-02 13:32:13 -0700 |
| Commit: 686a45f, github.com/apache/spark/pull/6583 |
| |
| [SPARK-7985] [ML] [MLlib] [Docs] Remove "fittingParamMap" references. Updating ML Doc "Estimator, Transformer, and Param" examples. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-06-02 12:38:14 -0700 |
| Commit: ad06727, github.com/apache/spark/pull/6514 |
| |
| [SPARK-8015] [FLUME] Remove Guava dependency from flume-sink. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-06-02 11:20:33 -0700 |
| Commit: 0071bd8, github.com/apache/spark/pull/6555 |
| |
| [SPARK-8037] [SQL] Ignores files whose name starts with dot in HadoopFsRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-06-03 00:59:50 +0800 |
| Commit: 1bb5d71, github.com/apache/spark/pull/6581 |
| |
| [SPARK-7432] [MLLIB] fix flaky CrossValidator doctest |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-02 08:51:00 -0700 |
| Commit: bd97840, github.com/apache/spark/pull/6572 |
| |
| [SPARK-8021] [SQL] [PYSPARK] make Python read/write API consistent with Scala |
| Davies Liu <davies@databricks.com> |
| 2015-06-02 08:37:18 -0700 |
| Commit: 445647a, github.com/apache/spark/pull/6578 |
| |
| [SPARK-8023][SQL] Add "deterministic" attribute to Expression to avoid collapsing nondeterministic projects. |
| Yin Huai <yhuai@databricks.com>, Reynold Xin <rxin@databricks.com> |
| 2015-06-02 00:20:52 -0700 |
| Commit: 0f80990, github.com/apache/spark/pull/6573 |
| |
| [SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get constructed too early |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-02 00:16:56 -0700 |
| Commit: 7b7f7b6, github.com/apache/spark/pull/6571 |
| |
| [SPARK-6917] [SQL] DecimalType is not read back when non-native type exists |
| Davies Liu <davies@databricks.com> |
| 2015-06-01 23:12:29 -0700 |
| Commit: bcb47ad, github.com/apache/spark/pull/6558 |
| |
| [SPARK-7582] [MLLIB] user guide for StringIndexer |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-01 22:03:29 -0700 |
| Commit: 0221c7f, github.com/apache/spark/pull/6561 |
| |
| Fixed typo in the previous commit. |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-01 21:41:53 -0700 |
| Commit: b53a011 |
| |
| [SPARK-7965] [SPARK-7972] [SQL] Handle expressions containing multiple window expressions and make parser match window frames in case insensitive way |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-01 21:40:17 -0700 |
| Commit: e797dba, github.com/apache/spark/pull/6524 |
| |
| [SPARK-8025][Streaming]Add JavaDoc style deprecation for deprecated Streaming methods |
| zsxwing <zsxwing@gmail.com> |
| 2015-06-01 21:36:49 -0700 |
| Commit: 7f74bb3, github.com/apache/spark/pull/6564 |
| |
| Revert "[SPARK-8020] Spark SQL in spark-defaults.conf make metadataHive get constructed too early" |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-01 21:35:55 -0700 |
| Commit: 75dda33 |
| |
| [SPARK-8020] Spark SQL in spark-defaults.conf make metadataHive get constructed too early |
| Yin Huai <yhuai@databricks.com> |
| 2015-06-01 21:33:57 -0700 |
| Commit: 91f6be8, github.com/apache/spark/pull/6563 |
| |
| [minor doc] Add exploratory data analysis warning for DataFrame.stat.freqItem API |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-01 21:29:39 -0700 |
| Commit: 4c868b9, github.com/apache/spark/pull/6569 |
| |
| [SPARK-8027] [SPARKR] Add maven profile to build R package docs |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-01 21:21:45 -0700 |
| Commit: cae9306, github.com/apache/spark/pull/6567 |
| |
| [SPARK-8026][SQL] Add Column.alias to Scala/Java DataFrame API |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-01 21:13:15 -0700 |
| Commit: 89f642a, github.com/apache/spark/pull/6565 |
| |
| [SPARK-7982][SQL] DataFrame.stat.crosstab should use 0 instead of null for pairs that don't appear |
| Reynold Xin <rxin@databricks.com> |
| 2015-06-01 21:11:19 -0700 |
| Commit: 6396cc0, github.com/apache/spark/pull/6566 |
| |
| [SPARK-8028] [SPARKR] Use addJar instead of setJars in SparkR |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-06-01 21:01:14 -0700 |
| Commit: 6b44278, github.com/apache/spark/pull/6568 |
| |
| [MINOR] [UI] Improve error message on log page |
| Andrew Or <andrew@databricks.com> |
| 2015-06-01 19:39:03 -0700 |
| Commit: 15d7c90 |
| |
| [SPARK-7958] [STREAMING] Handled exception in StreamingContext.start() to prevent leaking of actors |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-06-01 20:04:57 -0700 |
| Commit: 2f9c751, github.com/apache/spark/pull/6559 |
| |
| [SPARK-7584] [MLLIB] User guide for VectorAssembler |
| Xiangrui Meng <meng@databricks.com> |
| 2015-06-01 15:05:14 -0700 |
| Commit: 90c6069, github.com/apache/spark/pull/6556 |
| |
| [SPARK-7497] [PYSPARK] [STREAMING] fix streaming flaky tests |
| Davies Liu <davies@databricks.com> |
| 2015-06-01 14:40:08 -0700 |
| Commit: b7ab029, github.com/apache/spark/pull/6239 |
| |
| [DOC] Minor modification to Streaming docs with regards to parallel data receiving |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-06-01 21:34:41 +0100 |
| Commit: e7c7e51, github.com/apache/spark/pull/6544 |
| |
| Update README to include DataFrames and zinc. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 23:55:45 -0700 |
| Commit: 3c01568, github.com/apache/spark/pull/6548 |
| |
| [SPARK-7952][SPARK-7984][SQL] equality check between boolean type and numeric type is broken. |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-31 21:01:46 -0700 |
| Commit: a0e46a0, github.com/apache/spark/pull/6505 |
| |
| [SPARK-7978] [SQL] [PYSPARK] DecimalType should not be singleton |
| Davies Liu <davies@databricks.com> |
| 2015-05-31 19:55:57 -0700 |
| Commit: 91777a1, github.com/apache/spark/pull/6532 |
| |
| [SPARK-7986] Split scalastyle config into 3 sections. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 18:04:57 -0700 |
| Commit: 6f006b5, github.com/apache/spark/pull/6543 |
| |
| [MINOR] Enable PySpark SQL readerwriter and window tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-31 15:17:05 -0700 |
| Commit: 9126ea4, github.com/apache/spark/pull/6542 |
| |
| [SPARK-7227] [SPARKR] Support fillna / dropna in R DataFrame. |
| Sun Rui <rui.sun@intel.com> |
| 2015-05-31 15:01:21 -0700 |
| Commit: 46576ab, github.com/apache/spark/pull/6183 |
| |
| [SPARK-3850] Turn style checker on for trailing whitespaces. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 14:23:42 -0700 |
| Commit: 866652c, github.com/apache/spark/pull/6541 |
| |
| [SPARK-7949] [MLLIB] [DOC] update document with some missing save/load |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-05-31 11:51:49 -0700 |
| Commit: 0674700, github.com/apache/spark/pull/6498 |
| |
| [SPARK-3850] Trim trailing spaces for MLlib. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 11:35:30 -0700 |
| Commit: e1067d0, github.com/apache/spark/pull/6534 |
| |
| [MINOR] Add license for dagre-d3 and graphlib-dot |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-31 11:18:12 -0700 |
| Commit: d1d2def, github.com/apache/spark/pull/6539 |
| |
| [SPARK-7979] Enforce structural type checker. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 01:37:56 -0700 |
| Commit: 4b5f12b, github.com/apache/spark/pull/6536 |
| |
| [SPARK-3850] Trim trailing spaces for SQL. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 00:48:49 -0700 |
| Commit: 63a50be, github.com/apache/spark/pull/6535 |
| |
| [SPARK-3850] Trim trailing spaces for examples/streaming/yarn. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 00:47:56 -0700 |
| Commit: 564bc11, github.com/apache/spark/pull/6530 |
| |
| [SPARK-3850] Trim trailing spaces for core. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 00:16:22 -0700 |
| Commit: 74fdc97, github.com/apache/spark/pull/6533 |
| |
| [SPARK-7975] Add style checker to disallow overriding equals covariantly. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-31 00:05:55 -0700 |
| Commit: 7896e99, github.com/apache/spark/pull/6527 |
| |
| [SQL] [MINOR] Adds @deprecated Scaladoc entry for SchemaRDD |
| Cheng Lian <lian@databricks.com> |
| 2015-05-30 23:49:42 -0700 |
| Commit: 8764dcc, github.com/apache/spark/pull/6529 |
| |
| [SPARK-7976] Add style checker to disallow overriding finalize. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 23:36:32 -0700 |
| Commit: 084fef7, github.com/apache/spark/pull/6528 |
| |
| [SQL] [MINOR] Fixes a minor comment mistake in IsolatedClientLoader |
| Cheng Lian <lian@databricks.com> |
| 2015-05-31 12:56:41 +0800 |
| Commit: f7fe9e4, github.com/apache/spark/pull/6521 |
| |
| Update documentation for the new DataFrame reader/writer interface. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 20:10:02 -0700 |
| Commit: 00a7137, github.com/apache/spark/pull/6522 |
| |
| [SPARK-7971] Add JavaDoc style deprecation for deprecated DataFrame methods |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 19:51:53 -0700 |
| Commit: c63e1a7, github.com/apache/spark/pull/6523 |
| |
| [SQL] Tighten up visibility for JavaDoc. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 19:50:52 -0700 |
| Commit: 14b314d, github.com/apache/spark/pull/6526 |
| |
| [SPARK-5610] [DOC] update genjavadocSettings to use the patched version of genjavadoc |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-30 17:21:41 -0700 |
| Commit: 2b258e1, github.com/apache/spark/pull/6506 |
| |
| [HOTFIX] Replace FunSuite with SparkFunSuite. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-30 16:52:34 -0700 |
| Commit: 66a53a6 |
| |
| [SPARK-7920] [MLLIB] Make MLlib ChiSqSelector Serializable (& Fix Related Documentation Example). |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-30 16:50:59 -0700 |
| Commit: 1281a35, github.com/apache/spark/pull/6462 |
| |
| [SPARK-7918] [MLLIB] MLlib Python doc parity check for evaluation and feature |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-30 16:24:07 -0700 |
| Commit: 1617363, github.com/apache/spark/pull/6461 |
| |
| [SPARK-7855] Move bypassMergeSort-handling from ExternalSorter to own component |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-30 15:27:51 -0700 |
| Commit: a643002, github.com/apache/spark/pull/6397 |
| |
| Updated SQL programming guide's Hive connectivity section. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 14:57:23 -0700 |
| Commit: 7716a5a |
| |
| [SPARK-7849] [SQL] [Docs] Updates SQL programming guide for 1.4 |
| Cheng Lian <lian@databricks.com> |
| 2015-05-30 12:16:09 -0700 |
| Commit: 6e3f0c7, github.com/apache/spark/pull/6520 |
| |
| Closes #4685 |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-30 12:06:38 -0700 |
| Commit: d34b43b |
| |
| [DOCS] [MINOR] Update for the Hadoop versions table with hadoop-2.6 |
| Taka Shinagawa <taka.epsilon@gmail.com> |
| 2015-05-30 08:25:21 -0400 |
| Commit: 3ab71eb, github.com/apache/spark/pull/6450 |
| |
| [SPARK-7717] [WEBUI] Only showing total memory and cores for alive workers |
| zhichao.li <zhichao.li@intel.com> |
| 2015-05-30 08:06:11 -0400 |
| Commit: 2b35c99, github.com/apache/spark/pull/6317 |
| |
| [SPARK-7945] [CORE] Do trim to values in properties file |
| WangTaoTheTonic <wangtao111@huawei.com>, Tao Wang <wangtao111@huawei.com> |
| 2015-05-30 08:04:27 -0400 |
| Commit: 9d8aadb, github.com/apache/spark/pull/6496 |
| |
| [SPARK-7890] [DOCS] Document that Spark 2.11 now supports Kafka |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-30 07:59:27 -0400 |
| Commit: 8c8de3e, github.com/apache/spark/pull/6470 |
| |
| [SPARK-7964][SQL] remove unnecessary type coercion rule |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-30 00:26:46 -0700 |
| Commit: 0978aec, github.com/apache/spark/pull/6516 |
| |
| [SPARK-7459] [MLLIB] ElementwiseProduct Java example |
| Octavian Geagla <ogeagla@gmail.com> |
| 2015-05-30 00:00:36 -0700 |
| Commit: e3a4374, github.com/apache/spark/pull/6008 |
| |
| [SPARK-7962] [MESOS] Fix master url parsing in rest submission client. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-05-29 23:56:18 -0700 |
| Commit: 78657d5, github.com/apache/spark/pull/6517 |
| |
| [SPARK-7576] [MLLIB] Add spark.ml user guide doc/example for ElementwiseProduct |
| Octavian Geagla <ogeagla@gmail.com> |
| 2015-05-29 23:55:19 -0700 |
| Commit: da2112a, github.com/apache/spark/pull/6501 |
| |
| [TRIVIAL] Typo fix for last commit |
| Andrew Or <andrew@databricks.com> |
| 2015-05-29 23:08:47 -0700 |
| Commit: 193dba0 |
| |
| [SPARK-7558] Guard against direct uses of FunSuite / FunSuiteLike |
| Andrew Or <andrew@databricks.com> |
| 2015-05-29 22:57:46 -0700 |
| Commit: 609c492, github.com/apache/spark/pull/6510 |
| |
| [SPARK-7957] Preserve partitioning when using randomSplit |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-29 22:19:15 -0700 |
| Commit: 7ed06c3, github.com/apache/spark/pull/6509 |
| |
| [DOCS][Tiny] Added a missing dash(-) in docs/configuration.md |
| Taka Shinagawa <taka.epsilon@gmail.com> |
| 2015-05-29 20:35:14 -0700 |
| Commit: 3792d25, github.com/apache/spark/pull/6513 |
| |
| [HOT FIX] [BUILD] Fix maven build failures |
| Andrew Or <andrew@databricks.com> |
| 2015-05-29 17:19:46 -0700 |
| Commit: a4f2412, github.com/apache/spark/pull/6511 |
| |
| [HOTFIX] [SQL] Maven test compilation issue |
| Andrew Or <andrew@databricks.com> |
| 2015-05-29 15:26:49 -0700 |
| Commit: 8c99793 |
| |
| [SPARK-6013] [ML] Add more Python ML examples for spark.ml |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-29 15:22:26 -0700 |
| Commit: dbf8ff3, github.com/apache/spark/pull/6443 |
| |
| [SPARK-7954] [SPARKR] Create SparkContext in sparkRSQL init |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-29 15:08:30 -0700 |
| Commit: 5fb97dc, github.com/apache/spark/pull/6507 |
| |
| [SPARK-7910] [TINY] [JAVAAPI] expose partitioner information in javardd |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-05-29 14:59:18 -0700 |
| Commit: 82a396c, github.com/apache/spark/pull/6464 |
| |
| [SPARK-7899] [PYSPARK] Fix Python 3 pyspark/sql/types module conflict |
| Michael Nazario <mnazario@palantir.com> |
| 2015-05-29 14:13:44 -0700 |
| Commit: 1c5b198, github.com/apache/spark/pull/6439 |
| |
| [SPARK-6806] [SPARKR] [DOCS] Add a new SparkR programming guide |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-29 14:11:58 -0700 |
| Commit: 5f48e5c, github.com/apache/spark/pull/6490 |
| |
| [SPARK-7558] Demarcate tests in unit-tests.log |
| Andrew Or <andrew@databricks.com> |
| 2015-05-29 14:03:12 -0700 |
| Commit: 9eb222c, github.com/apache/spark/pull/6441 |
| |
| [SPARK-7940] Enforce whitespace checking for DO, TRY, CATCH, FINALLY, MATCH, LARROW, RARROW in style checker. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-29 13:38:37 -0700 |
| Commit: 94f62a4, github.com/apache/spark/pull/6491 |
| |
| [SPARK-7946] [MLLIB] DecayFactor wrongly set in StreamingKMeans |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-29 11:36:41 -0700 |
| Commit: 6181937, github.com/apache/spark/pull/6497 |
| |
| [SQL] [TEST] [MINOR] Uses a temporary log4j.properties in HiveThriftServer2Test to ensure expected logging behavior |
| Cheng Lian <lian@databricks.com> |
| 2015-05-29 11:11:40 -0700 |
| Commit: 4782e13, github.com/apache/spark/pull/6493 |
| |
| [SPARK-7950] [SQL] Sets spark.sql.hive.version in HiveThriftServer2.startWithContext() |
| Cheng Lian <lian@databricks.com> |
| 2015-05-29 10:43:34 -0700 |
| Commit: e7b6177, github.com/apache/spark/pull/6500 |
| |
| [SPARK-7524] [SPARK-7846] add configs for keytab and principal, pass these two configs with different way in different modes |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-05-29 11:06:11 -0500 |
| Commit: a51b133, github.com/apache/spark/pull/6051 |
| |
| [SPARK-7863] [CORE] Create SimpleDateFormat for every SimpleDateParam instance because it's not thread-safe |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-29 05:17:41 -0400 |
| Commit: 8db40f6, github.com/apache/spark/pull/6406 |
| |
| [SPARK-7756] [CORE] Use testing cipher suites common to Oracle and IBM security providers |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-29 05:14:43 -0400 |
| Commit: bf46580, github.com/apache/spark/pull/6282 |
| |
| [SPARK-7912] [SPARK-7921] [MLLIB] Update OneHotEncoder to handle ML attributes and change includeFirst to dropLast |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-29 00:51:12 -0700 |
| Commit: 23452be, github.com/apache/spark/pull/6466 |
| |
| [SPARK-7929] Turn whitespace checker on for more token types. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 23:00:02 -0700 |
| Commit: 97a60cf, github.com/apache/spark/pull/6487 |
| |
| [HOTFIX] Minor style fix from last commit |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-28 22:48:02 -0700 |
| Commit: 36067ce |
| |
| [SPARK-7931] [STREAMING] Do not restart receiver when stopped |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-28 22:39:21 -0700 |
| Commit: e714ecf, github.com/apache/spark/pull/6483 |
| |
| [SPARK-7922] [MLLIB] use DataFrames for user/item factors in ALSModel |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 22:38:38 -0700 |
| Commit: db95137, github.com/apache/spark/pull/6468 |
| |
| [SPARK-7930] [CORE] [STREAMING] Fixed shutdown hook priorities |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-28 22:28:13 -0700 |
| Commit: cd3d9a5, github.com/apache/spark/pull/6482 |
| |
| [SPARK-7932] Fix misleading scheduler delay visualization |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-28 22:09:49 -0700 |
| Commit: 04ddcd4, github.com/apache/spark/pull/6484 |
| |
| [MINOR] fix RegressionEvaluator doc |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 21:26:43 -0700 |
| Commit: 834e699, github.com/apache/spark/pull/6469 |
| |
| [SPARK-7926] [PYSPARK] use the official Pyrolite release |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 21:20:54 -0700 |
| Commit: c45d58c, github.com/apache/spark/pull/6472 |
| |
| [SPARK-7927] whitespace fixes for GraphX. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 20:17:16 -0700 |
| Commit: b069ad2, github.com/apache/spark/pull/6474 |
| |
| [SPARK-7927] whitespace fixes for core. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 20:15:52 -0700 |
| Commit: 7f7505d, github.com/apache/spark/pull/6473 |
| |
| [SPARK-7927] whitespace fixes for Catalyst module. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 20:11:57 -0700 |
| Commit: 8da560d, github.com/apache/spark/pull/6476 |
| |
| [SPARK-7929] Remove Bagel examples & whitespace fix for examples. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 20:11:04 -0700 |
| Commit: 2881d14, github.com/apache/spark/pull/6480 |
| |
| [SPARK-7927] whitespace fixes for SQL core. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 20:10:21 -0700 |
| Commit: ff44c71, github.com/apache/spark/pull/6477 |
| |
| [SPARK-7927] [MLLIB] Enforce whitespace for more tokens in style checker |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 20:09:12 -0700 |
| Commit: 04616b1, github.com/apache/spark/pull/6481 |
| |
| [SPARK-7826] [CORE] Suppress extra calling getCacheLocs. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2015-05-28 19:05:12 -0700 |
| Commit: 9b692bf, github.com/apache/spark/pull/6352 |
| |
| [SPARK-7933] Remove Patrick's username/pw from merge script |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-28 19:04:32 -0700 |
| Commit: 66c49ed, github.com/apache/spark/pull/6485 |
| |
| [SPARK-7927] whitespace fixes for Hive and ThriftServer. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 18:08:56 -0700 |
| Commit: ee6a0e1, github.com/apache/spark/pull/6478 |
| |
| [SPARK-7927] whitespace fixes for streaming. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 17:55:22 -0700 |
| Commit: 3af0b31, github.com/apache/spark/pull/6475 |
| |
| [SPARK-7577] [ML] [DOC] add bucketizer doc |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-05-28 17:30:12 -0700 |
| Commit: 1bd63e8, github.com/apache/spark/pull/6451 |
| |
| [SPARK-7853] [SQL] Fix HiveContext in Spark Shell |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-28 17:12:30 -0700 |
| Commit: 572b62c, github.com/apache/spark/pull/6459 |
| |
| Remove SizeEstimator from o.a.spark package. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-28 16:56:59 -0700 |
| Commit: 0077af2, github.com/apache/spark/pull/6471 |
| |
| [SPARK-7198] [MLLIB] VectorAssembler should output ML attributes |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 16:32:51 -0700 |
| Commit: 7859ab6, github.com/apache/spark/pull/6452 |
| |
| [DOCS] Fixing broken "IDE setup" link in the Building Spark documentation. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-28 17:15:10 -0400 |
| Commit: 3e312a5, github.com/apache/spark/pull/6467 |
| |
| [MINOR] Fix the a minor bug in PageRank Example. |
| Li Yao <hnkfliyao@gmail.com> |
| 2015-05-28 13:39:39 -0700 |
| Commit: c771589, github.com/apache/spark/pull/6455 |
| |
| [SPARK-7911] [MLLIB] A workaround for VectorUDT serialize (or deserialize) being called multiple times |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-28 12:03:46 -0700 |
| Commit: 530efe3, github.com/apache/spark/pull/6442 |
| |
| [SPARK-7895] [STREAMING] [EXAMPLES] Move Kafka examples from scala-2.10/src to src |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-28 09:04:12 -0700 |
| Commit: 000df2f, github.com/apache/spark/pull/6436 |
| |
| [SPARK-7782] fixed sort arrow issue |
| zuxqoj <sbshekhar@gmail.com> |
| 2015-05-27 23:13:13 -0700 |
| Commit: e838a25, github.com/apache/spark/pull/6437 |
| |
| [DOCS] Fix typo in documentation for Java UDF registration |
| Matt Wise <mwise@quixey.com> |
| 2015-05-27 22:39:19 -0700 |
| Commit: 3541061, github.com/apache/spark/pull/6447 |
| |
| [SPARK-7896] Allow ChainedBuffer to store more than 2 GB |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-27 22:23:22 -0700 |
| Commit: bd11b01, github.com/apache/spark/pull/6440 |
| |
| [SPARK-7873] Allow KryoSerializerInstance to create multiple streams at the same time |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-27 20:19:53 -0700 |
| Commit: 852f4de, github.com/apache/spark/pull/6415 |
| |
| [SPARK-7907] [SQL] [UI] Rename tab ThriftServer to SQL. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-27 20:04:29 -0700 |
| Commit: 3c1f1ba, github.com/apache/spark/pull/6448 |
| |
| [SPARK-7897][SQL] Use DecimalType to represent unsigned bigint in JDBCRDD |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-27 18:51:36 -0700 |
| Commit: a1e092e, github.com/apache/spark/pull/6438 |
| |
| [SPARK-7853] [SQL] Fixes a class loader issue in Spark SQL |
| Cheng Hao <hao.cheng@intel.com>, Cheng Lian <lian@databricks.com>, Yin Huai <yhuai@databricks.com> |
| 2015-05-27 14:21:00 -0700 |
| Commit: db3fd05, github.com/apache/spark/pull/6435 |
| |
| [SPARK-7684] [SQL] Refactoring MetastoreDataSourcesSuite to workaround SPARK-7684 |
| Cheng Lian <lian@databricks.com>, Yin Huai <yhuai@databricks.com> |
| 2015-05-27 13:09:33 -0700 |
| Commit: b97ddff, github.com/apache/spark/pull/6353 |
| |
| [SPARK-7790] [SQL] date and decimal conversion for dynamic partition key |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-27 12:42:13 -0700 |
| Commit: 8161562, github.com/apache/spark/pull/6318 |
| |
| Removed Guava dependency from JavaTypeInference's type signature. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-27 11:54:35 -0700 |
| Commit: 6fec1a9, github.com/apache/spark/pull/6431 |
| |
| [SPARK-7864] [UI] Fix the logic grabbing the link from table in AllJobPage |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-27 11:41:35 -0700 |
| Commit: 0db76c9, github.com/apache/spark/pull/6432 |
| |
| [SPARK-7847] [SQL] Fixes dynamic partition directory escaping |
| Cheng Lian <lian@databricks.com> |
| 2015-05-27 10:09:12 -0700 |
| Commit: 15459db, github.com/apache/spark/pull/6389 |
| |
| [SPARK-7878] Rename Stage.jobId to firstJobId |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-27 09:32:29 -0700 |
| Commit: ff0ddff, github.com/apache/spark/pull/6418 |
| |
| [CORE] [TEST] HistoryServerSuite failed due to timezone issue |
| scwf <wangfei1@huawei.com> |
| 2015-05-27 09:12:18 -0500 |
| Commit: 4615081, github.com/apache/spark/pull/6425 |
| |
| [SQL] Rename MathematicalExpression UnaryMathExpression, and specify BinaryMathExpression's output data type as DoubleType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-27 01:13:57 -0700 |
| Commit: 3e7d7d6, github.com/apache/spark/pull/6428 |
| |
| [SPARK-7887][SQL] Remove EvaluatedType from SQL Expression. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-27 01:12:59 -0700 |
| Commit: 9f48bf6, github.com/apache/spark/pull/6427 |
| |
| [SPARK-7697][SQL] Use LongType for unsigned int in JDBCRDD |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-27 00:27:39 -0700 |
| Commit: 4f98d7a, github.com/apache/spark/pull/6229 |
| |
| [SPARK-7850][BUILD] Hive 0.12.0 profile in POM should be removed |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-05-27 00:18:42 -0700 |
| Commit: 6dd6458, github.com/apache/spark/pull/6393 |
| |
| [SPARK-7535] [.1] [MLLIB] minor changes to the pipeline API |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-26 23:51:32 -0700 |
| Commit: a9f1c0c, github.com/apache/spark/pull/6392 |
| |
| [SPARK-7868] [SQL] Ignores _temporary directories in HadoopFsRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-05-26 20:48:56 -0700 |
| Commit: b463e6d, github.com/apache/spark/pull/6411 |
| |
| [SPARK-7858] [SQL] Use output schema, not relation schema, for data source input conversion |
| Josh Rosen <joshrosen@databricks.com>, Cheng Lian <lian@databricks.com>, Cheng Lian <liancheng@users.noreply.github.com> |
| 2015-05-26 20:24:35 -0700 |
| Commit: 0c33c7b, github.com/apache/spark/pull/5986 |
| |
| [SPARK-7637] [SQL] O(N) merge implementation for StructType merge |
| rowan <rowan.chattaway@googlemail.com> |
| 2015-05-26 18:17:16 -0700 |
| Commit: 0366834, github.com/apache/spark/pull/6259 |
| |
| [SPARK-7883] [DOCS] [MLLIB] Fixing broken trainImplicit Scala example in MLlib Collaborative Filtering documentation. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-26 18:08:57 -0700 |
| Commit: 0463428, github.com/apache/spark/pull/6422 |
| |
| [SPARK-7864] [UI] Do not kill innocent stages from visualization |
| Andrew Or <andrew@databricks.com> |
| 2015-05-26 16:31:34 -0700 |
| Commit: 8f20824, github.com/apache/spark/pull/6419 |
| |
| [SPARK-7748] [MLLIB] Graduate spark.ml from alpha |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-26 15:51:31 -0700 |
| Commit: 836a758, github.com/apache/spark/pull/6417 |
| |
| [SPARK-6602] [CORE] Remove some places in core that calling SparkEnv.actorSystem |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-26 15:28:49 -0700 |
| Commit: 9f74224, github.com/apache/spark/pull/6333 |
| |
| [SPARK-3674] YARN support in Spark EC2 |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-26 15:01:27 -0700 |
| Commit: 2e9a5f2, github.com/apache/spark/pull/6376 |
| |
| [SPARK-7844] [MLLIB] Fix broken tests in KernelDensity |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-26 13:21:00 -0700 |
| Commit: 6166473, github.com/apache/spark/pull/6383 |
| |
| Revert "[SPARK-7042] [BUILD] use the standard akka artifacts with hadoop-2.x" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-26 10:05:13 -0700 |
| Commit: b7d8085 |
| |
| [SPARK-7854] [TEST] refine Kryo test suite |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-05-26 17:08:16 +0100 |
| Commit: 6309912, github.com/apache/spark/pull/6395 |
| |
| [DOCS] [MLLIB] Fixing misformatted links in v1.4 MLlib Naive Bayes documentation by removing space and newline characters. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-26 17:05:58 +0100 |
| Commit: e5a63a0, github.com/apache/spark/pull/6412 |
| |
| [SPARK-7806][EC2] Fixes that allow the spark_ec2.py tool to run with Python3 |
| meawoppl <meawoppl@gmail.com> |
| 2015-05-26 09:02:25 -0700 |
| Commit: 8dbe777, github.com/apache/spark/pull/6336 |
| |
| [SPARK-7339] [PYSPARK] PySpark shuffle spill memory sometimes are not correct |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-26 08:35:39 -0700 |
| Commit: 8948ad3, github.com/apache/spark/pull/5887 |
| |
| [CORE] [TEST] Fix SimpleDateParamTest |
| scwf <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-05-26 08:42:52 -0500 |
| Commit: bf49c22, github.com/apache/spark/pull/6377 |
| |
| [SPARK-7042] [BUILD] use the standard akka artifacts with hadoop-2.x |
| Konstantin Shaposhnikov <Konstantin.Shaposhnikov@sc.com> |
| 2015-05-26 07:49:32 +0100 |
| Commit: 43aa819, github.com/apache/spark/pull/6341 |
| |
| [SQL][minor] Removed unused Catalyst logical plan DSL. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-25 23:09:22 -0700 |
| Commit: c9adcad, github.com/apache/spark/pull/6350 |
| |
| [SPARK-7832] [Build] Always run SQL tests in master build. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-25 18:23:58 -0700 |
| Commit: f38e619, github.com/apache/spark/pull/6385 |
| |
| [SPARK-6391][DOCS] Document Tachyon compatibility. |
| Calvin Jia <jia.calvin@gmail.com> |
| 2015-05-25 16:50:43 -0700 |
| Commit: ce0051d, github.com/apache/spark/pull/6382 |
| |
| [SPARK-7842] [SQL] Makes task committing/aborting in InsertIntoHadoopFsRelation more robust |
| Cheng Lian <lian@databricks.com> |
| 2015-05-26 00:28:47 +0800 |
| Commit: 8af1bf1, github.com/apache/spark/pull/6378 |
| |
| [SPARK-7684] [SQL] Invoking HiveContext.newTemporaryConfiguration() shouldn't create new metastore directory |
| Cheng Lian <lian@databricks.com> |
| 2015-05-26 00:16:06 +0800 |
| Commit: bfeedc6, github.com/apache/spark/pull/6359 |
| |
| Add test which shows Kryo buffer size configured in mb is properly supported |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-25 08:20:31 +0100 |
| Commit: fd31fd4, github.com/apache/spark/pull/6390 |
| |
| Close HBaseAdmin at the end of HBaseTest |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-25 08:19:42 +0100 |
| Commit: 23bea97, github.com/apache/spark/pull/6381 |
| |
| [SPARK-7811] Fix typo on slf4j configuration on metrics.properties.tem… |
| Judy Nash <judynash@microsoft.com> |
| 2015-05-24 21:48:27 +0100 |
| Commit: 4f4ba8f, github.com/apache/spark/pull/6362 |
| |
| [SPARK-7833] [ML] Add python wrapper for RegressionEvaluator |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-24 10:36:02 -0700 |
| Commit: 65c696e, github.com/apache/spark/pull/6365 |
| |
| [SPARK-7805] [SQL] Move SQLTestUtils.scala and ParquetTest.scala to src/test |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-24 09:51:37 -0700 |
| Commit: ed21476, github.com/apache/spark/pull/6334 |
| |
| [SPARK-7845] [BUILD] Bump "Hadoop 1" tests to version 1.2.1 |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-24 09:49:57 -0700 |
| Commit: bfbc0df, github.com/apache/spark/pull/6384 |
| |
| [SPARK-7287] [HOTFIX] Disable o.a.s.deploy.SparkSubmitSuite --packages |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-23 19:44:03 -0700 |
| Commit: 3c1a2d0 |
| |
| [HOTFIX] Copy SparkR lib if it exists in make-distribution |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-23 12:28:16 -0700 |
| Commit: b231baa, github.com/apache/spark/pull/6379 |
| |
| [SPARK-7654] [SQL] Move insertInto into reader/writer interface. |
| Yin Huai <yhuai@databricks.com>, Reynold Xin <rxin@databricks.com> |
| 2015-05-23 09:48:20 -0700 |
| Commit: 2b7e635, github.com/apache/spark/pull/6366 |
| |
| Fix install jira-python |
| Davies Liu <davies@databricks.com> |
| 2015-05-23 09:14:07 -0700 |
| Commit: a4df0f2, github.com/apache/spark/pull/6367 |
| |
| [SPARK-7840] add insertInto() to Writer |
| Davies Liu <davies@databricks.com> |
| 2015-05-23 09:07:14 -0700 |
| Commit: be47af1, github.com/apache/spark/pull/6375 |
| |
| [SPARK-7322, SPARK-7836, SPARK-7822][SQL] DataFrame window function related updates |
| Davies Liu <davies@databricks.com>, Reynold Xin <rxin@databricks.com> |
| 2015-05-23 08:30:05 -0700 |
| Commit: efe3bfd, github.com/apache/spark/pull/6374 |
| |
| [SPARK-7777][Streaming] Handle the case when there is no block in a batch |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-23 02:11:17 -0700 |
| Commit: ad0badb, github.com/apache/spark/pull/6372 |
| |
| [SPARK-6811] Copy SparkR lib in make-distribution.sh |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-23 00:04:01 -0700 |
| Commit: a40bca0, github.com/apache/spark/pull/6373 |
| |
| [SPARK-6806] [SPARKR] [DOCS] Fill in SparkR examples in programming guide |
| Davies Liu <davies@databricks.com> |
| 2015-05-23 00:00:30 -0700 |
| Commit: 7af3818, github.com/apache/spark/pull/5442 |
| |
| [SPARK-5090] [EXAMPLES] The improvement of python converter for hbase |
| GenTang <gen.tang86@gmail.com> |
| 2015-05-22 23:37:03 -0700 |
| Commit: 4583cf4, github.com/apache/spark/pull/3920 |
| |
| [HOTFIX] Add tests for SparkListenerApplicationStart with Driver Logs. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-22 23:07:56 -0700 |
| Commit: 368b8c2, github.com/apache/spark/pull/6368 |
| |
| [SPARK-7838] [STREAMING] Set scope for kinesis stream |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-22 23:05:54 -0700 |
| Commit: baa8983, github.com/apache/spark/pull/6369 |
| |
| [MINOR] Add SparkR to create-release script |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-22 22:33:49 -0700 |
| Commit: 017b340, github.com/apache/spark/pull/6371 |
| |
| [SPARK-7795] [CORE] Speed up task scheduling in standalone mode by reusing serializer |
| Akshat Aranya <aaranya@quantcast.com> |
| 2015-05-22 22:03:31 -0700 |
| Commit: a163574, github.com/apache/spark/pull/6323 |
| |
| [SPARK-7830] [DOCS] [MLLIB] Adding logistic regression to the list of Multiclass Classification Supported Methods documentation |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-22 18:03:12 -0700 |
| Commit: 63a5ce7, github.com/apache/spark/pull/6357 |
| |
| [SPARK-7224] [SPARK-7306] mock repository generator for --packages tests without nio.Path |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-22 17:48:09 -0700 |
| Commit: 8014e1f, github.com/apache/spark/pull/5892 |
| |
| [SPARK-7788] Made KinesisReceiver.onStart() non-blocking |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-22 17:39:01 -0700 |
| Commit: 1c388a9, github.com/apache/spark/pull/6348 |
| |
| [SPARK-7771] [SPARK-7779] Dynamic allocation: lower default timeouts further |
| Andrew Or <andrew@databricks.com> |
| 2015-05-22 17:37:38 -0700 |
| Commit: 3d8760d, github.com/apache/spark/pull/6301 |
| |
| [SPARK-7834] [SQL] Better window error messages |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-22 17:23:12 -0700 |
| Commit: 3c13051, github.com/apache/spark/pull/6363 |
| |
| [SPARK-7760] add /json back into master & worker pages; add test |
| Imran Rashid <irashid@cloudera.com> |
| 2015-05-22 16:05:07 -0700 |
| Commit: 821254f, github.com/apache/spark/pull/6284 |
| |
| [SPARK-7270] [SQL] Consider dynamic partition when inserting into hive table |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-22 15:39:58 -0700 |
| Commit: 126d723, github.com/apache/spark/pull/5864 |
| |
| [SPARK-7724] [SQL] Support Intersect/Except in Catalyst DSL. |
| Santiago M. Mola <santi@mola.io> |
| 2015-05-22 15:10:27 -0700 |
| Commit: e4aef91, github.com/apache/spark/pull/6327 |
| |
| [SPARK-7758] [SQL] Override more configs to avoid failure when connect to a postgre sql |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-05-22 14:43:16 -0700 |
| Commit: 31d5d46, github.com/apache/spark/pull/6314 |
| |
| [SPARK-7766] KryoSerializerInstance reuse is unsafe when auto-reset is disabled |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-22 13:28:14 -0700 |
| Commit: eac0069, github.com/apache/spark/pull/6293 |
| |
| [SPARK-7574] [ML] [DOC] User guide for OneVsRest |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-22 13:18:08 -0700 |
| Commit: 509d55a, github.com/apache/spark/pull/6296 |
| |
| Revert "[BUILD] Always run SQL tests in master build." |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-22 10:04:45 -0700 |
| Commit: c63036c |
| |
| [SPARK-7404] [ML] Add RegressionEvaluator to spark.ml |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-22 09:59:44 -0700 |
| Commit: f490b3b, github.com/apache/spark/pull/6344 |
| |
| [SPARK-6743] [SQL] Fix empty projections of cached data |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-22 09:43:46 -0700 |
| Commit: 3b68cb0, github.com/apache/spark/pull/6165 |
| |
| [MINOR] [SQL] Ignores Thrift server UISeleniumSuite |
| Cheng Lian <lian@databricks.com> |
| 2015-05-22 16:25:52 +0800 |
| Commit: 4e5220c, github.com/apache/spark/pull/6345 |
| |
| [SPARK-7322][SQL] Window functions in DataFrame |
| Cheng Hao <hao.cheng@intel.com>, Reynold Xin <rxin@databricks.com> |
| 2015-05-22 01:00:16 -0700 |
| Commit: f6f2eeb, github.com/apache/spark/pull/6343 |
| |
| [SPARK-7578] [ML] [DOC] User guide for spark.ml Normalizer, IDF, StandardScaler |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-21 22:59:45 -0700 |
| Commit: 2728c3d, github.com/apache/spark/pull/6127 |
| |
| [SPARK-7535] [.0] [MLLIB] Audit the pipeline APIs for 1.4 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-21 22:57:33 -0700 |
| Commit: 8f11c61, github.com/apache/spark/pull/6322 |
| |
| [DOCS] [MLLIB] Fixing broken link in MLlib Linear Methods documentation. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-21 19:05:04 -0700 |
| Commit: e4136ea, github.com/apache/spark/pull/6340 |
| |
| [SPARK-7657] [YARN] Add driver logs links in application UI, in cluster mode. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-21 20:24:28 -0500 |
| Commit: 956c4c9, github.com/apache/spark/pull/6166 |
| |
| [SPARK-7219] [MLLIB] Output feature attributes in HashingTF |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-21 18:04:45 -0700 |
| Commit: 85b9637, github.com/apache/spark/pull/6308 |
| |
| [SPARK-7794] [MLLIB] update RegexTokenizer default settings |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-21 17:59:03 -0700 |
| Commit: f5db4b4, github.com/apache/spark/pull/6330 |
| |
| [SPARK-7783] [SQL] [PySpark] add DataFrame.rollup/cube in Python |
| Davies Liu <davies@databricks.com> |
| 2015-05-21 17:43:08 -0700 |
| Commit: 17791a5, github.com/apache/spark/pull/6311 |
| |
| [SPARK-7776] [STREAMING] Added shutdown hook to StreamingContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-21 17:41:31 -0700 |
| Commit: d68ea24, github.com/apache/spark/pull/6307 |
| |
| [SPARK-7737] [SQL] Use leaf dirs having data files to discover partitions. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-22 07:10:26 +0800 |
| Commit: 347b501, github.com/apache/spark/pull/6329 |
| |
| [BUILD] Always run SQL tests in master build. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-21 15:40:58 -0700 |
| Commit: 147b6be, github.com/apache/spark/pull/5955 |
| |
| [SPARK-7800] isDefined should not marked too early in putNewKey |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-21 23:12:00 +0100 |
| Commit: 5a3c04b, github.com/apache/spark/pull/6324 |
| |
| [SPARK-7718] [SQL] Speed up partitioning by avoiding closure cleaning |
| Andrew Or <andrew@databricks.com> |
| 2015-05-21 14:33:11 -0700 |
| Commit: 5287eec, github.com/apache/spark/pull/6256 |
| |
| [SPARK-7711] Add a startTime property to match the corresponding one in Scala |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-05-21 14:08:57 -0700 |
| Commit: 6b18cdc, github.com/apache/spark/pull/6275 |
| |
| [SPARK-7478] [SQL] Added SQLContext.getOrCreate |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-21 14:08:20 -0700 |
| Commit: 3d0cccc, github.com/apache/spark/pull/6006 |
| |
| [SPARK-7763] [SPARK-7616] [SQL] Persists partition columns into metastore |
| Yin Huai <yhuai@databricks.com>, Cheng Lian <lian@databricks.com> |
| 2015-05-21 13:51:40 -0700 |
| Commit: 30f3f55, github.com/apache/spark/pull/6285 |
| |
| [SPARK-7722] [STREAMING] Added Kinesis to style checker |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-21 13:50:08 -0700 |
| Commit: 311fab6, github.com/apache/spark/pull/6325 |
| |
| [SPARK-7498] [MLLIB] add varargs back to setDefault |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-21 13:06:53 -0700 |
| Commit: cdc7c05, github.com/apache/spark/pull/6320 |
| |
| [SPARK-7585] [ML] [DOC] VectorIndexer user guide section |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-21 13:05:48 -0700 |
| Commit: 6d75ed7, github.com/apache/spark/pull/6255 |
| |
| [SPARK-7775] YARN AM negative sleep exception |
| Andrew Or <andrew@databricks.com> |
| 2015-05-21 20:34:20 +0100 |
| Commit: 15680ae, github.com/apache/spark/pull/6305 |
| |
| [SQL] [TEST] udf_java_method failed due to jdk version |
| scwf <wangfei1@huawei.com> |
| 2015-05-21 12:31:58 -0700 |
| Commit: f6c486a, github.com/apache/spark/pull/6274 |
| |
| [SPARK-7793] [MLLIB] Use getOrElse for getting the threshold of SVM model |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-05-21 12:09:44 -0700 |
| Commit: 4f57200, github.com/apache/spark/pull/6321 |
| |
| [SPARK-7394][SQL] Add Pandas style cast (astype) |
| kaka1992 <kaka_1992@163.com> |
| 2015-05-21 11:50:39 -0700 |
| Commit: 699906e, github.com/apache/spark/pull/6313 |
| |
| [SPARK-6416] [DOCS] RDD.fold() requires the operator to be commutative |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-21 19:42:51 +0100 |
| Commit: 6e53402, github.com/apache/spark/pull/6231 |
| |
| [SPARK-7787] [STREAMING] Fix serialization issue of SerializableAWSCredentials |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-21 11:39:32 -0700 |
| Commit: 4b7ff30, github.com/apache/spark/pull/6316 |
| |
| [SPARK-7749] [SQL] Fixes partition discovery for non-partitioned tables |
| Cheng Lian <lian@databricks.com>, Yin Huai <yhuai@databricks.com> |
| 2015-05-21 10:56:17 -0700 |
| Commit: 8730fbb, github.com/apache/spark/pull/6287 |
| |
| [SPARK-7752] [MLLIB] Use lowercase letters for NaiveBayes.modelType |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-21 10:30:08 -0700 |
| Commit: 13348e2, github.com/apache/spark/pull/6277 |
| |
| [SPARK-7565] [SQL] fix MapType in JsonRDD |
| Davies Liu <davies@databricks.com> |
| 2015-05-21 09:58:47 -0700 |
| Commit: a25c1ab, github.com/apache/spark/pull/6084 |
| |
| [SPARK-7320] [SQL] [Minor] Move the testData into beforeAll() |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-21 09:28:00 -0700 |
| Commit: feb3a9d, github.com/apache/spark/pull/6312 |
| |
| [SPARK-7745] Change asserts to requires for user input checks in Spark Streaming |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-21 00:30:55 -0700 |
| Commit: 1ee8eb4, github.com/apache/spark/pull/6271 |
| |
| [SPARK-7753] [MLLIB] Update KernelDensity API |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-20 23:38:58 -0700 |
| Commit: 947ea1c, github.com/apache/spark/pull/6279 |
| |
| [SPARK-7606] [SQL] [PySpark] add version to Python SQL API docs |
| Davies Liu <davies@databricks.com> |
| 2015-05-20 23:05:54 -0700 |
| Commit: 8ddcb25, github.com/apache/spark/pull/6295 |
| |
| [SPARK-7389] [CORE] Tachyon integration improvement |
| Mingfei <mingfei.shi@intel.com> |
| 2015-05-20 22:33:03 -0700 |
| Commit: 04940c4, github.com/apache/spark/pull/5908 |
| |
| [SPARK-7746][SQL] Add FetchSize parameter for JDBC driver |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-20 22:23:49 -0700 |
| Commit: d0eb9ff, github.com/apache/spark/pull/6283 |
| |
| [SPARK-7774] [MLLIB] add sqlContext to MLlibTestSparkContext |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-20 20:30:39 -0700 |
| Commit: ddec173, github.com/apache/spark/pull/6303 |
| |
| [SPARK-7320] [SQL] Add Cube / Rollup for dataframe |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-20 19:58:22 -0700 |
| Commit: 42c592a, github.com/apache/spark/pull/6304 |
| |
| [SPARK-7777] [STREAMING] Fix the flaky test in org.apache.spark.streaming.BasicOperationsSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-20 19:56:01 -0700 |
| Commit: 895baf8, github.com/apache/spark/pull/6306 |
| |
| [SPARK-7750] [WEBUI] Rename endpoints from `json` to `api` to allow fu… |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-20 21:13:10 -0500 |
| Commit: a70bf06, github.com/apache/spark/pull/6273 |
| |
| [SPARK-7719] Re-add UnsafeShuffleWriterSuite test that was removed for Java 6 compat |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-20 17:52:50 -0700 |
| Commit: 5196eff, github.com/apache/spark/pull/6298 |
| |
| [SPARK-7762] [MLLIB] set default value for outputCol |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-20 17:26:26 -0700 |
| Commit: c330e52, github.com/apache/spark/pull/6289 |
| |
| [SPARK-7251] Perform sequential scan when iterating over BytesToBytesMap |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-20 16:42:49 -0700 |
| Commit: f2faa7a, github.com/apache/spark/pull/6159 |
| |
| [SPARK-7698] Cache and reuse buffers in ExecutorMemoryAllocator when using heap allocation |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-20 16:37:11 -0700 |
| Commit: 7956dd7, github.com/apache/spark/pull/6227 |
| |
| [SPARK-7767] [STREAMING] Added test for checkpoint serialization in StreamingContext.start() |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-20 16:21:23 -0700 |
| Commit: 3c434cb, github.com/apache/spark/pull/6292 |
| |
| [SPARK-7237] [SPARK-7741] [CORE] [STREAMING] Clean more closures that need cleaning |
| Andrew Or <andrew@databricks.com> |
| 2015-05-20 15:39:32 -0700 |
| Commit: 9b84443, github.com/apache/spark/pull/6269 |
| |
| [SPARK-7511] [MLLIB] pyspark ml seed param should be random by default or 42 is quite funny but not very random |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-05-20 15:16:12 -0700 |
| Commit: 191ee47, github.com/apache/spark/pull/6139 |
| |
| Revert "[SPARK-7320] [SQL] Add Cube / Rollup for dataframe" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-20 13:39:04 -0700 |
| Commit: 6338c40 |
| |
| [SPARK-7579] [ML] [DOC] User guide update for OneHotEncoder |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-20 13:10:30 -0700 |
| Commit: 829f1d9, github.com/apache/spark/pull/6126 |
| |
| [SPARK-7537] [MLLIB] spark.mllib API updates |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-20 12:50:06 -0700 |
| Commit: 2ad4837, github.com/apache/spark/pull/6280 |
| |
| [SPARK-7713] [SQL] Use shared broadcast hadoop conf for partitioned table scan. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-20 11:23:40 -0700 |
| Commit: b631bf7, github.com/apache/spark/pull/6252 |
| |
| [SPARK-6094] [MLLIB] Add MultilabelMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-20 07:55:51 -0700 |
| Commit: 98a46f9, github.com/apache/spark/pull/6276 |
| |
| [SPARK-7654] [MLLIB] Migrate MLlib to the DataFrame reader/writer API |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-20 07:46:17 -0700 |
| Commit: 589b12f, github.com/apache/spark/pull/6281 |
| |
| [SPARK-7533] [YARN] Decrease spacing between AM-RM heartbeats. |
| ehnalis <zoltan.zvara@gmail.com> |
| 2015-05-20 08:27:39 -0500 |
| Commit: 3ddf051, github.com/apache/spark/pull/6082 |
| |
| [SPARK-7320] [SQL] Add Cube / Rollup for dataframe |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-20 19:09:47 +0800 |
| Commit: 09265ad, github.com/apache/spark/pull/6257 |
| |
| [SPARK-7663] [MLLIB] Add requirement for word2vec model |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-05-20 10:41:18 +0100 |
| Commit: b3abf0b, github.com/apache/spark/pull/6228 |
| |
| [SPARK-7656] [SQL] use CatalystConf in FunctionRegistry |
| scwf <wangfei1@huawei.com> |
| 2015-05-19 17:36:00 -0700 |
| Commit: 60336e3, github.com/apache/spark/pull/6164 |
| |
| [SPARK-7744] [DOCS] [MLLIB] "Distributed matrix" section in MLlib "Data Types" documentation should be reordered. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-19 17:18:08 -0700 |
| Commit: 3860520, github.com/apache/spark/pull/6270 |
| |
| [SPARK-6246] [EC2] fixed support for more than 100 nodes |
| alyaxey <oleksii.sliusarenko@grammarly.com> |
| 2015-05-19 16:45:52 -0700 |
| Commit: 2bc5e06, github.com/apache/spark/pull/6267 |
| |
| [SPARK-7662] [SQL] Resolve correct names for generator in projection |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-19 15:20:46 -0700 |
| Commit: bcb1ff8, github.com/apache/spark/pull/6178 |
| |
| [SPARK-7738] [SQL] [PySpark] add reader and writer API in Python |
| Davies Liu <davies@databricks.com> |
| 2015-05-19 14:23:28 -0700 |
| Commit: 4de74d2, github.com/apache/spark/pull/6238 |
| |
| [SPARK-7652] [MLLIB] Update the implementation of naive Bayes prediction with BLAS |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-19 13:53:08 -0700 |
| Commit: c12dff9, github.com/apache/spark/pull/6189 |
| |
| [SPARK-7586] [ML] [DOC] Add docs of Word2Vec in ml package |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-05-19 13:43:48 -0700 |
| Commit: 68fb2a4, github.com/apache/spark/pull/6181 |
| |
| [SPARK-7726] Fix Scaladoc false errors |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-05-19 12:14:48 -0700 |
| Commit: 3c4c1f9, github.com/apache/spark/pull/6260 |
| |
| [SPARK-7678] [ML] Fix default random seed in HasSeed |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-19 10:57:47 -0700 |
| Commit: 7b16e9f, github.com/apache/spark/pull/6251 |
| |
| [SPARK-7047] [ML] ml.Model optional parent support |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-19 10:55:21 -0700 |
| Commit: fb90273, github.com/apache/spark/pull/5914 |
| |
| [SPARK-7704] Updating Programming Guides per SPARK-4397 |
| Dice <poleon.kd@gmail.com> |
| 2015-05-19 18:12:05 +0100 |
| Commit: 32fa611, github.com/apache/spark/pull/6234 |
| |
| [SPARK-7681] [MLLIB] remove mima excludes for 1.3 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-19 08:24:57 -0700 |
| Commit: 6845cb2, github.com/apache/spark/pull/6254 |
| |
| [SPARK-7723] Fix string interpolation in pipeline examples |
| Saleem Ansari <tuxdna@gmail.com> |
| 2015-05-19 10:31:11 +0100 |
| Commit: df34793, github.com/apache/spark/pull/6258 |
| |
| [HOTFIX] Revert "[SPARK-7092] Update spark scala version to 2.11.6" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 02:28:41 -0700 |
| Commit: 27fa88b |
| |
| Fixing a few basic typos in the Programming Guide. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-19 08:59:45 +0100 |
| Commit: 61f164d, github.com/apache/spark/pull/6240 |
| |
| [SPARK-7581] [ML] [DOC] User guide for spark.ml PolynomialExpansion |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-05-19 00:06:33 -0700 |
| Commit: 6008ec1, github.com/apache/spark/pull/6113 |
| |
| [HOTFIX] Fixing style failures in Kinesis source |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 00:02:06 -0700 |
| Commit: 23cf897 |
| |
| [HOTFIX]: Java 6 Build Breaks |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 06:00:13 +0000 |
| Commit: 9ebb44f |
| |
| [SPARK-7687] [SQL] DataFrame.describe() should cast all aggregates to String |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-18 21:53:44 -0700 |
| Commit: c9fa870, github.com/apache/spark/pull/6218 |
| |
| [SPARK-7150] SparkContext.range() and SQLContext.range() |
| Daoyuan Wang <daoyuan.wang@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-05-18 21:43:12 -0700 |
| Commit: c2437de, github.com/apache/spark/pull/6081 |
| |
| [SPARK-7681] [MLLIB] Add SparseVector support for gemv |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-18 21:32:36 -0700 |
| Commit: d03638c, github.com/apache/spark/pull/6209 |
| |
| [SPARK-7692] Updated Kinesis examples |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-18 18:24:15 -0700 |
| Commit: 3a60038, github.com/apache/spark/pull/6249 |
| |
| [SPARK-7621] [STREAMING] Report Kafka errors to StreamingListeners |
| jerluc <jeremyalucas@gmail.com> |
| 2015-05-18 18:13:29 -0700 |
| Commit: 0a7a94e, github.com/apache/spark/pull/6204 |
| |
| [SPARK-7624] Revert #4147 |
| Davies Liu <davies@databricks.com> |
| 2015-05-18 16:55:45 -0700 |
| Commit: 4fb52f9, github.com/apache/spark/pull/6172 |
| |
| [SQL] Fix serializability of ORC table scan |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-18 15:24:31 -0700 |
| Commit: eb4632f, github.com/apache/spark/pull/6247 |
| |
| [SPARK-7063] when lz4 compression is used, it causes core dump |
| Jihong MA <linlin200605@gmail.com> |
| 2015-05-18 22:47:50 +0100 |
| Commit: 6525fc0, github.com/apache/spark/pull/6226 |
| |
| [SPARK-7501] [STREAMING] DAG visualization: show DStream operations |
| Andrew Or <andrew@databricks.com> |
| 2015-05-18 14:33:33 -0700 |
| Commit: b93c97d, github.com/apache/spark/pull/6034 |
| |
| [HOTFIX] Fix ORC build break |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-18 14:04:04 -0700 |
| Commit: fcf90b7, github.com/apache/spark/pull/6244 |
| |
| [SPARK-7658] [STREAMING] [WEBUI] Update the mouse behaviors for the timeline graphs |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-18 13:34:43 -0700 |
| Commit: 0b6f503, github.com/apache/spark/pull/6168 |
| |
| [SPARK-6216] [PYSPARK] check python version of worker with driver |
| Davies Liu <davies@databricks.com> |
| 2015-05-18 12:55:13 -0700 |
| Commit: 32fbd29, github.com/apache/spark/pull/6203 |
| |
| [SPARK-7673] [SQL] WIP: HadoopFsRelation and ParquetRelation2 performance optimizations |
| Cheng Lian <lian@databricks.com> |
| 2015-05-18 12:45:37 -0700 |
| Commit: 9dadf01, github.com/apache/spark/pull/6225 |
| |
| [SPARK-7567] [SQL] [follow-up] Use a new flag to set output committer based on mapreduce apis |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-18 12:17:10 -0700 |
| Commit: 530397b, github.com/apache/spark/pull/6130 |
| |
| [SPARK-7269] [SQL] Incorrect analysis for aggregation(use semanticEquals) |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-18 12:08:28 -0700 |
| Commit: 103c863, github.com/apache/spark/pull/6173 |
| |
| [SPARK-7631] [SQL] treenode argString should not print children |
| scwf <wangfei1@huawei.com> |
| 2015-05-18 12:05:14 -0700 |
| Commit: fc2480e, github.com/apache/spark/pull/6144 |
| |
| [SPARK-2883] [SQL] ORC data source for Spark SQL |
| Zhan Zhang <zhazhan@gmail.com>, Cheng Lian <lian@databricks.com> |
| 2015-05-18 12:03:27 -0700 |
| Commit: aa31e43, github.com/apache/spark/pull/6194 |
| |
| [SPARK-7380] [MLLIB] pipeline stages should be copyable in Python |
| Xiangrui Meng <meng@databricks.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-18 12:02:18 -0700 |
| Commit: 9c7e802, github.com/apache/spark/pull/6088 |
| |
| [SQL] [MINOR] [THIS] use private for internal field in ScalaUdf |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-18 12:01:30 -0700 |
| Commit: 56ede88, github.com/apache/spark/pull/6235 |
| |
| [SPARK-7570] [SQL] Ignores _temporary during partition discovery |
| Cheng Lian <lian@databricks.com> |
| 2015-05-18 11:59:44 -0700 |
| Commit: 010a1c2, github.com/apache/spark/pull/6091 |
| |
| [SPARK-6888] [SQL] Make the jdbc driver handling user-definable |
| Rene Treffer <treffer@measite.de> |
| 2015-05-18 11:55:36 -0700 |
| Commit: e1ac2a9, github.com/apache/spark/pull/5555 |
| |
| [SPARK-7627] [SPARK-7472] DAG visualization: style skipped stages |
| Andrew Or <andrew@databricks.com> |
| 2015-05-18 10:59:35 -0700 |
| Commit: 563bfcc, github.com/apache/spark/pull/6171 |
| |
| [SPARK-7272] [MLLIB] User guide for PMML model export |
| Vincenzo Selvaggio <vselvaggio@hotmail.it> |
| 2015-05-18 08:46:33 -0700 |
| Commit: 814b3da, github.com/apache/spark/pull/6219 |
| |
| [SPARK-6657] [PYSPARK] Fix doc warnings |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-18 08:35:14 -0700 |
| Commit: 1ecfac6, github.com/apache/spark/pull/6221 |
| |
| [SPARK-7299][SQL] Set precision and scale for Decimal according to JDBC metadata instead of returned BigDecimal |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-18 01:10:55 -0700 |
| Commit: e32c0f6, github.com/apache/spark/pull/5833 |
| |
| [SPARK-7694] [MLLIB] Use getOrElse for getting the threshold of LR model |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-05-17 21:16:52 -0700 |
| Commit: 775e6f9, github.com/apache/spark/pull/6224 |
| |
| [SPARK-7693][Core] Remove "import scala.concurrent.ExecutionContext.Implicits.global" |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-17 20:37:19 -0700 |
| Commit: ff71d34, github.com/apache/spark/pull/6223 |
| |
| [SQL] [MINOR] use catalyst type converter in ScalaUdf |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-17 16:51:57 -0700 |
| Commit: 2f22424, github.com/apache/spark/pull/6182 |
| |
| [SPARK-6514] [SPARK-5960] [SPARK-6656] [SPARK-7679] [STREAMING] [KINESIS] Updates to the Kinesis API |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-17 16:49:07 -0700 |
| Commit: ca4257a, github.com/apache/spark/pull/6147 |
| |
| [SPARK-7491] [SQL] Allow configuration of classloader isolation for hive |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-17 12:43:15 -0700 |
| Commit: 2ca60ac, github.com/apache/spark/pull/6167 |
| |
| [SPARK-7686] [SQL] DescribeCommand is assigned wrong output attributes in SparkStrategies |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-17 11:59:28 -0700 |
| Commit: 5645628, github.com/apache/spark/pull/6217 |
| |
| [SPARK-7660] Wrap SnappyOutputStream to work around snappy-java bug |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-17 09:30:49 -0700 |
| Commit: f2cc6b5, github.com/apache/spark/pull/6176 |
| |
| [SPARK-7669] Builds against Hadoop 2.6+ get inconsistent curator depend… |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-05-17 17:03:11 +0100 |
| Commit: 5021766, github.com/apache/spark/pull/6191 |
| |
| [SPARK-7447] [SQL] Don't re-merge Parquet schema when the relation is deserialized |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-17 15:42:21 +0800 |
| Commit: 3399055, github.com/apache/spark/pull/6012 |
| |
| [SQL] [MINOR] Skip unresolved expression for InConversion |
| scwf <wangfei1@huawei.com> |
| 2015-05-17 15:17:11 +0800 |
| Commit: edf09ea, github.com/apache/spark/pull/6145 |
| |
| [MINOR] Add 1.3, 1.3.1 to master branch EC2 scripts |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-17 00:12:20 -0700 |
| Commit: 1a7b9ce, github.com/apache/spark/pull/6215 |
| |
| [MINOR] [SQL] Removes an unreachable case clause |
| Cheng Lian <lian@databricks.com> |
| 2015-05-16 23:20:09 -0700 |
| Commit: ba4f8ca, github.com/apache/spark/pull/6214 |
| |
| [SPARK-7654][SQL] Move JDBC into DataFrame's reader/writer interface. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-16 22:01:53 -0700 |
| Commit: 517eb37, github.com/apache/spark/pull/6210 |
| |
| [SPARK-7655][Core] Deserializing value should not hold the TaskSchedulerImpl lock |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-16 21:03:22 -0700 |
| Commit: 3b6ef2c, github.com/apache/spark/pull/6195 |
| |
| [SPARK-7654][MLlib] Migrate MLlib to the DataFrame reader/writer API. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-16 15:03:57 -0700 |
| Commit: 161d0b4, github.com/apache/spark/pull/6211 |
| |
| [BUILD] update jblas dependency version to 1.2.4 |
| Matthew Brandyberry <mbrandy@us.ibm.com> |
| 2015-05-16 18:17:48 +0100 |
| Commit: 1b4e710, github.com/apache/spark/pull/6199 |
| |
| [HOTFIX] [SQL] Fixes DataFrameWriter.mode(String) |
| Cheng Lian <lian@databricks.com> |
| 2015-05-16 20:55:10 +0800 |
| Commit: ce63912, github.com/apache/spark/pull/6212 |
| |
| [SPARK-7655][Core][SQL] Remove 'scala.concurrent.ExecutionContext.Implicits.global' in 'ask' and 'BroadcastHashJoin' |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-16 00:44:29 -0700 |
| Commit: 47e7ffe, github.com/apache/spark/pull/6200 |
| |
| [SPARK-7672] [CORE] Use int conversion in translating kryoserializer.buffer.mb to kryoserializer.buffer |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-05-16 08:24:21 +0100 |
| Commit: 0ac8b01, github.com/apache/spark/pull/6198 |
| |
| [SPARK-4556] [BUILD] binary distribution assembly can't run in local mode |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-16 08:18:41 +0100 |
| Commit: 1fd3381, github.com/apache/spark/pull/6186 |
| |
| [SPARK-7671] Fix wrong URLs in MLlib Data Types Documentation |
| FavioVazquez <favio.vazquezp@gmail.com> |
| 2015-05-16 08:07:03 +0100 |
| Commit: d41ae43, github.com/apache/spark/pull/6196 |
| |
| [SPARK-7654][SQL] DataFrameReader and DataFrameWriter for input/output API |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-15 22:00:31 -0700 |
| Commit: 578bfee, github.com/apache/spark/pull/6175 |
| |
| [SPARK-7473] [MLLIB] Add reservoir sample in RandomForest |
| AiHe <ai.he@ussuning.com> |
| 2015-05-15 20:42:35 -0700 |
| Commit: deb4113, github.com/apache/spark/pull/5988 |
| |
| [SPARK-7543] [SQL] [PySpark] split dataframe.py into multiple files |
| Davies Liu <davies@databricks.com> |
| 2015-05-15 20:09:15 -0700 |
| Commit: d7b6994, github.com/apache/spark/pull/6201 |
| |
| [SPARK-7073] [SQL] [PySpark] Clean up SQL data type hierarchy in Python |
| Davies Liu <davies@databricks.com> |
| 2015-05-15 20:05:26 -0700 |
| Commit: adfd366, github.com/apache/spark/pull/6206 |
| |
| [SPARK-7575] [ML] [DOC] Example code for OneVsRest |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-15 19:33:20 -0700 |
| Commit: cc12a86, github.com/apache/spark/pull/6115 |
| |
| [SPARK-7563] OutputCommitCoordinator.stop() should only run on the driver |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-15 18:06:01 -0700 |
| Commit: 2c04c8a, github.com/apache/spark/pull/6197 |
| |
| [SPARK-7676] Bug fix and cleanup of stage timeline view |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-15 17:45:14 -0700 |
| Commit: e745456, github.com/apache/spark/pull/6202 |
| |
| [SPARK-7556] [ML] [DOC] Add user guide for spark.ml Binarizer, including Scala, Java and Python examples |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-15 15:05:04 -0700 |
| Commit: c869633, github.com/apache/spark/pull/6116 |
| |
| [SPARK-7677] [STREAMING] Add Kafka modules to the 2.11 build. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-05-15 14:57:29 -0700 |
| Commit: 6e77105, github.com/apache/spark/pull/6149 |
| |
| [SPARK-7226] [SPARKR] Support math functions in R DataFrame |
| qhuang <qian.huang@intel.com> |
| 2015-05-15 14:06:16 -0700 |
| Commit: 50da9e8, github.com/apache/spark/pull/6170 |
| |
| [SPARK-7296] Add timeline visualization for stages in the UI. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 13:54:09 -0700 |
| Commit: 9b6cf28, github.com/apache/spark/pull/5843 |
| |
| [SPARK-7504] [YARN] NullPointerException when initializing SparkContext in YARN-cluster mode |
| ehnalis <zoltan.zvara@gmail.com> |
| 2015-05-15 12:14:02 -0700 |
| Commit: 8e3822a, github.com/apache/spark/pull/6083 |
| |
| [SPARK-7664] [WEBUI] DAG visualization: Fix incorrect link paths of DAG. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 11:54:13 -0700 |
| Commit: ad92af9, github.com/apache/spark/pull/6184 |
| |
| [SPARK-5412] [DEPLOY] Cannot bind Master to a specific hostname as per the documentation |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-15 11:30:19 -0700 |
| Commit: 8ab1450, github.com/apache/spark/pull/6185 |
| |
| [CORE] Protect additional test vars from early GC |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-15 11:27:24 -0700 |
| Commit: 270d4b5, github.com/apache/spark/pull/6187 |
| |
| [SPARK-7233] [CORE] Detect REPL mode once |
| Oleksii Kostyliev <etander@gmail.com>, Oleksii Kostyliev <okostyliev@thunderhead.com> |
| 2015-05-15 11:19:56 -0700 |
| Commit: b1b9d58, github.com/apache/spark/pull/5835 |
| |
| [SPARK-7651] [MLLIB] [PYSPARK] GMM predict, predictSoft should raise error on bad input |
| FlytxtRnD <meethu.mathew@flytxt.com> |
| 2015-05-15 10:43:18 -0700 |
| Commit: 8f4aaba, github.com/apache/spark/pull/6180 |
| |
| [SPARK-7668] [MLLIB] Preserve isTransposed property for Matrix after calling map function |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-15 10:03:29 -0700 |
| Commit: f96b85a, github.com/apache/spark/pull/6188 |
| |
| [SPARK-7503] [YARN] Resources in .sparkStaging directory can't be cleaned up on error |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 11:37:34 +0100 |
| Commit: c64ff80, github.com/apache/spark/pull/6026 |
| |
| [SPARK-7591] [SQL] Partitioning support API tweaks |
| Cheng Lian <lian@databricks.com> |
| 2015-05-15 16:20:49 +0800 |
| Commit: fdf5bba, github.com/apache/spark/pull/6150 |
| |
| [SPARK-6258] [MLLIB] GaussianMixture Python API parity check |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-15 00:18:39 -0700 |
| Commit: 9476148, github.com/apache/spark/pull/6087 |
| |
| [SPARK-7650] [STREAMING] [WEBUI] Move streaming css and js files to the streaming project |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 23:51:41 -0700 |
| Commit: cf842d4, github.com/apache/spark/pull/6160 |
| |
| [CORE] Remove unreachable Heartbeat message from Worker |
| Kan Zhang <kzhang@apache.org> |
| 2015-05-14 23:50:50 -0700 |
| Commit: daf4ae7, github.com/apache/spark/pull/6163 |
| |
| [HOTFIX] Add workaround for SPARK-7660 to fix JavaAPISuite failures. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-14 23:17:41 -0700 |
| Commit: 7da33ce |
| |
| [SQL] When creating partitioned table scan, explicitly create UnionRDD. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-15 12:04:26 +0800 |
| Commit: e8f0e01, github.com/apache/spark/pull/6162 |
| |
| [SPARK-7098][SQL] Make the WHERE clause with timestamp show consistent result |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-14 20:49:21 -0700 |
| Commit: f9705d4, github.com/apache/spark/pull/5682 |
| |
| [SPARK-7548] [SQL] Add explode function for DataFrames |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-14 19:49:44 -0700 |
| Commit: 6d0633e, github.com/apache/spark/pull/6107 |
| |
| [SPARK-7619] [PYTHON] fix docstring signature |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 18:16:22 -0700 |
| Commit: 48fc38f, github.com/apache/spark/pull/6161 |
| |
| [SPARK-7648] [MLLIB] Add weights and intercept to GLM wrappers in spark.ml |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 18:13:58 -0700 |
| Commit: 723853e, github.com/apache/spark/pull/6156 |
| |
| [SPARK-7645] [STREAMING] [WEBUI] Show milliseconds in the UI if the batch interval < 1 second |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 16:58:36 -0700 |
| Commit: b208f99, github.com/apache/spark/pull/6154 |
| |
| [SPARK-7649] [STREAMING] [WEBUI] Use window.localStorage to store the status rather than the url |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 16:57:33 -0700 |
| Commit: 0a317c1, github.com/apache/spark/pull/6158 |
| |
| [SPARK-7643] [UI] use the correct size in RDDPage for storage info and partitions |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 16:56:32 -0700 |
| Commit: 57ed16c, github.com/apache/spark/pull/6157 |
| |
| [SPARK-7598] [DEPLOY] Add aliveWorkers metrics in Master |
| Rex Xiong <pengx@microsoft.com> |
| 2015-05-14 16:55:31 -0700 |
| Commit: 93dbb3a, github.com/apache/spark/pull/6117 |
| |
| Make SPARK prefix a variable |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-14 15:26:35 -0700 |
| Commit: 11a1a13, github.com/apache/spark/pull/6153 |
| |
| [SPARK-7278] [PySpark] DateType should find datetime.datetime acceptable |
| ksonj <kson@siberie.de> |
| 2015-05-14 15:10:58 -0700 |
| Commit: 5d7d4f8, github.com/apache/spark/pull/6057 |
| |
| [SQL][minor] rename apply for QueryPlanner |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-14 10:25:18 -0700 |
| Commit: f2cd00b, github.com/apache/spark/pull/6142 |
| |
| [SPARK-7249] Updated Hadoop dependencies due to inconsistency in the versions |
| FavioVazquez <favio.vazquezp@gmail.com> |
| 2015-05-14 15:22:58 +0100 |
| Commit: 7fb715d, github.com/apache/spark/pull/5786 |
| |
| [SPARK-7568] [ML] ml.LogisticRegression doesn't output the right prediction |
| DB Tsai <dbt@netflix.com> |
| 2015-05-14 01:26:08 -0700 |
| Commit: c1080b6, github.com/apache/spark/pull/6109 |
| |
| [SPARK-7407] [MLLIB] use uid + name to identify parameters |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 01:22:15 -0700 |
| Commit: 1b8625f, github.com/apache/spark/pull/6019 |
| |
| [SPARK-7595] [SQL] Window will cause resolve failed with self join |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-14 00:23:27 -0700 |
| Commit: 13e652b, github.com/apache/spark/pull/6114 |
| |
| [SPARK-7620] [ML] [MLLIB] Removed calling size, length in while condition to avoid extra JVM call |
| DB Tsai <dbt@netflix.com> |
| 2015-05-13 22:23:21 -0700 |
| Commit: d3db2fd, github.com/apache/spark/pull/6137 |
| |
| [SPARK-7612] [MLLIB] update NB training to use mllib's BLAS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-13 21:27:17 -0700 |
| Commit: d5f18de, github.com/apache/spark/pull/6128 |
| |
| [HOT FIX #6125] Do not wait for all stages to start rendering |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 21:04:13 -0700 |
| Commit: 3113da9, github.com/apache/spark/pull/6138 |
| |
| [HOTFIX] Use 'new Job' in fsBasedParquet.scala |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 17:58:29 -0700 |
| Commit: 728af88, github.com/apache/spark/pull/6136 |
| |
| [HOTFIX] Bug in merge script |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-13 17:55:06 -0700 |
| Commit: 32e27df |
| |
| [SPARK-6752] [STREAMING] [REVISED] Allow StreamingContext to be recreated from checkpoint and existing SparkContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-13 17:33:15 -0700 |
| Commit: bce00da, github.com/apache/spark/pull/6096 |
| |
| [SPARK-7601] [SQL] Support Insert into JDBC Datasource |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-05-13 17:24:04 -0700 |
| Commit: 59aaa1d, github.com/apache/spark/pull/6121 |
| |
| [SPARK-7081] Faster sort-based shuffle path using binary processing cache-aware sort |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-13 17:07:31 -0700 |
| Commit: 73bed40, github.com/apache/spark/pull/5868 |
| |
| [SPARK-7356] [STREAMING] Fix flakey tests in FlumePollingStreamSuite using SparkSink's batch CountDownLatch. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-13 16:43:30 -0700 |
| Commit: 61d1e87, github.com/apache/spark/pull/5918 |
| |
| [STREAMING] [MINOR] Keep streaming.UIUtils private |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:31:24 -0700 |
| Commit: bb6dec3, github.com/apache/spark/pull/6134 |
| |
| [SPARK-7502] DAG visualization: gracefully handle removed stages |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:29:52 -0700 |
| Commit: aa18378, github.com/apache/spark/pull/6132 |
| |
| [SPARK-7464] DAG visualization: highlight the same RDDs on hover |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:29:10 -0700 |
| Commit: 4440341, github.com/apache/spark/pull/6100 |
| |
| [SPARK-7399] Spark compilation error for scala 2.11 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:28:37 -0700 |
| Commit: f88ac70, github.com/apache/spark/pull/6129 |
| |
| [SPARK-7608] Clean up old state in RDDOperationGraphListener |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:27:48 -0700 |
| Commit: f6e1838, github.com/apache/spark/pull/6125 |
| |
| [SQL] Move some classes into packages that are more appropriate. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-13 16:15:31 -0700 |
| Commit: e683182, github.com/apache/spark/pull/6108 |
| |
| [SPARK-7303] [SQL] push down project if possible when the child is sort |
| scwf <wangfei1@huawei.com> |
| 2015-05-13 16:13:48 -0700 |
| Commit: 59250fe, github.com/apache/spark/pull/5838 |
| |
| [SPARK-7382] [MLLIB] Feature Parity in PySpark for ml.classification |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-13 15:13:09 -0700 |
| Commit: df2fb13, github.com/apache/spark/pull/6106 |
| |
| [SPARK-7545] [MLLIB] Added check in Bernoulli Naive Bayes to make sure that both training and predict features have values of 0 or 1 |
| leahmcguire <lmcguire@salesforce.com> |
| 2015-05-13 14:13:19 -0700 |
| Commit: 61e05fc, github.com/apache/spark/pull/6073 |
| |
| [SPARK-7593] [ML] Python Api for ml.feature.Bucketizer |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-13 13:21:36 -0700 |
| Commit: 5db18ba, github.com/apache/spark/pull/6124 |
| |
| [MINOR] [CORE] Accept alternative mesos unsatisfied link error in test. |
| Tim Ellison <tellison@users.noreply.github.com>, Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-13 21:16:32 +0100 |
| Commit: 51030b8, github.com/apache/spark/pull/6119 |
| |
| [MINOR] Enhance SizeEstimator to detect IBM compressed refs and s390 … |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-13 21:01:42 +0100 |
| Commit: 3cd9ad2, github.com/apache/spark/pull/6085 |
| |
| [MINOR] Avoid passing the PermGenSize option to IBM JVMs. |
| Tim Ellison <t.p.ellison@gmail.com>, Tim Ellison <tellison@users.noreply.github.com> |
| 2015-05-13 21:00:12 +0100 |
| Commit: e676fc0, github.com/apache/spark/pull/6055 |
| |
| [SPARK-7551][DataFrame] support backticks for DataFrame attribute resolution |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-13 12:47:48 -0700 |
| Commit: 213a6f3, github.com/apache/spark/pull/6074 |
| |
| [SPARK-7567] [SQL] Migrating Parquet data source to FSBasedRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 11:04:10 -0700 |
| Commit: 7ff16e8, github.com/apache/spark/pull/6090 |
| |
| [SPARK-7589] [STREAMING] [WEBUI] Make "Input Rate" in the Streaming page consistent with other pages |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 10:01:26 -0700 |
| Commit: bec938f, github.com/apache/spark/pull/6102 |
| |
| [SPARK-6734] [SQL] Add UDTF.close support in Generate |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-14 00:14:59 +0800 |
| Commit: 0da254f, github.com/apache/spark/pull/5383 |
| |
| [MINOR] [SQL] Removes debugging println |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 23:40:13 +0800 |
| Commit: aa6ba3f, github.com/apache/spark/pull/6123 |
| |
| [SQL] In InsertIntoFSBasedRelation.insert, log cause before abort job/task. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-13 23:36:19 +0800 |
| Commit: b061bd5, github.com/apache/spark/pull/6105 |
| |
| [SPARK-7599] [SQL] Don't restrict customized output committers to be subclasses of FileOutputCommitter |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 07:35:55 -0700 |
| Commit: 10c546e, github.com/apache/spark/pull/6118 |
| |
| [SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has space in its path |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp>, Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-13 09:43:40 +0100 |
| Commit: 50c7270, github.com/apache/spark/pull/5447 |
| |
| [SPARK-7526] [SPARKR] Specify ip of RBackend, MonitorServer and RRDD Socket server |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-12 23:55:44 -0700 |
| Commit: 98195c3, github.com/apache/spark/pull/6053 |
| |
| [SPARK-7482] [SPARKR] Rename some DataFrame API methods in SparkR to match their counterparts in Scala. |
| Sun Rui <rui.sun@intel.com> |
| 2015-05-12 23:52:30 -0700 |
| Commit: df9b94a, github.com/apache/spark/pull/6007 |
| |
| [SPARK-7566][SQL] Add type to HiveContext.analyzer |
| Santiago M. Mola <santi@mola.io> |
| 2015-05-12 23:44:21 -0700 |
| Commit: 208b902, github.com/apache/spark/pull/6086 |
| |
| [SPARK-7321][SQL] Add Column expression for conditional statements (when/otherwise) |
| Reynold Xin <rxin@databricks.com>, kaka1992 <kaka_1992@163.com> |
| 2015-05-12 21:43:34 -0700 |
| Commit: 97dee31, github.com/apache/spark/pull/6072 |
| |
| [SPARK-7588] Document all SQL/DataFrame public methods with @since tag |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-12 18:37:02 -0700 |
| Commit: 8fd5535, github.com/apache/spark/pull/6101 |
| |
| [SPARK-7592] Always set resolution to "Fixed" in PR merge script. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-12 18:20:54 -0700 |
| Commit: 1b9e434, github.com/apache/spark/pull/6103 |
| |
| [HOTFIX] Use the old Job API to support old Hadoop versions |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 08:33:24 +0800 |
| Commit: 247b703, github.com/apache/spark/pull/6095 |
| |
| [SPARK-7572] [MLLIB] do not import Param/Params under pyspark.ml |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 17:15:39 -0700 |
| Commit: 77f64c7, github.com/apache/spark/pull/6094 |
| |
| [SPARK-7554] [STREAMING] Throw exception when an active/stopped StreamingContext is used to create DStreams and output operations |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 17:07:21 -0700 |
| Commit: 23f7d66, github.com/apache/spark/pull/6099 |
| |
| [SPARK-7528] [MLLIB] make RankingMetrics Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 16:53:47 -0700 |
| Commit: 2713bc6, github.com/apache/spark/pull/6098 |
| |
| [SPARK-7553] [STREAMING] Added methods to maintain a singleton StreamingContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 16:44:14 -0700 |
| Commit: 00e7b09, github.com/apache/spark/pull/6070 |
| |
| [SPARK-7573] [ML] OneVsRest cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-12 16:42:30 -0700 |
| Commit: 96c4846, github.com/apache/spark/pull/6097 |
| |
| [SPARK-7557] [ML] [DOC] User guide for spark.ml HashingTF, Tokenizer |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-12 16:39:56 -0700 |
| Commit: f0c1bc3, github.com/apache/spark/pull/6093 |
| |
| [SPARK-7496] [MLLIB] Update Programming guide with Online LDA |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-05-12 15:12:29 -0700 |
| Commit: 1d70366, github.com/apache/spark/pull/6046 |
| |
| [SPARK-7406] [STREAMING] [WEBUI] Add tooltips for "Scheduling Delay", "Processing Time" and "Total Delay" |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-12 14:41:21 -0700 |
| Commit: 1422e79, github.com/apache/spark/pull/5952 |
| |
| [SPARK-7571] [MLLIB] rename Math to math |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 14:39:03 -0700 |
| Commit: a4874b0, github.com/apache/spark/pull/6092 |
| |
| [SPARK-7484][SQL]Support jdbc connection properties |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-05-12 14:37:23 -0700 |
| Commit: 455551d, github.com/apache/spark/pull/6009 |
| |
| [SPARK-7559] [MLLIB] Bucketizer should include the right most boundary in the last bucket. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 14:24:26 -0700 |
| Commit: 23b9863, github.com/apache/spark/pull/6075 |
| |
| [SPARK-7569][SQL] Better error for invalid binary expressions |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-12 13:36:55 -0700 |
| Commit: 2a41c0d, github.com/apache/spark/pull/6089 |
| |
| [SPARK-7015] [MLLIB] [WIP] Multiclass to Binary Reduction: One Against All |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-12 13:35:12 -0700 |
| Commit: 595a675, github.com/apache/spark/pull/5830 |
| |
| [SPARK-2018] [CORE] Upgrade LZF library to fix endian serialization p… |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-12 20:48:26 +0100 |
| Commit: 5438f49, github.com/apache/spark/pull/6077 |
| |
| [SPARK-7487] [ML] Feature Parity in PySpark for ml.regression |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-12 12:17:05 -0700 |
| Commit: 8e935b0, github.com/apache/spark/pull/6016 |
| |
| [HOT FIX #6076] DAG visualization: curve the edges |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 12:06:30 -0700 |
| Commit: b9b01f4 |
| |
| [SPARK-7276] [DATAFRAME] speed up DataFrame.select by collapsing Project |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 11:51:55 -0700 |
| Commit: 4e29052, github.com/apache/spark/pull/5831 |
| |
| [SPARK-7500] DAG visualization: move cluster labeling to dagre-d3 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 11:17:59 -0700 |
| Commit: 65697bb, github.com/apache/spark/pull/6076 |
| |
| [DataFrame][minor] support column in field accessor |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 10:37:57 -0700 |
| Commit: bfcaf8a, github.com/apache/spark/pull/6080 |
| |
| [SPARK-3928] [SPARK-5182] [SQL] Partitioning support for the data sources API |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 01:32:28 +0800 |
| Commit: 0595b6d, github.com/apache/spark/pull/5526 |
| |
| [DataFrame][minor] cleanup unapply methods in DataTypes |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 10:28:40 -0700 |
| Commit: 831504c, github.com/apache/spark/pull/6079 |
| |
| [SPARK-6876] [PySpark] [SQL] add DataFrame na.replace in pyspark |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-12 10:23:41 -0700 |
| Commit: d86ce84, github.com/apache/spark/pull/6003 |
| |
| [SPARK-7532] [STREAMING] StreamingContext.start() made to logWarning and not throw exception |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 08:48:24 -0700 |
| Commit: ec6f2a9, github.com/apache/spark/pull/6060 |
| |
| [SPARK-7467] Dag visualization: treat checkpoint as an RDD operation |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 01:40:55 -0700 |
| Commit: f3e8e60, github.com/apache/spark/pull/6004 |
| |
| [SPARK-7485] [BUILD] Remove pyspark files from assembly. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-12 01:39:21 -0700 |
| Commit: 82e890f, github.com/apache/spark/pull/6022 |
| |
| [MINOR] [PYSPARK] Set PYTHONPATH to python/lib/pyspark.zip rather than python/pyspark |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-12 01:36:27 -0700 |
| Commit: 9847875, github.com/apache/spark/pull/6047 |
| |
| [SPARK-7534] [CORE] [WEBUI] Fix the Stage table when a stage is missing |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-12 01:34:33 -0700 |
| Commit: 8a4edec, github.com/apache/spark/pull/6061 |
| |
| [SPARK-6994][SQL] Update docs for fetching Row fields by name |
| vidmantas zemleris <vidmantas@vinted.com> |
| 2015-05-11 22:29:24 -0700 |
| Commit: 640f63b, github.com/apache/spark/pull/6030 |
| |
| [SQL] Rename Dialect -> ParserDialect. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 22:06:56 -0700 |
| Commit: 1669675, github.com/apache/spark/pull/6071 |
| |
| [SPARK-7435] [SPARKR] Make DataFrame.show() consistent with that of Scala and pySpark |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-05-11 21:02:34 -0700 |
| Commit: b94a933, github.com/apache/spark/pull/5989 |
| |
| [SPARK-7509][SQL] DataFrame.drop in Python for dropping columns. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 20:04:36 -0700 |
| Commit: 028ad4b, github.com/apache/spark/pull/6068 |
| |
| [SPARK-7437] [SQL] Fold "literal in (item1, item2, ..., literal, ...)" into true or false directly |
| Zhongshuai Pei <799203320@qq.com>, DoingDone9 <799203320@qq.com> |
| 2015-05-11 19:22:44 -0700 |
| Commit: 4b5e1fe, github.com/apache/spark/pull/5972 |
| |
| [SPARK-7411] [SQL] Support SerDe for HiveQl in CTAS |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-11 19:21:16 -0700 |
| Commit: e35d878, github.com/apache/spark/pull/5963 |
| |
| [SPARK-7324] [SQL] DataFrame.dropDuplicates |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 19:15:14 -0700 |
| Commit: b6bf4f7, github.com/apache/spark/pull/6066 |
| |
| [SPARK-7530] [STREAMING] Added StreamingContext.getState() to expose the current state of the context |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-11 18:53:50 -0700 |
| Commit: f9c7580, github.com/apache/spark/pull/6058 |
| |
| [SPARK-5893] [ML] Add bucketizer |
| Xusen Yin <yinxusen@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-11 18:41:22 -0700 |
| Commit: 35fb42a, github.com/apache/spark/pull/5980 |
| |
| Updated DataFrame.saveAsTable Hive warning to include SPARK-7550 ticket. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 18:10:45 -0700 |
| Commit: 87229c9, github.com/apache/spark/pull/6067 |
| |
| [SPARK-7462][SQL] Update documentation for retaining grouping columns in DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 18:07:12 -0700 |
| Commit: 3a9b699, github.com/apache/spark/pull/6062 |
| |
| [SPARK-7084] improve saveAsTable documentation |
| madhukar <phatak.dev@gmail.com> |
| 2015-05-11 17:04:11 -0700 |
| Commit: 57255dc, github.com/apache/spark/pull/5654 |
| |
| [SQL] Show better error messages for incorrect join types in DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 17:02:11 -0700 |
| Commit: 4f4dbb0, github.com/apache/spark/pull/6064 |
| |
| [MINOR] [DOCS] Fix the link to test building info on the wiki |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-12 00:25:43 +0100 |
| Commit: 91dc3df, github.com/apache/spark/pull/6063 |
| |
| Update Documentation: leftsemi instead of semijoin |
| LCY Vincent <lauchunyin@gmail.com> |
| 2015-05-11 14:48:10 -0700 |
| Commit: a8ea096, github.com/apache/spark/pull/5944 |
| |
| [STREAMING] [MINOR] Close files correctly when iterator is finished in streaming WAL recovery |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-11 14:38:58 -0700 |
| Commit: 25c01c5, github.com/apache/spark/pull/6050 |
| |
| [SPARK-7516] [Minor] [DOC] Replace depreciated inferSchema() with createDataFrame() |
| gchen <chenguancheng@gmail.com> |
| 2015-05-11 14:37:18 -0700 |
| Commit: 8e67433, github.com/apache/spark/pull/6041 |
| |
| [SPARK-7515] [DOC] Update documentation for PySpark on YARN with cluster mode |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-11 14:19:11 -0700 |
| Commit: 6e9910c, github.com/apache/spark/pull/6040 |
| |
| [SPARK-7508] JettyUtils-generated servlets to log & report all errors |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-05-11 13:35:06 -0700 |
| Commit: 7ce2a33, github.com/apache/spark/pull/6033 |
| |
| [SPARK-6470] [YARN] Add support for YARN node labels. |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-11 12:09:39 -0700 |
| Commit: 82fee9d, github.com/apache/spark/pull/5242 |
| |
| [SPARK-7462] By default retain group by columns in aggregate |
| Reynold Xin <rxin@databricks.com>, Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-11 11:35:16 -0700 |
| Commit: 0a4844f, github.com/apache/spark/pull/5996 |
| |
| [SPARK-7361] [STREAMING] Throw unambiguous exception when attempting to start multiple StreamingContexts in the same JVM |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-11 10:58:56 -0700 |
| Commit: 1b46556, github.com/apache/spark/pull/5907 |
| |
| [SPARK-7522] [EXAMPLES] Removed angle brackets from dataFormat option |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-05-11 09:23:47 -0700 |
| Commit: 4f8a155, github.com/apache/spark/pull/6049 |
| |
| [SPARK-6092] [MLLIB] Add RankingMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-11 09:14:20 -0700 |
| Commit: 042dda3, github.com/apache/spark/pull/6044 |
| |
| [SPARK-7326] [STREAMING] Performing window() on a WindowedDStream doesn't work all the time |
| Wesley Miao <wesley.miao@gmail.com>, Wesley <wesley.miao@autodesk.com> |
| 2015-05-11 12:20:06 +0100 |
| Commit: d70a076, github.com/apache/spark/pull/5871 |
| |
| [SPARK-7519] [SQL] fix minor bugs in thrift server UI |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-05-11 14:08:15 +0800 |
| Commit: 2242ab3, github.com/apache/spark/pull/6048 |
| |
| [SPARK-7512] [SPARKR] Fix RDD's show method to use getJRDD |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-10 19:49:42 -0700 |
| Commit: 0835f1e, github.com/apache/spark/pull/6035 |
| |
| [SPARK-7427] [PYSPARK] Make sharedParams match in Scala, Python |
| Glenn Weidner <gweidner@us.ibm.com> |
| 2015-05-10 19:18:32 -0700 |
| Commit: c5aca0c, github.com/apache/spark/pull/6023 |
| |
| [SPARK-5521] PCA wrapper for easy transform vectors |
| Kirill A. Korinskiy <catap@catap.ru>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-10 13:34:00 -0700 |
| Commit: 8c07c75, github.com/apache/spark/pull/4304 |
| |
| [SPARK-7431] [ML] [PYTHON] Made CrossValidatorModel call parent init in PySpark |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-10 13:29:27 -0700 |
| Commit: 3038443, github.com/apache/spark/pull/5968 |
| |
| [MINOR] [SQL] Fixes variable name typo |
| Cheng Lian <lian@databricks.com> |
| 2015-05-10 21:26:36 +0800 |
| Commit: 6bf9352, github.com/apache/spark/pull/6038 |
| |
| [SPARK-7345][SQL] Spark cannot detect renamed columns using JDBC connector |
| Oleg Sidorkin <oleg.sidorkin@gmail.com> |
| 2015-05-10 01:31:34 -0700 |
| Commit: d7a37bc, github.com/apache/spark/pull/6032 |
| |
| [SPARK-6091] [MLLIB] Add MulticlassMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-10 00:57:14 -0700 |
| Commit: bf7e81a, github.com/apache/spark/pull/6011 |
| |
| [SPARK-7475] [MLLIB] adjust ldaExample for online LDA |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-05-09 15:40:46 -0700 |
| Commit: b13162b, github.com/apache/spark/pull/6000 |
| |
| [BUILD] Reference fasterxml.jackson.version in sql/core/pom.xml |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-09 13:19:07 -0700 |
| Commit: bd74301, github.com/apache/spark/pull/6031 |
| |
| Upgrade version of jackson-databind in sql/core/pom.xml |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-09 10:41:30 -0700 |
| Commit: 3071aac, github.com/apache/spark/pull/6028 |
| |
| [STREAMING] [DOCS] Fix wrong url about API docs of StreamingListener |
| dobashim <dobashim@oss.nttdata.co.jp> |
| 2015-05-09 10:14:46 +0100 |
| Commit: 7d0f172, github.com/apache/spark/pull/6024 |
| |
| [SPARK-7403] [WEBUI] Link URL in objects on Timeline View is wrong in case of running on YARN |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-09 10:10:29 +0100 |
| Commit: 12b95ab, github.com/apache/spark/pull/5947 |
| |
| [SPARK-7438] [SPARK CORE] Fixed validation of relativeSD in countApproxDistinct |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-05-09 10:03:15 +0100 |
| Commit: dda6d9f, github.com/apache/spark/pull/5974 |
| |
| [SPARK-7498] [ML] removed varargs annotation from Params.setDefaults |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-08 21:55:54 -0700 |
| Commit: 2992623, github.com/apache/spark/pull/6021 |
| |
| [SPARK-7262] [ML] Binary LogisticRegression with L1/L2 (elastic net) using OWLQN in new ML package |
| DB Tsai <dbt@netflix.com> |
| 2015-05-08 21:43:05 -0700 |
| Commit: 86ef4cf, github.com/apache/spark/pull/5967 |
| |
| [SPARK-7375] [SQL] Avoid row copying in exchange when sort.serializeMapOutputs takes effect |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-08 22:09:55 -0400 |
| Commit: cde5483, github.com/apache/spark/pull/5948 |
| |
| [SPARK-7231] [SPARKR] Changes to make SparkR DataFrame dplyr friendly. |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-08 18:29:57 -0700 |
| Commit: 0a901dd, github.com/apache/spark/pull/6005 |
| |
| [SPARK-7451] [YARN] Preemption of executors is counted as failure causing Spark job to fail |
| Ashwin Shankar <ashankar@netflix.com> |
| 2015-05-08 17:51:00 -0700 |
| Commit: b6c797b, github.com/apache/spark/pull/5993 |
| |
| [SPARK-7488] [ML] Feature Parity in PySpark for ml.recommendation |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-08 17:24:32 -0700 |
| Commit: 84bf931, github.com/apache/spark/pull/6015 |
| |
| [SPARK-7237] Clean function in several RDD methods |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-08 17:16:38 -0700 |
| Commit: 54e6fa0, github.com/apache/spark/pull/5959 |
| |
| [SPARK-7469] [SQL] DAG visualization: show SQL query operators |
| Andrew Or <andrew@databricks.com> |
| 2015-05-08 17:15:10 -0700 |
| Commit: bd61f07, github.com/apache/spark/pull/5999 |
| |
| [SPARK-6955] Perform port retries at NettyBlockTransferService level |
| Aaron Davidson <aaron@databricks.com> |
| 2015-05-08 17:13:55 -0700 |
| Commit: ffdc40c, github.com/apache/spark/pull/5575 |
| |
| updated ec2 instance types |
| Brendan Collins <bcollins@blueraster.com> |
| 2015-05-08 15:59:34 -0700 |
| Commit: 1c78f68, github.com/apache/spark/pull/6014 |
| |
| [SPARK-5913] [MLLIB] Python API for ChiSqSelector |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-08 15:48:39 -0700 |
| Commit: 35c9599, github.com/apache/spark/pull/5939 |
| |
| [SPARK-4699] [SQL] Make caseSensitive configurable in spark sql analyzer |
| Jacky Li <jacky.likun@huawei.com>, wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-05-08 15:25:54 -0700 |
| Commit: 6dad76e, github.com/apache/spark/pull/5806 |
| |
| [SPARK-7390] [SQL] Only merge other CovarianceCounter when its count is greater than zero |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-08 14:41:16 -0700 |
| Commit: 90527f5, github.com/apache/spark/pull/5931 |
| |
| [SPARK-7378] [CORE] Handle deep links to unloaded apps. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-08 14:12:58 -0700 |
| Commit: 5467c34, github.com/apache/spark/pull/5922 |
| |
| [MINOR] [CORE] Allow History Server to read kerberos opts from config file. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-08 14:10:27 -0700 |
| Commit: 9042f8f, github.com/apache/spark/pull/5998 |
| |
| [SPARK-7466] DAG visualization: fix orphan nodes |
| Andrew Or <andrew@databricks.com> |
| 2015-05-08 14:09:39 -0700 |
| Commit: 3b0c5e7, github.com/apache/spark/pull/6002 |
| |
| [MINOR] Defeat early garbage collection of test suite variable |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-08 14:08:52 -0700 |
| Commit: 31da40d, github.com/apache/spark/pull/6010 |
| |
| [SPARK-7489] [SPARK SHELL] Spark shell crashes when compiled with scala 2.11 |
| vinodkc <vinod.kc.in@gmail.com> |
| 2015-05-08 14:07:53 -0700 |
| Commit: 4e7360e, github.com/apache/spark/pull/6013 |
| |
| [WEBUI] Remove debug feature for vis.js |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-08 14:06:37 -0700 |
| Commit: c45c09b, github.com/apache/spark/pull/5994 |
| |
| [MINOR] Ignore python/lib/pyspark.zip |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-08 14:06:02 -0700 |
| Commit: dc71e47, github.com/apache/spark/pull/6017 |
| |
| [SPARK-7490] [CORE] [Minor] MapOutputTracker.deserializeMapStatuses: close input streams |
| Evan Jones <ejones@twitter.com> |
| 2015-05-08 22:00:39 +0100 |
| Commit: 25889d8, github.com/apache/spark/pull/5982 |
| |
| [SPARK-6627] Finished rename to ShuffleBlockResolver |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-08 12:24:06 -0700 |
| Commit: 4b3bb0e, github.com/apache/spark/pull/5764 |
| |
| [SPARK-7133] [SQL] Implement struct, array, and map field accessor |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-08 11:49:38 -0700 |
| Commit: 2d05f32, github.com/apache/spark/pull/5744 |
| |
| [SPARK-7298] Harmonize style of new visualizations |
| Matei Zaharia <matei@databricks.com> |
| 2015-05-08 14:41:42 -0400 |
| Commit: a1ec08f, github.com/apache/spark/pull/5942 |
| |
| [SPARK-7436] Fixed instantiation of custom recovery mode factory and added tests |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-05-08 11:36:30 -0700 |
| Commit: 35d6a99, github.com/apache/spark/pull/5977 |
| |
| [SPARK-6824] Fill the docs for DataFrame API in SparkR |
| hqzizania <qian.huang@intel.com>, qhuang <qian.huang@intel.com> |
| 2015-05-08 11:25:04 -0700 |
| Commit: 008a60d, github.com/apache/spark/pull/5969 |
| |
| [SPARK-7474] [MLLIB] update ParamGridBuilder doctest |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-08 11:16:04 -0700 |
| Commit: 65afd3c, github.com/apache/spark/pull/6001 |
| |
| [SPARK-7383] [ML] Feature Parity in PySpark for ml.features |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-08 11:14:39 -0700 |
| Commit: f5ff4a8, github.com/apache/spark/pull/5991 |
| |
| [SPARK-3454] separate json endpoints for data in the UI |
| Imran Rashid <irashid@cloudera.com> |
| 2015-05-08 16:54:32 +0100 |
| Commit: c796be7, github.com/apache/spark/pull/5940 |
| |
| [SPARK-6869] [PYSPARK] Add pyspark archives path to PYTHONPATH |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-05-08 08:44:46 -0500 |
| Commit: ebff732, github.com/apache/spark/pull/5580 |
| |
| [SPARK-7392] [CORE] bugfix: Kryo buffer size cannot be larger than 2M |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-05-08 09:10:58 +0100 |
| Commit: c2f0821, github.com/apache/spark/pull/5934 |
| |
| [SPARK-7232] [SQL] Add a Substitution batch for spark sql analyzer |
| wangfei <wangfei1@huawei.com> |
| 2015-05-07 22:55:42 -0700 |
| Commit: f496bf3, github.com/apache/spark/pull/5776 |
| |
| [SPARK-7470] [SQL] Spark shell SQLContext crashes without hive |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 22:32:13 -0700 |
| Commit: 714db2e, github.com/apache/spark/pull/5997 |
| |
| [SPARK-6986] [SQL] Use Serializer2 in more cases. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-07 20:59:42 -0700 |
| Commit: 3af423c, github.com/apache/spark/pull/5849 |
| |
| [SPARK-7452] [MLLIB] fix bug in topBykey and update test |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-05-07 20:55:08 -0700 |
| Commit: 92f8f80, github.com/apache/spark/pull/5990 |
| |
| [SPARK-6908] [SQL] Use isolated Hive client |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-07 19:36:24 -0700 |
| Commit: cd1d411, github.com/apache/spark/pull/5876 |
| |
| [SPARK-7305] [STREAMING] [WEBUI] Make BatchPage show friendly information when jobs are dropped by SparkListener |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-07 17:34:44 -0700 |
| Commit: 22ab70e, github.com/apache/spark/pull/5840 |
| |
| [SPARK-7450] Use UNSAFE.getLong() to speed up BitSetMethods#anySet() |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-07 16:53:59 -0700 |
| Commit: 88063c6, github.com/apache/spark/pull/5897 |
| |
| [SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for "CASE a WHEN b THEN c * END" |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-07 16:26:49 -0700 |
| Commit: 35f0173, github.com/apache/spark/pull/5979 |
| |
| [SPARK-5281] [SQL] Registering table on RDD is giving MissingRequirementError |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-05-07 16:24:11 -0700 |
| Commit: 937ba79, github.com/apache/spark/pull/5981 |
| |
| [SPARK-7277] [SQL] Throw exception if the property mapred.reduce.tasks is set to -1 |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-07 16:22:45 -0700 |
| Commit: ea3077f, github.com/apache/spark/pull/5811 |
| |
| [SQL] [MINOR] make star and multialias extend NamedExpression |
| scwf <wangfei1@huawei.com> |
| 2015-05-07 16:21:24 -0700 |
| Commit: 97d1182, github.com/apache/spark/pull/5928 |
| |
| [SPARK-6948] [MLLIB] compress vectors in VectorAssembler |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-07 15:45:37 -0700 |
| Commit: e43803b, github.com/apache/spark/pull/5985 |
| |
| [SPARK-5726] [MLLIB] Elementwise (Hadamard) Vector Product Transformer |
| Octavian Geagla <ogeagla@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 14:49:55 -0700 |
| Commit: 658a478, github.com/apache/spark/pull/4580 |
| |
| [SPARK-7328] [MLLIB] [PYSPARK] Pyspark.mllib.linalg.Vectors: Missing items |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-07 14:02:05 -0700 |
| Commit: 347a329, github.com/apache/spark/pull/5872 |
| |
| [SPARK-7347] DAG visualization: add tooltips to RDDs |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 12:29:56 -0700 |
| Commit: 88717ee, github.com/apache/spark/pull/5957 |
| |
| [SPARK-7391] DAG visualization: auto expand if linked from another viz |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 12:29:18 -0700 |
| Commit: f121651, github.com/apache/spark/pull/5958 |
| |
| [SPARK-7373] [MESOS] Add docker support for launching drivers in mesos cluster mode. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-05-07 12:23:16 -0700 |
| Commit: 4eecf55, github.com/apache/spark/pull/5917 |
| |
| [SPARK-7399] [SPARK CORE] Fixed compilation error in scala 2.11 |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-05-07 12:21:09 -0700 |
| Commit: 0c33bf8, github.com/apache/spark/pull/5966 |
| |
| [SPARK-5213] [SQL] Remove the duplicated SparkSQLParser |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-07 12:09:54 -0700 |
| Commit: 074d75d, github.com/apache/spark/pull/5965 |
| |
| [SPARK-7116] [SQL] [PYSPARK] Remove cache() causing memory leak |
| ksonj <kson@siberie.de> |
| 2015-05-07 12:04:19 -0700 |
| Commit: dec8f53, github.com/apache/spark/pull/5973 |
| |
| [SPARK-1442] [SQL] [FOLLOW-UP] Address minor comments in Window Function PR (#5604). |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-07 11:46:49 -0700 |
| Commit: 5784c8d, github.com/apache/spark/pull/5945 |
| |
| [SPARK-6093] [MLLIB] Add RegressionMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-07 11:18:32 -0700 |
| Commit: 1712a7c, github.com/apache/spark/pull/5941 |
| |
| [SPARK-7118] [Python] Add the coalesce Spark SQL function available in PySpark |
| Olivier Girardot <o.girardot@lateral-thoughts.com> |
| 2015-05-07 10:58:35 -0700 |
| Commit: 068c315, github.com/apache/spark/pull/5698 |
| |
| [SPARK-7388] [SPARK-7383] wrapper for VectorAssembler in Python |
| Burak Yavuz <brkyvz@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-05-07 10:25:41 -0700 |
| Commit: 9e2ffb1, github.com/apache/spark/pull/5930 |
| |
| [SPARK-7330] [SQL] avoid NPE at jdbc rdd |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-07 10:05:01 -0700 |
| Commit: ed9be06, github.com/apache/spark/pull/5877 |
| |
| [SPARK-7429] [ML] Params cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 01:28:44 -0700 |
| Commit: 4f87e95, github.com/apache/spark/pull/5960 |
| |
| [SPARK-7421] [MLLIB] OnlineLDA cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 01:12:14 -0700 |
| Commit: 8b6b46e, github.com/apache/spark/pull/5956 |
| |
| [SPARK-7035] Encourage __getitem__ over __getattr__ on column access in the Python DataFrame API |
| ksonj <kson@siberie.de> |
| 2015-05-07 01:02:00 -0700 |
| Commit: fae4e2d, github.com/apache/spark/pull/5971 |
| |
| [SPARK-7295][SQL] bitwise operations for DataFrame DSL |
| Shiti <ssaxena.ece@gmail.com> |
| 2015-05-07 01:00:29 -0700 |
| Commit: fa8fddf, github.com/apache/spark/pull/5867 |
| |
| [SPARK-7217] [STREAMING] Add configuration to control the default behavior of StreamingContext.stop() implicitly calling SparkContext.stop() |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-07 00:24:44 -0700 |
| Commit: 01187f5, github.com/apache/spark/pull/5929 |
| |
| [SPARK-7430] [STREAMING] [TEST] General improvements to streaming tests to increase debuggability |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-07 00:21:10 -0700 |
| Commit: cfdadcb, github.com/apache/spark/pull/5961 |
| |
| [SPARK-5938] [SPARK-5443] [SQL] Improve JsonRDD performance |
| Nathan Howell <nhowell@godaddy.com> |
| 2015-05-06 22:56:53 -0700 |
| Commit: 2d6612c, github.com/apache/spark/pull/5801 |
| |
| [SPARK-6812] [SPARKR] filter() on DataFrame does not work as expected. |
| Sun Rui <rui.sun@intel.com> |
| 2015-05-06 22:48:16 -0700 |
| Commit: 9cfa9a5, github.com/apache/spark/pull/5938 |
| |
| [SPARK-7432] [MLLIB] disable cv doctest |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-06 22:29:07 -0700 |
| Commit: 773aa25, github.com/apache/spark/pull/5962 |
| |
| [SPARK-7405] [STREAMING] Fix the bug that ReceiverInputDStream doesn't report InputInfo |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-06 18:07:00 -0700 |
| Commit: 14502d5, github.com/apache/spark/pull/5950 |
| |
| [HOT FIX] For DAG visualization #5954 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 18:02:08 -0700 |
| Commit: 71a452b |
| |
| [SPARK-7371] [SPARK-7377] [SPARK-7408] DAG visualization addendum (#5729) |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 17:52:34 -0700 |
| Commit: 8fa6829, github.com/apache/spark/pull/5954 |
| |
| [SPARK-7396] [STREAMING] [EXAMPLE] Update KafkaWordCountProducer to use new Producer API |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-06 17:44:43 -0700 |
| Commit: 316a5c0, github.com/apache/spark/pull/5936 |
| |
| [SPARK-6799] [SPARKR] Remove SparkR RDD examples, add dataframe examples |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-06 17:28:11 -0700 |
| Commit: 4e93042, github.com/apache/spark/pull/5949 |
| |
| [HOT FIX] [SPARK-7418] Ignore flaky SparkSubmitUtilsSuite test |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 17:08:39 -0700 |
| Commit: fbf1f34 |
| |
| [SPARK-5995] [ML] Make Prediction dev API public |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-06 16:15:51 -0700 |
| Commit: 1ad04da, github.com/apache/spark/pull/5913 |
| |
| [HOT-FIX] Move HiveWindowFunctionQuerySuite.scala to hive compatibility dir. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-06 14:48:25 -0700 |
| Commit: 7740996, github.com/apache/spark/pull/5951 |
| |
| Add `Private` annotation. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-06 11:03:17 -0700 |
| Commit: 845d1d4 |
| |
| [SPARK-7311] Introduce internal Serializer API for determining if serializers support object relocation |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-06 10:52:55 -0700 |
| Commit: 002c123, github.com/apache/spark/pull/5924 |
| |
| [SPARK-1442] [SQL] Window Function Support for Spark SQL |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-06 10:43:00 -0700 |
| Commit: f2c4708, github.com/apache/spark/pull/5604 |
| |
| [SPARK-6201] [SQL] promote string and do widen types for IN |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-06 10:30:42 -0700 |
| Commit: c3eb441, github.com/apache/spark/pull/4945 |
| |
| [SPARK-5456] [SQL] fix decimal compare for jdbc rdd |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-06 10:05:10 -0700 |
| Commit: 150f671, github.com/apache/spark/pull/5803 |
| |
| [SQL] JavaDoc update for various DataFrame functions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-06 08:50:56 -0700 |
| Commit: 322e7e7, github.com/apache/spark/pull/5935 |
| |
| [SPARK-6940] [MLLIB] Add CrossValidator to Python ML pipeline API |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-06 01:28:43 -0700 |
| Commit: 32cdc81, github.com/apache/spark/pull/5926 |
| |
| [SPARK-7384][Core][Tests] Fix flaky tests for distributed mode in BroadcastSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 23:25:28 -0700 |
| Commit: 9f019c7, github.com/apache/spark/pull/5925 |
| |
| [SPARK-6267] [MLLIB] Python API for IsotonicRegression |
| Yanbo Liang <ybliang8@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-05-05 22:57:13 -0700 |
| Commit: 7b14578, github.com/apache/spark/pull/5890 |
| |
| [SPARK-7358][SQL] Move DataFrame mathfunctions into functions |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-05 22:56:01 -0700 |
| Commit: ba2b566, github.com/apache/spark/pull/5923 |
| |
| [SPARK-6841] [SPARKR] add support for mean, median, stdev etc. |
| qhuang <qian.huang@intel.com> |
| 2015-05-05 20:39:56 -0700 |
| Commit: a466944, github.com/apache/spark/pull/5446 |
| |
| Revert "[SPARK-3454] separate json endpoints for data in the UI" |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-05 19:27:30 -0700 |
| Commit: 51b3d41 |
| |
| [SPARK-6231][SQL/DF] Automatically resolve join condition ambiguity for self-joins. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-05 18:59:46 -0700 |
| Commit: 1fd31ba, github.com/apache/spark/pull/5919 |
| |
| Some minor cleanup after SPARK-4550. |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 18:32:16 -0700 |
| Commit: 0092abb, github.com/apache/spark/pull/5916 |
| |
| [SPARK-7230] [SPARKR] Make RDD private in SparkR. |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-05 14:40:33 -0700 |
| Commit: c688e3c, github.com/apache/spark/pull/5895 |
| |
| [SQL][Minor] make StringComparison extends ExpectsInputTypes |
| wangfei <wangfei1@huawei.com> |
| 2015-05-05 14:24:37 -0700 |
| Commit: 3059291, github.com/apache/spark/pull/5905 |
| |
| [SPARK-7351] [STREAMING] [DOCS] Add spark.streaming.ui.retainedBatches to docs |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 13:42:23 -0700 |
| Commit: fec7b29, github.com/apache/spark/pull/5899 |
| |
| [SPARK-7294][SQL] ADD BETWEEN |
| 云峤 <chensong.cs@alibaba-inc.com>, kaka1992 <kaka_1992@163.com> |
| 2015-05-05 13:23:53 -0700 |
| Commit: 735bc3d, github.com/apache/spark/pull/5839 |
| |
| [SPARK-6939] [STREAMING] [WEBUI] Add timeline and histogram graphs for streaming statistics |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 12:52:16 -0700 |
| Commit: 489700c, github.com/apache/spark/pull/5533 |
| |
| [SPARK-5888] [MLLIB] Add OneHotEncoder as a Transformer |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 12:34:02 -0700 |
| Commit: 47728db, github.com/apache/spark/pull/5500 |
| |
| [SPARK-7333] [MLLIB] Add BinaryClassificationEvaluator to PySpark |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-05 11:45:37 -0700 |
| Commit: ee374e8, github.com/apache/spark/pull/5885 |
| |
| [SPARK-7243][SQL] Reduce size for Contingency Tables in DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-05 11:01:25 -0700 |
| Commit: 18340d7, github.com/apache/spark/pull/5900 |
| |
| [SPARK-7007] [CORE] Add a metric source for ExecutorAllocationManager |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-05 09:43:49 -0700 |
| Commit: 9f1f9b1, github.com/apache/spark/pull/5589 |
| |
| [SPARK-7318] [STREAMING] DStream cleans objects that are not closures |
| Andrew Or <andrew@databricks.com> |
| 2015-05-05 09:37:49 -0700 |
| Commit: 57e9f29, github.com/apache/spark/pull/5860 |
| |
| [SPARK-7237] Many user provided closures are not actually cleaned |
| Andrew Or <andrew@databricks.com> |
| 2015-05-05 09:37:04 -0700 |
| Commit: 1fdabf8, github.com/apache/spark/pull/5787 |
| |
| [MLLIB] [TREE] Verify size of input rdd > 0 when building meta data |
| Alain <aihe@usc.edu>, aihe@usc.edu <aihe@usc.edu> |
| 2015-05-05 16:47:34 +0100 |
| Commit: d4cb38a, github.com/apache/spark/pull/5810 |
| |
| Closes #5591 Closes #5878 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-05 08:00:31 -0700 |
| Commit: 9d250e6 |
| |
| [SPARK-6612] [MLLIB] [PYSPARK] Python KMeans parity |
| Hrishikesh Subramonian <hrishikesh.subramonian@flytxt.com> |
| 2015-05-05 07:57:39 -0700 |
| Commit: 5995ada, github.com/apache/spark/pull/5647 |
| |
| [SPARK-7202] [MLLIB] [PYSPARK] Add SparseMatrixPickler to SerDe |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-05 07:53:11 -0700 |
| Commit: 5ab652c, github.com/apache/spark/pull/5775 |
| |
| [SPARK-7350] [STREAMING] [WEBUI] Attach the Streaming tab when calling ssc.start() |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 15:09:58 +0100 |
| Commit: c6d1efb, github.com/apache/spark/pull/5898 |
| |
| [SPARK-5074] [CORE] [TESTS] Fix the flakey test 'run shuffle with map stage failure' in DAGSchedulerSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 15:04:14 +0100 |
| Commit: 5ffc73e, github.com/apache/spark/pull/5903 |
| |
| [MINOR] Minor update for document |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-05 14:44:02 +0100 |
| Commit: b83091a, github.com/apache/spark/pull/5906 |
| |
| [SPARK-3454] separate json endpoints for data in the UI |
| Imran Rashid <irashid@cloudera.com> |
| 2015-05-05 07:25:40 -0500 |
| Commit: d497358, github.com/apache/spark/pull/4435 |
| |
| [SPARK-7357] Improving HBaseTest example |
| Jihong MA <linlin200605@gmail.com> |
| 2015-05-05 12:40:41 +0100 |
| Commit: 51f4620, github.com/apache/spark/pull/5904 |
| |
| [SPARK-5112] Expose SizeEstimator as a developer api |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 12:38:46 +0100 |
| Commit: 4222da6, github.com/apache/spark/pull/3913 |
| |
| [SPARK-6653] [YARN] New config to specify port for sparkYarnAM actor system |
| shekhar.bansal <shekhar.bansal@guavus.com> |
| 2015-05-05 11:09:51 +0100 |
| Commit: fc8feaa, github.com/apache/spark/pull/5719 |
| |
| [SPARK-7341] [STREAMING] [TESTS] Fix the flaky test: org.apache.spark.stre... |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 02:15:39 -0700 |
| Commit: 4d29867, github.com/apache/spark/pull/5891 |
| |
| [SPARK-7113] [STREAMING] Support input information reporting for Direct Kafka stream |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-05 02:01:06 -0700 |
| Commit: 8436f7e, github.com/apache/spark/pull/5879 |
| |
| [HOTFIX] [TEST] Ignoring flaky tests |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-05 01:58:51 -0700 |
| Commit: 8776fe0, github.com/apache/spark/pull/5901 |
| |
| [SPARK-7139] [STREAMING] Allow received block metadata to be saved to WAL and recovered on driver failure |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-05 01:45:19 -0700 |
| Commit: 1854ac3, github.com/apache/spark/pull/5732 |
| |
| [MINOR] [BUILD] Declare ivy dependency in root pom. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-05 08:56:16 +0100 |
| Commit: c5790a2, github.com/apache/spark/pull/5893 |
| |
| [MINOR] Renamed variables in SparkKMeans.scala, LocalKMeans.scala and kmeans.py to simplify readability |
| Niccolo Becchi <niccolo.becchi@gmail.com>, pippobaudos <niccolo.becchi@gmail.com> |
| 2015-05-05 08:54:42 +0100 |
| Commit: da738cf, github.com/apache/spark/pull/5875 |
| |
| [SPARK-7314] [SPARK-3524] [PYSPARK] upgrade Pyrolite to 4.4 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-04 23:52:42 -0700 |
| Commit: e9b16e6, github.com/apache/spark/pull/5850 |
| |
| [SPARK-7236] [CORE] Fix to prevent AkkaUtils askWithReply from sleeping on final attempt |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-05-04 18:29:22 -0700 |
| Commit: 8aa5aea, github.com/apache/spark/pull/5896 |
| |
| [SPARK-7266] Add ExpectsInputTypes to expressions when possible. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-04 18:03:07 -0700 |
| Commit: 678c4da, github.com/apache/spark/pull/5796 |
| |
| [SPARK-7243][SQL] Contingency Tables for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-04 17:02:49 -0700 |
| Commit: 8055411, github.com/apache/spark/pull/5842 |
| |
| [SPARK-6943] [SPARK-6944] DAG visualization on SparkUI |
| Andrew Or <andrew@databricks.com> |
| 2015-05-04 16:21:36 -0700 |
| Commit: fc8b581, github.com/apache/spark/pull/5729 |
| |
| [SPARK-7319][SQL] Improve the output from DataFrame.show() |
| 云峤 <chensong.cs@alibaba-inc.com> |
| 2015-05-04 12:08:38 -0700 |
| Commit: f32e69e, github.com/apache/spark/pull/5865 |
| |
| [SPARK-5956] [MLLIB] Pipeline components should be copyable. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-04 11:28:59 -0700 |
| Commit: e0833c5, github.com/apache/spark/pull/5820 |
| |
| [MINOR] Fix python test typo? |
| Andrew Or <andrew@databricks.com> |
| 2015-05-04 17:17:55 +0100 |
| Commit: 5a1a107, github.com/apache/spark/pull/5883 |
| |
| |
| Release 1.4.0 |
| |
| [HOTFIX] Revert "[SPARK-7092] Update spark scala version to 2.11.6" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 02:28:41 -0700 |
| Commit: 31f5d53 |
| |
| Revert "Preparing Spark release v1.4.0-rc1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 02:27:14 -0700 |
| Commit: 586ede6 |
| |
| Revert "Preparing development version 1.4.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 02:27:07 -0700 |
| Commit: e7309ec |
| |
| Fixing a few basic typos in the Programming Guide. |
| Mike Dusenberry <dusenberrymw@gmail.com> |
| 2015-05-19 08:59:45 +0100 |
| Commit: 0748263, github.com/apache/spark/pull/6240 |
| |
| Preparing development version 1.4.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 07:13:24 +0000 |
| Commit: a1d896b |
| |
| Preparing Spark release v1.4.0-rc1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 07:13:24 +0000 |
| Commit: 79fb01a |
| |
| Updating CHANGES.txt for Spark 1.4 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 00:12:20 -0700 |
| Commit: 30bf333 |
| |
| Revert "Preparing Spark release v1.4.0-rc1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 00:10:39 -0700 |
| Commit: b0c63d2 |
| |
| Revert "Preparing development version 1.4.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 00:10:37 -0700 |
| Commit: 198a186 |
| |
| [SPARK-7581] [ML] [DOC] User guide for spark.ml PolynomialExpansion |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-05-19 00:06:33 -0700 |
| Commit: 38a3fc8, github.com/apache/spark/pull/6113 |
| |
| [HOTFIX] Fixing style failures in Kinesis source |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 00:02:06 -0700 |
| Commit: de60c2e |
| |
| Preparing development version 1.4.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 06:06:41 +0000 |
| Commit: 40190ce |
| |
| Preparing Spark release v1.4.0-rc1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 06:06:40 +0000 |
| Commit: 38ccef3 |
| |
| Revert "Preparing Spark release v1.4.0-rc1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-18 23:06:15 -0700 |
| Commit: 152b029 |
| |
| Revert "Preparing development version 1.4.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-18 23:06:13 -0700 |
| Commit: 4d098bc |
| |
| [HOTFIX]: Java 6 Build Breaks |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 06:00:13 +0000 |
| Commit: be1fc93 |
| |
| Preparing development version 1.4.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 05:01:11 +0000 |
| Commit: 758ca74 |
| |
| Preparing Spark release v1.4.0-rc1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-19 05:01:11 +0000 |
| Commit: e8e97e3 |
| |
| [SPARK-7687] [SQL] DataFrame.describe() should cast all aggregates to String |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-18 21:53:44 -0700 |
| Commit: 99436bd, github.com/apache/spark/pull/6218 |
| |
| CHANGES.txt and changelist updates for Spark 1.4. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-18 21:44:13 -0700 |
| Commit: 914ecd0 |
| |
| [SPARK-7150] SparkContext.range() and SQLContext.range() |
| Daoyuan Wang <daoyuan.wang@intel.com>, Davies Liu <davies@databricks.com> |
| 2015-05-18 21:43:12 -0700 |
| Commit: 7fcbb2c, github.com/apache/spark/pull/6081 |
| |
| Version updates for Spark 1.4.0 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-18 21:38:37 -0700 |
| Commit: 9d0b7fb |
| |
| [SPARK-7681] [MLLIB] Add SparseVector support for gemv |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-18 21:32:36 -0700 |
| Commit: dd9f873, github.com/apache/spark/pull/6209 |
| |
| [SPARK-7692] Updated Kinesis examples |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-18 18:24:15 -0700 |
| Commit: 9c48548, github.com/apache/spark/pull/6249 |
| |
| [SPARK-7621] [STREAMING] Report Kafka errors to StreamingListeners |
| jerluc <jeremyalucas@gmail.com> |
| 2015-05-18 18:13:29 -0700 |
| Commit: 9188ad8, github.com/apache/spark/pull/6204 |
| |
| [SPARK-7624] Revert #4147 |
| Davies Liu <davies@databricks.com> |
| 2015-05-18 16:55:45 -0700 |
| Commit: 60cb33d, github.com/apache/spark/pull/6172 |
| |
| [SQL] Fix serializability of ORC table scan |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-18 15:24:31 -0700 |
| Commit: f8f23c4, github.com/apache/spark/pull/6247 |
| |
| [SPARK-7501] [STREAMING] DAG visualization: show DStream operations |
| Andrew Or <andrew@databricks.com> |
| 2015-05-18 14:33:33 -0700 |
| Commit: a475cbc, github.com/apache/spark/pull/6034 |
| |
| [HOTFIX] Fix ORC build break |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-18 14:04:04 -0700 |
| Commit: ba502ab, github.com/apache/spark/pull/6244 |
| |
| [SPARK-7658] [STREAMING] [WEBUI] Update the mouse behaviors for the timeline graphs |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-18 13:34:43 -0700 |
| Commit: 39add3d, github.com/apache/spark/pull/6168 |
| |
| [SPARK-6216] [PYSPARK] check python version of worker with driver |
| Davies Liu <davies@databricks.com> |
| 2015-05-18 12:55:13 -0700 |
| Commit: a833209, github.com/apache/spark/pull/6203 |
| |
| [SPARK-7673] [SQL] WIP: HadoopFsRelation and ParquetRelation2 performance optimizations |
| Cheng Lian <lian@databricks.com> |
| 2015-05-18 12:45:37 -0700 |
| Commit: 3962348, github.com/apache/spark/pull/6225 |
| |
| [SPARK-7567] [SQL] [follow-up] Use a new flag to set output committer based on mapreduce apis |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-18 12:17:10 -0700 |
| Commit: a385f4b, github.com/apache/spark/pull/6130 |
| |
| [SPARK-7269] [SQL] Incorrect analysis for aggregation(use semanticEquals) |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-18 12:08:28 -0700 |
| Commit: d6f5f37, github.com/apache/spark/pull/6173 |
| |
| [SPARK-7631] [SQL] treenode argString should not print children |
| scwf <wangfei1@huawei.com> |
| 2015-05-18 12:05:14 -0700 |
| Commit: dbd4ec8, github.com/apache/spark/pull/6144 |
| |
| [SPARK-2883] [SQL] ORC data source for Spark SQL |
| Zhan Zhang <zhazhan@gmail.com>, Cheng Lian <lian@databricks.com> |
| 2015-05-18 12:03:27 -0700 |
| Commit: 65d71bd, github.com/apache/spark/pull/6194 |
| |
| [SPARK-7380] [MLLIB] pipeline stages should be copyable in Python |
| Xiangrui Meng <meng@databricks.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-18 12:02:18 -0700 |
| Commit: cf4e04a, github.com/apache/spark/pull/6088 |
| |
| [SQL] [MINOR] [THIS] use private for internal field in ScalaUdf |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-18 12:01:30 -0700 |
| Commit: 7d44c01, github.com/apache/spark/pull/6235 |
| |
| [SPARK-7570] [SQL] Ignores _temporary during partition discovery |
| Cheng Lian <lian@databricks.com> |
| 2015-05-18 11:59:44 -0700 |
| Commit: c7623a2, github.com/apache/spark/pull/6091 |
| |
| [SPARK-6888] [SQL] Make the jdbc driver handling user-definable |
| Rene Treffer <treffer@measite.de> |
| 2015-05-18 11:55:36 -0700 |
| Commit: b41301a, github.com/apache/spark/pull/5555 |
| |
| [SPARK-7627] [SPARK-7472] DAG visualization: style skipped stages |
| Andrew Or <andrew@databricks.com> |
| 2015-05-18 10:59:35 -0700 |
| Commit: a0ae8ce, github.com/apache/spark/pull/6171 |
| |
| [SPARK-7272] [MLLIB] User guide for PMML model export |
| Vincenzo Selvaggio <vselvaggio@hotmail.it> |
| 2015-05-18 08:46:33 -0700 |
| Commit: a95d4e1, github.com/apache/spark/pull/6219 |
| |
| [SPARK-6657] [PYSPARK] Fix doc warnings |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-18 08:35:14 -0700 |
| Commit: 2c94ffe, github.com/apache/spark/pull/6221 |
| |
| [SPARK-7299][SQL] Set precision and scale for Decimal according to JDBC metadata instead of returned BigDecimal |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-18 01:10:55 -0700 |
| Commit: 0e7cd8f, github.com/apache/spark/pull/5833 |
| |
| [SPARK-7694] [MLLIB] Use getOrElse for getting the threshold of LR model |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-05-17 21:16:52 -0700 |
| Commit: 0b6bc8a, github.com/apache/spark/pull/6224 |
| |
| [SPARK-7693][Core] Remove "import scala.concurrent.ExecutionContext.Implicits.global" |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-17 20:37:19 -0700 |
| Commit: 2a42d2d, github.com/apache/spark/pull/6223 |
| |
| [SQL] [MINOR] use catalyst type converter in ScalaUdf |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-17 16:51:57 -0700 |
| Commit: be66d19, github.com/apache/spark/pull/6182 |
| |
| [SPARK-6514] [SPARK-5960] [SPARK-6656] [SPARK-7679] [STREAMING] [KINESIS] Updates to the Kinesis API |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-17 16:49:07 -0700 |
| Commit: e0632ff, github.com/apache/spark/pull/6147 |
| |
| [SPARK-7491] [SQL] Allow configuration of classloader isolation for hive |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-17 12:43:15 -0700 |
| Commit: a855608, github.com/apache/spark/pull/6167 |
| |
| [SPARK-7686] [SQL] DescribeCommand is assigned wrong output attributes in SparkStrategies |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-17 11:59:28 -0700 |
| Commit: 53d6ab5, github.com/apache/spark/pull/6217 |
| |
| [SPARK-7660] Wrap SnappyOutputStream to work around snappy-java bug |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-17 09:30:49 -0700 |
| Commit: 6df71eb, github.com/apache/spark/pull/6176 |
| |
| [SPARK-7669] Builds against Hadoop 2.6+ get inconsistent curator depend… |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-05-17 17:03:11 +0100 |
| Commit: 0feb3de, github.com/apache/spark/pull/6191 |
| |
| [SPARK-7447] [SQL] Don't re-merge Parquet schema when the relation is deserialized |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-17 15:42:21 +0800 |
| Commit: 898be62, github.com/apache/spark/pull/6012 |
| |
| [MINOR] Add 1.3, 1.3.1 to master branch EC2 scripts |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-17 00:12:20 -0700 |
| Commit: 0ed376a, github.com/apache/spark/pull/6215 |
| |
| [MINOR] [SQL] Removes an unreachable case clause |
| Cheng Lian <lian@databricks.com> |
| 2015-05-16 23:20:09 -0700 |
| Commit: 671a6bc, github.com/apache/spark/pull/6214 |
| |
| [SPARK-7654][SQL] Move JDBC into DataFrame's reader/writer interface. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-16 22:01:53 -0700 |
| Commit: 17e0786, github.com/apache/spark/pull/6210 |
| |
| [SPARK-7655][Core] Deserializing value should not hold the TaskSchedulerImpl lock |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-16 21:03:22 -0700 |
| Commit: 8494910, github.com/apache/spark/pull/6195 |
| |
| [SPARK-7654][MLlib] Migrate MLlib to the DataFrame reader/writer API. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-16 15:03:57 -0700 |
| Commit: bd057f8, github.com/apache/spark/pull/6211 |
| |
| [BUILD] update jblas dependency version to 1.2.4 |
| Matthew Brandyberry <mbrandy@us.ibm.com> |
| 2015-05-16 18:17:48 +0100 |
| Commit: 8bde352, github.com/apache/spark/pull/6199 |
| |
| [HOTFIX] [SQL] Fixes DataFrameWriter.mode(String) |
| Cheng Lian <lian@databricks.com> |
| 2015-05-16 20:55:10 +0800 |
| Commit: 856619d, github.com/apache/spark/pull/6212 |
| |
| [SPARK-7655][Core][SQL] Remove 'scala.concurrent.ExecutionContext.Implicits.global' in 'ask' and 'BroadcastHashJoin' |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-16 00:44:29 -0700 |
| Commit: ad5b0b1, github.com/apache/spark/pull/6200 |
| |
| [SPARK-7672] [CORE] Use int conversion in translating kryoserializer.buffer.mb to kryoserializer.buffer |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-05-16 08:24:21 +0100 |
| Commit: e7607e5, github.com/apache/spark/pull/6198 |
| |
| [SPARK-4556] [BUILD] binary distribution assembly can't run in local mode |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-16 08:18:41 +0100 |
| Commit: 1fc3560, github.com/apache/spark/pull/6186 |
| |
| [SPARK-7671] Fix wrong URLs in MLlib Data Types Documentation |
| FavioVazquez <favio.vazquezp@gmail.com> |
| 2015-05-16 08:07:03 +0100 |
| Commit: 7e3f9fe, github.com/apache/spark/pull/6196 |
| |
| [SPARK-7654][SQL] DataFrameReader and DataFrameWriter for input/output API |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-15 22:00:31 -0700 |
| Commit: 9da55b5, github.com/apache/spark/pull/6175 |
| |
| [SPARK-7473] [MLLIB] Add reservoir sample in RandomForest |
| AiHe <ai.he@ussuning.com> |
| 2015-05-15 20:42:35 -0700 |
| Commit: f41be8f, github.com/apache/spark/pull/5988 |
| |
| [SPARK-7543] [SQL] [PySpark] split dataframe.py into multiple files |
| Davies Liu <davies@databricks.com> |
| 2015-05-15 20:09:15 -0700 |
| Commit: 8164fbc, github.com/apache/spark/pull/6201 |
| |
| [SPARK-7073] [SQL] [PySpark] Clean up SQL data type hierarchy in Python |
| Davies Liu <davies@databricks.com> |
| 2015-05-15 20:05:26 -0700 |
| Commit: 61806f6, github.com/apache/spark/pull/6206 |
| |
| [SPARK-7575] [ML] [DOC] Example code for OneVsRest |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-15 19:33:20 -0700 |
| Commit: 04323ba, github.com/apache/spark/pull/6115 |
| |
| [SPARK-7563] OutputCommitCoordinator.stop() should only run on the driver |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-15 18:06:01 -0700 |
| Commit: ed75cc0, github.com/apache/spark/pull/6197 |
| |
| [SPARK-7676] Bug fix and cleanup of stage timeline view |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-15 17:45:14 -0700 |
| Commit: 6f78d03, github.com/apache/spark/pull/6202 |
| |
| [SPARK-7556] [ML] [DOC] Add user guide for spark.ml Binarizer, including Scala, Java and Python examples |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-15 15:05:04 -0700 |
| Commit: e847d86, github.com/apache/spark/pull/6116 |
| |
| [SPARK-7677] [STREAMING] Add Kafka modules to the 2.11 build. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-05-15 14:57:29 -0700 |
| Commit: 31e6404, github.com/apache/spark/pull/6149 |
| |
| [SPARK-7226] [SPARKR] Support math functions in R DataFrame |
| qhuang <qian.huang@intel.com> |
| 2015-05-15 14:06:16 -0700 |
| Commit: 9ef6d74, github.com/apache/spark/pull/6170 |
| |
| [SPARK-7296] Add timeline visualization for stages in the UI. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 13:54:09 -0700 |
| Commit: a5f7b3b, github.com/apache/spark/pull/5843 |
| |
| [SPARK-7504] [YARN] NullPointerException when initializing SparkContext in YARN-cluster mode |
| ehnalis <zoltan.zvara@gmail.com> |
| 2015-05-15 12:14:02 -0700 |
| Commit: 7dc0ff3, github.com/apache/spark/pull/6083 |
| |
| [SPARK-7664] [WEBUI] DAG visualization: Fix incorrect link paths of DAG. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 11:54:13 -0700 |
| Commit: e319719, github.com/apache/spark/pull/6184 |
| |
| [SPARK-5412] [DEPLOY] Cannot bind Master to a specific hostname as per the documentation |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-15 11:30:19 -0700 |
| Commit: fe3c734, github.com/apache/spark/pull/6185 |
| |
| [CORE] Protect additional test vars from early GC |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-15 11:27:24 -0700 |
| Commit: 866e4b5, github.com/apache/spark/pull/6187 |
| |
| [SPARK-7233] [CORE] Detect REPL mode once |
| Oleksii Kostyliev <etander@gmail.com>, Oleksii Kostyliev <okostyliev@thunderhead.com> |
| 2015-05-15 11:19:56 -0700 |
| Commit: c58b9c6, github.com/apache/spark/pull/5835 |
| |
| [SPARK-7651] [MLLIB] [PYSPARK] GMM predict, predictSoft should raise error on bad input |
| FlytxtRnD <meethu.mathew@flytxt.com> |
| 2015-05-15 10:43:18 -0700 |
| Commit: dfdae58, github.com/apache/spark/pull/6180 |
| |
| [SPARK-7668] [MLLIB] Preserve isTransposed property for Matrix after calling map function |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-15 10:03:29 -0700 |
| Commit: d1f5651, github.com/apache/spark/pull/6188 |
| |
| [SPARK-7503] [YARN] Resources in .sparkStaging directory can't be cleaned up on error |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-15 11:37:34 +0100 |
| Commit: a17a0ee, github.com/apache/spark/pull/6026 |
| |
| [SPARK-7591] [SQL] Partitioning support API tweaks |
| Cheng Lian <lian@databricks.com> |
| 2015-05-15 16:20:49 +0800 |
| Commit: bcb2c5d, github.com/apache/spark/pull/6150 |
| |
| [SPARK-6258] [MLLIB] GaussianMixture Python API parity check |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-15 00:18:39 -0700 |
| Commit: c0bb974, github.com/apache/spark/pull/6087 |
| |
| [SPARK-7650] [STREAMING] [WEBUI] Move streaming css and js files to the streaming project |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 23:51:41 -0700 |
| Commit: 0ba99f0, github.com/apache/spark/pull/6160 |
| |
| [CORE] Remove unreachable Heartbeat message from Worker |
| Kan Zhang <kzhang@apache.org> |
| 2015-05-14 23:50:50 -0700 |
| Commit: 6742b4e, github.com/apache/spark/pull/6163 |
| |
| [HOTFIX] Add workaround for SPARK-7660 to fix JavaAPISuite failures. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-14 23:17:41 -0700 |
| Commit: 1206a55 |
| |
| [SQL] When creating partitioned table scan, explicitly create UnionRDD. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-15 12:04:26 +0800 |
| Commit: 7aa269f, github.com/apache/spark/pull/6162 |
| |
| [SPARK-7098][SQL] Make the WHERE clause with timestamp show consistent result |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-14 20:49:21 -0700 |
| Commit: bac4522, github.com/apache/spark/pull/5682 |
| |
| [SPARK-7548] [SQL] Add explode function for DataFrames |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-14 19:49:44 -0700 |
| Commit: 778a054, github.com/apache/spark/pull/6107 |
| |
| [SPARK-7619] [PYTHON] fix docstring signature |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 18:16:22 -0700 |
| Commit: a238c23, github.com/apache/spark/pull/6161 |
| |
| [SPARK-7648] [MLLIB] Add weights and intercept to GLM wrappers in spark.ml |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 18:13:58 -0700 |
| Commit: f91bb57, github.com/apache/spark/pull/6156 |
| |
| [SPARK-7645] [STREAMING] [WEBUI] Show milliseconds in the UI if the batch interval < 1 second |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 16:58:36 -0700 |
| Commit: 79983f1, github.com/apache/spark/pull/6154 |
| |
| [SPARK-7649] [STREAMING] [WEBUI] Use window.localStorage to store the status rather than the url |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-14 16:57:33 -0700 |
| Commit: 3358485, github.com/apache/spark/pull/6158 |
| |
| [SPARK-7643] [UI] use the correct size in RDDPage for storage info and partitions |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 16:56:32 -0700 |
| Commit: 8d8876d, github.com/apache/spark/pull/6157 |
| |
| [SPARK-7598] [DEPLOY] Add aliveWorkers metrics in Master |
| Rex Xiong <pengx@microsoft.com> |
| 2015-05-14 16:55:31 -0700 |
| Commit: 894214f, github.com/apache/spark/pull/6117 |
| |
| Make SPARK prefix a variable |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-14 15:26:35 -0700 |
| Commit: fceaffc, github.com/apache/spark/pull/6153 |
| |
| [SPARK-7278] [PySpark] DateType should find datetime.datetime acceptable |
| ksonj <kson@siberie.de> |
| 2015-05-14 15:10:58 -0700 |
| Commit: a49a145, github.com/apache/spark/pull/6057 |
| |
| [SQL][minor] rename apply for QueryPlanner |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-14 10:25:18 -0700 |
| Commit: aa8a0f9, github.com/apache/spark/pull/6142 |
| |
| [SPARK-7249] Updated Hadoop dependencies due to inconsistency in the versions |
| FavioVazquez <favio.vazquezp@gmail.com> |
| 2015-05-14 15:22:58 +0100 |
| Commit: 67ed0aa, github.com/apache/spark/pull/5786 |
| |
| [SPARK-7568] [ML] ml.LogisticRegression doesn't output the right prediction |
| DB Tsai <dbt@netflix.com> |
| 2015-05-14 01:26:08 -0700 |
| Commit: 58534b0, github.com/apache/spark/pull/6109 |
| |
| [SPARK-7407] [MLLIB] use uid + name to identify parameters |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-14 01:22:15 -0700 |
| Commit: e45cd9f, github.com/apache/spark/pull/6019 |
| |
| [SPARK-7595] [SQL] Window will cause resolve failed with self join |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-14 00:23:27 -0700 |
| Commit: c80e0cf, github.com/apache/spark/pull/6114 |
| |
| [SPARK-7620] [ML] [MLLIB] Removed calling size, length in while condition to avoid extra JVM call |
| DB Tsai <dbt@netflix.com> |
| 2015-05-13 22:23:21 -0700 |
| Commit: 9ab4db2, github.com/apache/spark/pull/6137 |
| |
| [SPARK-7612] [MLLIB] update NB training to use mllib's BLAS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-13 21:27:17 -0700 |
| Commit: 82f387f, github.com/apache/spark/pull/6128 |
| |
| [HOT FIX #6125] Do not wait for all stages to start rendering |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 21:04:13 -0700 |
| Commit: 2d4a961, github.com/apache/spark/pull/6138 |
| |
| [HOTFIX] Use 'new Job' in fsBasedParquet.scala |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 17:58:29 -0700 |
| Commit: d518c03, github.com/apache/spark/pull/6136 |
| |
| [SPARK-6752] [STREAMING] [REVISED] Allow StreamingContext to be recreated from checkpoint and existing SparkContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-13 17:33:15 -0700 |
| Commit: aec8394, github.com/apache/spark/pull/6096 |
| |
| [SPARK-7601] [SQL] Support Insert into JDBC Datasource |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-05-13 17:24:04 -0700 |
| Commit: 820aaa6, github.com/apache/spark/pull/6121 |
| |
| [SPARK-7081] Faster sort-based shuffle path using binary processing cache-aware sort |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-13 17:07:31 -0700 |
| Commit: c53ebea, github.com/apache/spark/pull/5868 |
| |
| [SPARK-7356] [STREAMING] Fix flakey tests in FlumePollingStreamSuite using SparkSink's batch CountDownLatch. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-13 16:43:30 -0700 |
| Commit: 6c0644a, github.com/apache/spark/pull/5918 |
| |
| [STREAMING] [MINOR] Keep streaming.UIUtils private |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:31:24 -0700 |
| Commit: e499a1e, github.com/apache/spark/pull/6134 |
| |
| [SPARK-7502] DAG visualization: gracefully handle removed stages |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:29:52 -0700 |
| Commit: 895d46a, github.com/apache/spark/pull/6132 |
| |
| [SPARK-7464] DAG visualization: highlight the same RDDs on hover |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:29:10 -0700 |
| Commit: 4b4f10b, github.com/apache/spark/pull/6100 |
| |
| [SPARK-7399] Spark compilation error for scala 2.11 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:28:37 -0700 |
| Commit: e6b8cef, github.com/apache/spark/pull/6129 |
| |
| [SPARK-7608] Clean up old state in RDDOperationGraphListener |
| Andrew Or <andrew@databricks.com> |
| 2015-05-13 16:27:48 -0700 |
| Commit: ec34230, github.com/apache/spark/pull/6125 |
| |
| [SQL] Move some classes into packages that are more appropriate. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-13 16:15:31 -0700 |
| Commit: acd872b, github.com/apache/spark/pull/6108 |
| |
| [SPARK-7303] [SQL] push down project if possible when the child is sort |
| scwf <wangfei1@huawei.com> |
| 2015-05-13 16:13:48 -0700 |
| Commit: d5c52d9, github.com/apache/spark/pull/5838 |
| |
| [SPARK-7382] [MLLIB] Feature Parity in PySpark for ml.classification |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-13 15:13:09 -0700 |
| Commit: 51230f2, github.com/apache/spark/pull/6106 |
| |
| [SPARK-7545] [MLLIB] Added check in Bernoulli Naive Bayes to make sure that both training and predict features have values of 0 or 1 |
| leahmcguire <lmcguire@salesforce.com> |
| 2015-05-13 14:13:19 -0700 |
| Commit: d9fb905, github.com/apache/spark/pull/6073 |
| |
| [SPARK-7593] [ML] Python Api for ml.feature.Bucketizer |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-13 13:21:36 -0700 |
| Commit: 11911b0, github.com/apache/spark/pull/6124 |
| |
| [SPARK-7551][DataFrame] support backticks for DataFrame attribute resolution |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-13 12:47:48 -0700 |
| Commit: 3a60bcb, github.com/apache/spark/pull/6074 |
| |
| [SPARK-7567] [SQL] Migrating Parquet data source to FSBasedRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 11:04:10 -0700 |
| Commit: 90f304b, github.com/apache/spark/pull/6090 |
| |
| [SPARK-7589] [STREAMING] [WEBUI] Make "Input Rate" in the Streaming page consistent with other pages |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 10:01:26 -0700 |
| Commit: 10007fb, github.com/apache/spark/pull/6102 |
| |
| [SPARK-6734] [SQL] Add UDTF.close support in Generate |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-14 00:14:59 +0800 |
| Commit: 42cf4a2, github.com/apache/spark/pull/5383 |
| |
| [MINOR] [SQL] Removes debugging println |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 23:40:13 +0800 |
| Commit: d78f0e1, github.com/apache/spark/pull/6123 |
| |
| [SQL] In InsertIntoFSBasedRelation.insert, log cause before abort job/task. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-13 23:36:19 +0800 |
| Commit: 9ca28d9, github.com/apache/spark/pull/6105 |
| |
| [SPARK-7599] [SQL] Don't restrict customized output committers to be subclasses of FileOutputCommitter |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 07:35:55 -0700 |
| Commit: cb1fe81, github.com/apache/spark/pull/6118 |
| |
| [SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has space in its path |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp>, Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-13 09:43:40 +0100 |
| Commit: bfdecac, github.com/apache/spark/pull/5447 |
| |
| [SPARK-7526] [SPARKR] Specify ip of RBackend, MonitorServer and RRDD Socket server |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-12 23:55:44 -0700 |
| Commit: 7bd5274, github.com/apache/spark/pull/6053 |
| |
| [SPARK-7482] [SPARKR] Rename some DataFrame API methods in SparkR to match their counterparts in Scala. |
| Sun Rui <rui.sun@intel.com> |
| 2015-05-12 23:52:30 -0700 |
| Commit: b18f1c6, github.com/apache/spark/pull/6007 |
| |
| [SPARK-7566][SQL] Add type to HiveContext.analyzer |
| Santiago M. Mola <santi@mola.io> |
| 2015-05-12 23:44:21 -0700 |
| Commit: 6ff3379, github.com/apache/spark/pull/6086 |
| |
| [SPARK-7321][SQL] Add Column expression for conditional statements (when/otherwise) |
| Reynold Xin <rxin@databricks.com>, kaka1992 <kaka_1992@163.com> |
| 2015-05-12 21:43:34 -0700 |
| Commit: 219a904, github.com/apache/spark/pull/6072 |
| |
| [SPARK-7588] Document all SQL/DataFrame public methods with @since tag |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-12 18:37:02 -0700 |
| Commit: bdd5db9, github.com/apache/spark/pull/6101 |
| |
| [HOTFIX] Use the old Job API to support old Hadoop versions |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-13 08:33:24 +0800 |
| Commit: 2cc3301, github.com/apache/spark/pull/6095 |
| |
| [SPARK-7572] [MLLIB] do not import Param/Params under pyspark.ml |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 17:15:39 -0700 |
| Commit: 08ec1af, github.com/apache/spark/pull/6094 |
| |
| [SPARK-7554] [STREAMING] Throw exception when an active/stopped StreamingContext is used to create DStreams and output operations |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 17:07:21 -0700 |
| Commit: bb81b15, github.com/apache/spark/pull/6099 |
| |
| [SPARK-7528] [MLLIB] make RankingMetrics Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 16:53:47 -0700 |
| Commit: 6c292a2, github.com/apache/spark/pull/6098 |
| |
| [SPARK-7553] [STREAMING] Added methods to maintain a singleton StreamingContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 16:44:14 -0700 |
| Commit: 91fbd93, github.com/apache/spark/pull/6070 |
| |
| [SPARK-7573] [ML] OneVsRest cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-12 16:42:30 -0700 |
| Commit: 612247f, github.com/apache/spark/pull/6097 |
| |
| [SPARK-7557] [ML] [DOC] User guide for spark.ml HashingTF, Tokenizer |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-12 16:39:56 -0700 |
| Commit: d080df1, github.com/apache/spark/pull/6093 |
| |
| [SPARK-7496] [MLLIB] Update Programming guide with Online LDA |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-05-12 15:12:29 -0700 |
| Commit: fe34a59, github.com/apache/spark/pull/6046 |
| |
| [SPARK-7406] [STREAMING] [WEBUI] Add tooltips for "Scheduling Delay", "Processing Time" and "Total Delay" |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-12 14:41:21 -0700 |
| Commit: 221375e, github.com/apache/spark/pull/5952 |
| |
| [SPARK-7571] [MLLIB] rename Math to math |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 14:39:03 -0700 |
| Commit: 2555517, github.com/apache/spark/pull/6092 |
| |
| [SPARK-7484][SQL]Support jdbc connection properties |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-05-12 14:37:23 -0700 |
| Commit: 32819fc, github.com/apache/spark/pull/6009 |
| |
| [SPARK-7559] [MLLIB] Bucketizer should include the right most boundary in the last bucket. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-12 14:24:26 -0700 |
| Commit: 98ccd93, github.com/apache/spark/pull/6075 |
| |
| [SPARK-7569][SQL] Better error for invalid binary expressions |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-12 13:36:55 -0700 |
| Commit: c68485e, github.com/apache/spark/pull/6089 |
| |
| [SPARK-7015] [MLLIB] [WIP] Multiclass to Binary Reduction: One Against All |
| Ram Sriharsha <rsriharsha@hw11853.local> |
| 2015-05-12 13:35:12 -0700 |
| Commit: fd16709, github.com/apache/spark/pull/5830 |
| |
| [SPARK-2018] [CORE] Upgrade LZF library to fix endian serialization p… |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-12 20:48:26 +0100 |
| Commit: eadda92, github.com/apache/spark/pull/6077 |
| |
| [SPARK-7487] [ML] Feature Parity in PySpark for ml.regression |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-12 12:17:05 -0700 |
| Commit: 432694c, github.com/apache/spark/pull/6016 |
| |
| [HOT FIX #6076] DAG visualization: curve the edges |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 12:06:30 -0700 |
| Commit: ce6c400 |
| |
| [SPARK-7276] [DATAFRAME] speed up DataFrame.select by collapsing Project |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 11:51:55 -0700 |
| Commit: 8be43f8, github.com/apache/spark/pull/5831 |
| |
| [SPARK-7500] DAG visualization: move cluster labeling to dagre-d3 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 11:17:59 -0700 |
| Commit: a236104, github.com/apache/spark/pull/6076 |
| |
| [DataFrame][minor] support column in field accessor |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 10:37:57 -0700 |
| Commit: ec89286, github.com/apache/spark/pull/6080 |
| |
| [SPARK-3928] [SPARK-5182] [SQL] Partitioning support for the data sources API |
| Cheng Lian <lian@databricks.com> |
| 2015-05-13 01:32:28 +0800 |
| Commit: d232813, github.com/apache/spark/pull/5526 |
| |
| [DataFrame][minor] cleanup unapply methods in DataTypes |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-12 10:28:40 -0700 |
| Commit: a9d84a9, github.com/apache/spark/pull/6079 |
| |
| [SPARK-6876] [PySpark] [SQL] add DataFrame na.replace in pyspark |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-12 10:23:41 -0700 |
| Commit: 653db0a, github.com/apache/spark/pull/6003 |
| |
| [SPARK-7532] [STREAMING] StreamingContext.start() made to logWarning and not throw exception |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-12 08:48:24 -0700 |
| Commit: 2bbb685, github.com/apache/spark/pull/6060 |
| |
| [SPARK-7467] Dag visualization: treat checkpoint as an RDD operation |
| Andrew Or <andrew@databricks.com> |
| 2015-05-12 01:40:55 -0700 |
| Commit: 5601632, github.com/apache/spark/pull/6004 |
| |
| [SPARK-7485] [BUILD] Remove pyspark files from assembly. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-12 01:39:21 -0700 |
| Commit: afe54b7, github.com/apache/spark/pull/6022 |
| |
| [MINOR] [PYSPARK] Set PYTHONPATH to python/lib/pyspark.zip rather than python/pyspark |
| linweizhong <linweizhong@huawei.com> |
| 2015-05-12 01:36:27 -0700 |
| Commit: 4092a2e, github.com/apache/spark/pull/6047 |
| |
| [SPARK-7534] [CORE] [WEBUI] Fix the Stage table when a stage is missing |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-12 01:34:33 -0700 |
| Commit: af374ed, github.com/apache/spark/pull/6061 |
| |
| [SPARK-6994][SQL] Update docs for fetching Row fields by name |
| vidmantas zemleris <vidmantas@vinted.com> |
| 2015-05-11 22:29:24 -0700 |
| Commit: 6523fb8, github.com/apache/spark/pull/6030 |
| |
| [SQL] Rename Dialect -> ParserDialect. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 22:06:56 -0700 |
| Commit: c6b8148, github.com/apache/spark/pull/6071 |
| |
| [SPARK-7435] [SPARKR] Make DataFrame.show() consistent with that of Scala and pySpark |
| Joshi <rekhajoshm@gmail.com>, Rekha Joshi <rekhajoshm@gmail.com> |
| 2015-05-11 21:02:34 -0700 |
| Commit: 835a770, github.com/apache/spark/pull/5989 |
| |
| [SPARK-7509][SQL] DataFrame.drop in Python for dropping columns. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 20:04:36 -0700 |
| Commit: ed40ab5, github.com/apache/spark/pull/6068 |
| |
| [SPARK-7437] [SQL] Fold "literal in (item1, item2, ..., literal, ...)" into true or false directly |
| Zhongshuai Pei <799203320@qq.com>, DoingDone9 <799203320@qq.com> |
| 2015-05-11 19:22:44 -0700 |
| Commit: c30982d, github.com/apache/spark/pull/5972 |
| |
| [SPARK-7411] [SQL] Support SerDe for HiveQl in CTAS |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-11 19:21:16 -0700 |
| Commit: 1a664a0, github.com/apache/spark/pull/5963 |
| |
| [SPARK-7324] [SQL] DataFrame.dropDuplicates |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 19:15:14 -0700 |
| Commit: 8a9d234, github.com/apache/spark/pull/6066 |
| |
| [SPARK-7530] [STREAMING] Added StreamingContext.getState() to expose the current state of the context |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-11 18:53:50 -0700 |
| Commit: c16b47f, github.com/apache/spark/pull/6058 |
| |
| [SPARK-5893] [ML] Add bucketizer |
| Xusen Yin <yinxusen@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-11 18:41:22 -0700 |
| Commit: f188815, github.com/apache/spark/pull/5980 |
| |
| Updated DataFrame.saveAsTable Hive warning to include SPARK-7550 ticket. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 18:10:45 -0700 |
| Commit: e1e599d, github.com/apache/spark/pull/6067 |
| |
| [SPARK-7462][SQL] Update documentation for retaining grouping columns in DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 18:07:12 -0700 |
| Commit: eaa6116, github.com/apache/spark/pull/6062 |
| |
| [SPARK-7084] improve saveAsTable documentation |
| madhukar <phatak.dev@gmail.com> |
| 2015-05-11 17:04:11 -0700 |
| Commit: 0dbfe16, github.com/apache/spark/pull/5654 |
| |
| [SQL] Show better error messages for incorrect join types in DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-11 17:02:11 -0700 |
| Commit: 0ff34f80, github.com/apache/spark/pull/6064 |
| |
| Update Documentation: leftsemi instead of semijoin |
| LCY Vincent <lauchunyin@gmail.com> |
| 2015-05-11 14:48:10 -0700 |
| Commit: 788503a, github.com/apache/spark/pull/5944 |
| |
| [STREAMING] [MINOR] Close files correctly when iterator is finished in streaming WAL recovery |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-11 14:38:58 -0700 |
| Commit: 9e226e1, github.com/apache/spark/pull/6050 |
| |
| [SPARK-7516] [Minor] [DOC] Replace depreciated inferSchema() with createDataFrame() |
| gchen <chenguancheng@gmail.com> |
| 2015-05-11 14:37:18 -0700 |
| Commit: 1538b10, github.com/apache/spark/pull/6041 |
| |
| [SPARK-7508] JettyUtils-generated servlets to log & report all errors |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-05-11 13:35:06 -0700 |
| Commit: 779174a, github.com/apache/spark/pull/6033 |
| |
| [SPARK-7462] By default retain group by columns in aggregate |
| Reynold Xin <rxin@databricks.com>, Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-11 11:35:16 -0700 |
| Commit: 9c35f02, github.com/apache/spark/pull/5996 |
| |
| [SPARK-7361] [STREAMING] Throw unambiguous exception when attempting to start multiple StreamingContexts in the same JVM |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-11 10:58:56 -0700 |
| Commit: 11648fa, github.com/apache/spark/pull/5907 |
| |
| [SPARK-7522] [EXAMPLES] Removed angle brackets from dataFormat option |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-05-11 09:23:47 -0700 |
| Commit: c234d78, github.com/apache/spark/pull/6049 |
| |
| [SPARK-6092] [MLLIB] Add RankingMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-11 09:14:20 -0700 |
| Commit: 017f9fa, github.com/apache/spark/pull/6044 |
| |
| [SPARK-7326] [STREAMING] Performing window() on a WindowedDStream doesn't work all the time |
| Wesley Miao <wesley.miao@gmail.com>, Wesley <wesley.miao@autodesk.com> |
| 2015-05-11 12:20:06 +0100 |
| Commit: da1be15, github.com/apache/spark/pull/5871 |
| |
| [SPARK-7519] [SQL] fix minor bugs in thrift server UI |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-05-11 14:08:15 +0800 |
| Commit: fff3c86, github.com/apache/spark/pull/6048 |
| |
| [SPARK-7512] [SPARKR] Fix RDD's show method to use getJRDD |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-10 19:49:42 -0700 |
| Commit: 5f227fd, github.com/apache/spark/pull/6035 |
| |
| [SPARK-7427] [PYSPARK] Make sharedParams match in Scala, Python |
| Glenn Weidner <gweidner@us.ibm.com> |
| 2015-05-10 19:18:32 -0700 |
| Commit: 051864e, github.com/apache/spark/pull/6023 |
| |
| [SPARK-5521] PCA wrapper for easy transform vectors |
| Kirill A. Korinskiy <catap@catap.ru>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-10 13:34:00 -0700 |
| Commit: 193ff69, github.com/apache/spark/pull/4304 |
| |
| [SPARK-7431] [ML] [PYTHON] Made CrossValidatorModel call parent init in PySpark |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-10 13:29:27 -0700 |
| Commit: d49b72c, github.com/apache/spark/pull/5968 |
| |
| [MINOR] [SQL] Fixes variable name typo |
| Cheng Lian <lian@databricks.com> |
| 2015-05-10 21:26:36 +0800 |
| Commit: fd87b2a, github.com/apache/spark/pull/6038 |
| |
| [SPARK-7345][SQL] Spark cannot detect renamed columns using JDBC connector |
| Oleg Sidorkin <oleg.sidorkin@gmail.com> |
| 2015-05-10 01:31:34 -0700 |
| Commit: 5c40403, github.com/apache/spark/pull/6032 |
| |
| [SPARK-6091] [MLLIB] Add MulticlassMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-10 00:57:14 -0700 |
| Commit: fe46374, github.com/apache/spark/pull/6011 |
| |
| [SPARK-7475] [MLLIB] adjust ldaExample for online LDA |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-05-09 15:40:46 -0700 |
| Commit: e96fc86, github.com/apache/spark/pull/6000 |
| |
| [BUILD] Reference fasterxml.jackson.version in sql/core/pom.xml |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-09 13:19:07 -0700 |
| Commit: 5110f3e, github.com/apache/spark/pull/6031 |
| |
| Upgrade version of jackson-databind in sql/core/pom.xml |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-09 10:41:30 -0700 |
| Commit: 6c5b9ff, github.com/apache/spark/pull/6028 |
| |
| [STREAMING] [DOCS] Fix wrong url about API docs of StreamingListener |
| dobashim <dobashim@oss.nttdata.co.jp> |
| 2015-05-09 10:14:46 +0100 |
| Commit: 5dbc7bb, github.com/apache/spark/pull/6024 |
| |
| [SPARK-7403] [WEBUI] Link URL in objects on Timeline View is wrong in case of running on YARN |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-09 10:10:29 +0100 |
| Commit: 869a52d, github.com/apache/spark/pull/5947 |
| |
| [SPARK-7438] [SPARK CORE] Fixed validation of relativeSD in countApproxDistinct |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-05-09 10:03:15 +0100 |
| Commit: b0460f4, github.com/apache/spark/pull/5974 |
| |
| [SPARK-7498] [ML] removed varargs annotation from Params.setDefaults |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-08 21:55:54 -0700 |
| Commit: 25972d3, github.com/apache/spark/pull/6021 |
| |
| [SPARK-7262] [ML] Binary LogisticRegression with L1/L2 (elastic net) using OWLQN in new ML package |
| DB Tsai <dbt@netflix.com> |
| 2015-05-08 21:43:05 -0700 |
| Commit: 80bbe72, github.com/apache/spark/pull/5967 |
| |
| [SPARK-7375] [SQL] Avoid row copying in exchange when sort.serializeMapOutputs takes effect |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-08 22:09:55 -0400 |
| Commit: 21212a2, github.com/apache/spark/pull/5948 |
| |
| [SPARK-7231] [SPARKR] Changes to make SparkR DataFrame dplyr friendly. |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-08 18:29:57 -0700 |
| Commit: 448ff33, github.com/apache/spark/pull/6005 |
| |
| [SPARK-7451] [YARN] Preemption of executors is counted as failure causing Spark job to fail |
| Ashwin Shankar <ashankar@netflix.com> |
| 2015-05-08 17:51:00 -0700 |
| Commit: 959c7b6, github.com/apache/spark/pull/5993 |
| |
| [SPARK-7488] [ML] Feature Parity in PySpark for ml.recommendation |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-08 17:24:32 -0700 |
| Commit: 85cab34, github.com/apache/spark/pull/6015 |
| |
| [SPARK-7237] Clean function in several RDD methods |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-08 17:16:38 -0700 |
| Commit: 45b6215, github.com/apache/spark/pull/5959 |
| |
| [SPARK-7469] [SQL] DAG visualization: show SQL query operators |
| Andrew Or <andrew@databricks.com> |
| 2015-05-08 17:15:10 -0700 |
| Commit: cafffd0, github.com/apache/spark/pull/5999 |
| |
| [SPARK-6955] Perform port retries at NettyBlockTransferService level |
| Aaron Davidson <aaron@databricks.com> |
| 2015-05-08 17:13:55 -0700 |
| Commit: 1eae476, github.com/apache/spark/pull/5575 |
| |
| updated ec2 instance types |
| Brendan Collins <bcollins@blueraster.com> |
| 2015-05-08 15:59:34 -0700 |
| Commit: 6e35cb5, github.com/apache/spark/pull/6014 |
| |
| [SPARK-5913] [MLLIB] Python API for ChiSqSelector |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-08 15:48:39 -0700 |
| Commit: ab48df3, github.com/apache/spark/pull/5939 |
| |
| [SPARK-4699] [SQL] Make caseSensitive configurable in spark sql analyzer |
| Jacky Li <jacky.likun@huawei.com>, wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-05-08 15:25:54 -0700 |
| Commit: 21bd722, github.com/apache/spark/pull/5806 |
| |
| [SPARK-7390] [SQL] Only merge other CovarianceCounter when its count is greater than zero |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-08 14:41:16 -0700 |
| Commit: 5205eb4, github.com/apache/spark/pull/5931 |
| |
| [SPARK-7378] [CORE] Handle deep links to unloaded apps. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-08 14:12:58 -0700 |
| Commit: 3024f6b, github.com/apache/spark/pull/5922 |
| |
| [MINOR] [CORE] Allow History Server to read kerberos opts from config file. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-08 14:10:27 -0700 |
| Commit: 3da5f8b, github.com/apache/spark/pull/5998 |
| |
| [SPARK-7466] DAG visualization: fix orphan nodes |
| Andrew Or <andrew@databricks.com> |
| 2015-05-08 14:09:39 -0700 |
| Commit: ca2f1c5, github.com/apache/spark/pull/6002 |
| |
| [MINOR] Defeat early garbage collection of test suite variable |
| Tim Ellison <t.p.ellison@gmail.com> |
| 2015-05-08 14:08:52 -0700 |
| Commit: f734c58, github.com/apache/spark/pull/6010 |
| |
| [SPARK-7489] [SPARK SHELL] Spark shell crashes when compiled with scala 2.11 |
| vinodkc <vinod.kc.in@gmail.com> |
| 2015-05-08 14:07:53 -0700 |
| Commit: 3b7fb7a, github.com/apache/spark/pull/6013 |
| |
| [WEBUI] Remove debug feature for vis.js |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-08 14:06:37 -0700 |
| Commit: 1dde3b3, github.com/apache/spark/pull/5994 |
| |
| [MINOR] Ignore python/lib/pyspark.zip |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-08 14:06:02 -0700 |
| Commit: ab0caa0, github.com/apache/spark/pull/6017 |
| |
| [SPARK-7490] [CORE] [Minor] MapOutputTracker.deserializeMapStatuses: close input streams |
| Evan Jones <ejones@twitter.com> |
| 2015-05-08 22:00:39 +0100 |
| Commit: 6230809, github.com/apache/spark/pull/5982 |
| |
| [SPARK-6627] Finished rename to ShuffleBlockResolver |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-05-08 12:24:06 -0700 |
| Commit: 82be68f, github.com/apache/spark/pull/5764 |
| |
| [SPARK-7133] [SQL] Implement struct, array, and map field accessor |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-08 11:49:38 -0700 |
| Commit: f8468c4, github.com/apache/spark/pull/5744 |
| |
| [SPARK-7298] Harmonize style of new visualizations |
| Matei Zaharia <matei@databricks.com> |
| 2015-05-08 14:41:42 -0400 |
| Commit: 0b2c252, github.com/apache/spark/pull/5942 |
| |
| [SPARK-7436] Fixed instantiation of custom recovery mode factory and added tests |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-05-08 11:38:09 -0700 |
| Commit: 89d9487, github.com/apache/spark/pull/5976 |
| |
| [SPARK-6824] Fill the docs for DataFrame API in SparkR |
| hqzizania <qian.huang@intel.com>, qhuang <qian.huang@intel.com> |
| 2015-05-08 11:25:04 -0700 |
| Commit: 4f01f5b, github.com/apache/spark/pull/5969 |
| |
| [SPARK-7474] [MLLIB] update ParamGridBuilder doctest |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-08 11:16:04 -0700 |
| Commit: 75fed0c, github.com/apache/spark/pull/6001 |
| |
| [SPARK-7383] [ML] Feature Parity in PySpark for ml.features |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-08 11:14:39 -0700 |
| Commit: 85e1154, github.com/apache/spark/pull/5991 |
| |
| [SPARK-3454] separate json endpoints for data in the UI |
| Imran Rashid <irashid@cloudera.com> |
| 2015-05-08 16:54:32 +0100 |
| Commit: 532bfda, github.com/apache/spark/pull/5940 |
| |
| [SPARK-6869] [PYSPARK] Add pyspark archives path to PYTHONPATH |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-05-08 08:44:46 -0500 |
| Commit: acf4bc1, github.com/apache/spark/pull/5580 |
| |
| [SPARK-7392] [CORE] bugfix: Kryo buffer size cannot be larger than 2M |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-05-08 09:10:58 +0100 |
| Commit: f5e9678, github.com/apache/spark/pull/5934 |
| |
| [SPARK-7232] [SQL] Add a Substitution batch for spark sql analyzer |
| wangfei <wangfei1@huawei.com> |
| 2015-05-07 22:55:42 -0700 |
| Commit: bb5872f, github.com/apache/spark/pull/5776 |
| |
| [SPARK-7470] [SQL] Spark shell SQLContext crashes without hive |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 22:32:13 -0700 |
| Commit: 1a3e9e9, github.com/apache/spark/pull/5997 |
| |
| [SPARK-6986] [SQL] Use Serializer2 in more cases. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-07 20:59:42 -0700 |
| Commit: 9d0d289, github.com/apache/spark/pull/5849 |
| |
| [SPARK-7452] [MLLIB] fix bug in topBykey and update test |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-05-07 20:55:08 -0700 |
| Commit: 28d4238, github.com/apache/spark/pull/5990 |
| |
| [SPARK-6908] [SQL] Use isolated Hive client |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-07 19:36:24 -0700 |
| Commit: 05454fd, github.com/apache/spark/pull/5876 |
| |
| [SPARK-7305] [STREAMING] [WEBUI] Make BatchPage show friendly information when jobs are dropped by SparkListener |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-07 17:34:44 -0700 |
| Commit: 2e8a141, github.com/apache/spark/pull/5840 |
| |
| [SPARK-7450] Use UNSAFE.getLong() to speed up BitSetMethods#anySet() |
| tedyu <yuzhihong@gmail.com> |
| 2015-05-07 16:53:59 -0700 |
| Commit: 99897fe, github.com/apache/spark/pull/5897 |
| |
| [SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for "CASE a WHEN b THEN c * END" |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-05-07 16:26:49 -0700 |
| Commit: 622a0c5, github.com/apache/spark/pull/5979 |
| |
| [SPARK-5281] [SQL] Registering table on RDD is giving MissingRequirementError |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-05-07 16:24:11 -0700 |
| Commit: 9fd25f7, github.com/apache/spark/pull/5981 |
| |
| [SPARK-7277] [SQL] Throw exception if the property mapred.reduce.tasks is set to -1 |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-07 16:22:45 -0700 |
| Commit: 7064ea0, github.com/apache/spark/pull/5811 |
| |
| [SQL] [MINOR] make star and multialias extend NamedExpression |
| scwf <wangfei1@huawei.com> |
| 2015-05-07 16:21:24 -0700 |
| Commit: 2425e4d, github.com/apache/spark/pull/5928 |
| |
| [SPARK-6948] [MLLIB] compress vectors in VectorAssembler |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-07 15:45:37 -0700 |
| Commit: 475143a, github.com/apache/spark/pull/5985 |
| |
| [SPARK-5726] [MLLIB] Elementwise (Hadamard) Vector Product Transformer |
| Octavian Geagla <ogeagla@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 14:49:55 -0700 |
| Commit: 76e58b5, github.com/apache/spark/pull/4580 |
| |
| [SPARK-7328] [MLLIB] [PYSPARK] Pyspark.mllib.linalg.Vectors: Missing items |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-07 14:02:05 -0700 |
| Commit: 4436e26, github.com/apache/spark/pull/5872 |
| |
| [SPARK-7347] DAG visualization: add tooltips to RDDs |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 12:29:56 -0700 |
| Commit: 1b742a4, github.com/apache/spark/pull/5957 |
| |
| [SPARK-7391] DAG visualization: auto expand if linked from another viz |
| Andrew Or <andrew@databricks.com> |
| 2015-05-07 12:29:18 -0700 |
| Commit: 800c0fc, github.com/apache/spark/pull/5958 |
| |
| [SPARK-7373] [MESOS] Add docker support for launching drivers in mesos cluster mode. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-05-07 12:23:16 -0700 |
| Commit: 226033c, github.com/apache/spark/pull/5917 |
| |
| [SPARK-7399] [SPARK CORE] Fixed compilation error in scala 2.11 |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-05-07 12:21:09 -0700 |
| Commit: d4e31bf, github.com/apache/spark/pull/5966 |
| |
| [SPARK-5213] [SQL] Remove the duplicated SparkSQLParser |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-07 12:09:54 -0700 |
| Commit: 2b0c423, github.com/apache/spark/pull/5965 |
| |
| [SPARK-7116] [SQL] [PYSPARK] Remove cache() causing memory leak |
| ksonj <kson@siberie.de> |
| 2015-05-07 12:04:19 -0700 |
| Commit: 86f141c, github.com/apache/spark/pull/5973 |
| |
| [SPARK-1442] [SQL] [FOLLOW-UP] Address minor comments in Window Function PR (#5604). |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-07 11:46:49 -0700 |
| Commit: 9dcf4f7, github.com/apache/spark/pull/5945 |
| |
| [SPARK-6093] [MLLIB] Add RegressionMetrics in PySpark/MLlib |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-05-07 11:18:32 -0700 |
| Commit: ef835dc, github.com/apache/spark/pull/5941 |
| |
| [SPARK-7118] [Python] Add the coalesce Spark SQL function available in PySpark |
| Olivier Girardot <o.girardot@lateral-thoughts.com> |
| 2015-05-07 10:58:35 -0700 |
| Commit: 3038b26, github.com/apache/spark/pull/5698 |
| |
| [SPARK-7388] [SPARK-7383] wrapper for VectorAssembler in Python |
| Burak Yavuz <brkyvz@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-05-07 10:25:41 -0700 |
| Commit: 6b9737a, github.com/apache/spark/pull/5930 |
| |
| [SPARK-7330] [SQL] avoid NPE at jdbc rdd |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-07 10:05:01 -0700 |
| Commit: 84ee348, github.com/apache/spark/pull/5877 |
| |
| [SPARK-7429] [ML] Params cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 01:28:44 -0700 |
| Commit: 91ce131, github.com/apache/spark/pull/5960 |
| |
| [SPARK-7421] [MLLIB] OnlineLDA cleanups |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-07 01:12:14 -0700 |
| Commit: a038c51, github.com/apache/spark/pull/5956 |
| |
| [SPARK-7035] Encourage __getitem__ over __getattr__ on column access in the Python DataFrame API |
| ksonj <kson@siberie.de> |
| 2015-05-07 01:02:00 -0700 |
| Commit: b929a75, github.com/apache/spark/pull/5971 |
| |
| [SPARK-7295][SQL] bitwise operations for DataFrame DSL |
| Shiti <ssaxena.ece@gmail.com> |
| 2015-05-07 01:00:29 -0700 |
| Commit: 703211b, github.com/apache/spark/pull/5867 |
| |
| [SPARK-7217] [STREAMING] Add configuration to control the default behavior of StreamingContext.stop() implicitly calling SparkContext.stop() |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-07 00:24:44 -0700 |
| Commit: cb13c98, github.com/apache/spark/pull/5929 |
| |
| [SPARK-7430] [STREAMING] [TEST] General improvements to streaming tests to increase debuggability |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-07 00:21:10 -0700 |
| Commit: 065d114, github.com/apache/spark/pull/5961 |
| |
| [SPARK-5938] [SPARK-5443] [SQL] Improve JsonRDD performance |
| Nathan Howell <nhowell@godaddy.com> |
| 2015-05-06 22:56:53 -0700 |
| Commit: 2337ccc1, github.com/apache/spark/pull/5801 |
| |
| [SPARK-6812] [SPARKR] filter() on DataFrame does not work as expected. |
| Sun Rui <rui.sun@intel.com> |
| 2015-05-06 22:48:16 -0700 |
| Commit: 4948f42, github.com/apache/spark/pull/5938 |
| |
| [SPARK-7432] [MLLIB] disable cv doctest |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-06 22:29:07 -0700 |
| Commit: fb4967b, github.com/apache/spark/pull/5962 |
| |
| [SPARK-7405] [STREAMING] Fix the bug that ReceiverInputDStream doesn't report InputInfo |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-06 18:07:00 -0700 |
| Commit: d6e76cb, github.com/apache/spark/pull/5950 |
| |
| [HOT FIX] For DAG visualization #5954 |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 18:02:08 -0700 |
| Commit: 85a644b |
| |
| [SPARK-7371] [SPARK-7377] [SPARK-7408] DAG visualization addendum (#5729) |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 17:52:34 -0700 |
| Commit: 76e8344, github.com/apache/spark/pull/5954 |
| |
| [SPARK-7396] [STREAMING] [EXAMPLE] Update KafkaWordCountProducer to use new Producer API |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-06 17:44:43 -0700 |
| Commit: ba24dfa, github.com/apache/spark/pull/5936 |
| |
| [SPARK-6799] [SPARKR] Remove SparkR RDD examples, add dataframe examples |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-06 17:28:11 -0700 |
| Commit: 4b91e18, github.com/apache/spark/pull/5949 |
| |
| [HOT FIX] [SPARK-7418] Ignore flaky SparkSubmitUtilsSuite test |
| Andrew Or <andrew@databricks.com> |
| 2015-05-06 17:08:39 -0700 |
| Commit: c0ec20a |
| |
| [SPARK-5995] [ML] Make Prediction dev API public |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-06 16:15:51 -0700 |
| Commit: b681b93, github.com/apache/spark/pull/5913 |
| |
| [HOT-FIX] Move HiveWindowFunctionQuerySuite.scala to hive compatibility dir. |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-06 14:48:25 -0700 |
| Commit: 14bcb84, github.com/apache/spark/pull/5951 |
| |
| Add `Private` annotation. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-06 11:03:17 -0700 |
| Commit: 2163367 |
| |
| [SPARK-7311] Introduce internal Serializer API for determining if serializers support object relocation |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-05-06 10:52:55 -0700 |
| Commit: d651e28, github.com/apache/spark/pull/5924 |
| |
| [SPARK-1442] [SQL] Window Function Support for Spark SQL |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-06 10:43:00 -0700 |
| Commit: b521a3b, github.com/apache/spark/pull/5604 |
| |
| [SPARK-6201] [SQL] promote string and do widen types for IN |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-06 10:30:42 -0700 |
| Commit: 7212897, github.com/apache/spark/pull/4945 |
| |
| [SPARK-5456] [SQL] fix decimal compare for jdbc rdd |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-05-06 10:05:10 -0700 |
| Commit: f1a5caf, github.com/apache/spark/pull/5803 |
| |
| [SQL] JavaDoc update for various DataFrame functions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-06 08:50:56 -0700 |
| Commit: 389b755, github.com/apache/spark/pull/5935 |
| |
| [SPARK-6940] [MLLIB] Add CrossValidator to Python ML pipeline API |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-06 01:28:43 -0700 |
| Commit: 3e27a54, github.com/apache/spark/pull/5926 |
| |
| [SPARK-7384][Core][Tests] Fix flaky tests for distributed mode in BroadcastSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 23:25:28 -0700 |
| Commit: 20f9237, github.com/apache/spark/pull/5925 |
| |
| [SPARK-6267] [MLLIB] Python API for IsotonicRegression |
| Yanbo Liang <ybliang8@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-05-05 22:57:13 -0700 |
| Commit: 384ac3c, github.com/apache/spark/pull/5890 |
| |
| [SPARK-7358][SQL] Move DataFrame mathfunctions into functions |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-05 22:56:01 -0700 |
| Commit: 8aa6681, github.com/apache/spark/pull/5923 |
| |
| [SPARK-6841] [SPARKR] add support for mean, median, stdev etc. |
| qhuang <qian.huang@intel.com> |
| 2015-05-05 20:39:56 -0700 |
| Commit: b5cd7dc, github.com/apache/spark/pull/5446 |
| |
| Revert "[SPARK-3454] separate json endpoints for data in the UI" |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-05 19:28:35 -0700 |
| Commit: 765f6e1 |
| |
| [SPARK-6231][SQL/DF] Automatically resolve join condition ambiguity for self-joins. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-05 18:59:46 -0700 |
| Commit: e61083c, github.com/apache/spark/pull/5919 |
| |
| Some minor cleanup after SPARK-4550. |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 18:32:16 -0700 |
| Commit: 762ff2e, github.com/apache/spark/pull/5916 |
| |
| [SPARK-7230] [SPARKR] Make RDD private in SparkR. |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-05-05 14:40:33 -0700 |
| Commit: 4afb578, github.com/apache/spark/pull/5895 |
| |
| [SQL][Minor] make StringComparison extends ExpectsInputTypes |
| wangfei <wangfei1@huawei.com> |
| 2015-05-05 14:24:37 -0700 |
| Commit: b6566a2, github.com/apache/spark/pull/5905 |
| |
| [SPARK-7351] [STREAMING] [DOCS] Add spark.streaming.ui.retainedBatches to docs |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 13:42:23 -0700 |
| Commit: 4c95fe5, github.com/apache/spark/pull/5899 |
| |
| [SPARK-7294][SQL] ADD BETWEEN |
| 云峤 <chensong.cs@alibaba-inc.com>, kaka1992 <kaka_1992@163.com> |
| 2015-05-05 13:23:53 -0700 |
| Commit: c68d0e2, github.com/apache/spark/pull/5839 |
| |
| [SPARK-6939] [STREAMING] [WEBUI] Add timeline and histogram graphs for streaming statistics |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 12:52:16 -0700 |
| Commit: 8109c9e, github.com/apache/spark/pull/5533 |
| |
| [SPARK-5888] [MLLIB] Add OneHotEncoder as a Transformer |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 12:34:02 -0700 |
| Commit: 94ac9eb, github.com/apache/spark/pull/5500 |
| |
| [SPARK-7333] [MLLIB] Add BinaryClassificationEvaluator to PySpark |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-05 11:45:37 -0700 |
| Commit: dfb6bfc, github.com/apache/spark/pull/5885 |
| |
| [SPARK-7243][SQL] Reduce size for Contingency Tables in DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-05 11:01:25 -0700 |
| Commit: 598902b, github.com/apache/spark/pull/5900 |
| |
| [SPARK-7007] [CORE] Add a metric source for ExecutorAllocationManager |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-05 09:43:49 -0700 |
| Commit: 29350ee, github.com/apache/spark/pull/5589 |
| |
| [SPARK-7318] [STREAMING] DStream cleans objects that are not closures |
| Andrew Or <andrew@databricks.com> |
| 2015-05-05 09:37:49 -0700 |
| Commit: acc877a, github.com/apache/spark/pull/5860 |
| |
| [SPARK-7237] Many user provided closures are not actually cleaned |
| Andrew Or <andrew@databricks.com> |
| 2015-05-05 09:37:04 -0700 |
| Commit: 01d4022, github.com/apache/spark/pull/5787 |
| |
| [SPARK-6612] [MLLIB] [PYSPARK] Python KMeans parity |
| Hrishikesh Subramonian <hrishikesh.subramonian@flytxt.com> |
| 2015-05-05 07:57:39 -0700 |
| Commit: 8b63103, github.com/apache/spark/pull/5647 |
| |
| [SPARK-7202] [MLLIB] [PYSPARK] Add SparseMatrixPickler to SerDe |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-05-05 07:53:11 -0700 |
| Commit: cd55e9a, github.com/apache/spark/pull/5775 |
| |
| [SPARK-7350] [STREAMING] [WEBUI] Attach the Streaming tab when calling ssc.start() |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 15:09:58 +0100 |
| Commit: 49923f7, github.com/apache/spark/pull/5898 |
| |
| [SPARK-5074] [CORE] [TESTS] Fix the flakey test 'run shuffle with map stage failure' in DAGSchedulerSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 15:04:14 +0100 |
| Commit: 6f35dac, github.com/apache/spark/pull/5903 |
| |
| [MINOR] Minor update for document |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-05 14:44:02 +0100 |
| Commit: d288322, github.com/apache/spark/pull/5906 |
| |
| [SPARK-3454] separate json endpoints for data in the UI |
| Imran Rashid <irashid@cloudera.com> |
| 2015-05-05 07:25:40 -0500 |
| Commit: ff8b449, github.com/apache/spark/pull/4435 |
| |
| [SPARK-5112] Expose SizeEstimator as a developer api |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-05 12:38:46 +0100 |
| Commit: 0327ca2, github.com/apache/spark/pull/3913 |
| |
| [SPARK-6653] [YARN] New config to specify port for sparkYarnAM actor system |
| shekhar.bansal <shekhar.bansal@guavus.com> |
| 2015-05-05 11:09:51 +0100 |
| Commit: 93af96a, github.com/apache/spark/pull/5719 |
| |
| [SPARK-7341] [STREAMING] [TESTS] Fix the flaky test: org.apache.spark.stre... |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-05 02:15:39 -0700 |
| Commit: 0634510, github.com/apache/spark/pull/5891 |
| |
| [SPARK-7113] [STREAMING] Support input information reporting for Direct Kafka stream |
| jerryshao <saisai.shao@intel.com> |
| 2015-05-05 02:01:06 -0700 |
| Commit: becdb81, github.com/apache/spark/pull/5879 |
| |
| [HOTFIX] [TEST] Ignoring flaky tests |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-05 01:58:51 -0700 |
| Commit: e8f847a, github.com/apache/spark/pull/5901 |
| |
| [SPARK-7139] [STREAMING] Allow received block metadata to be saved to WAL and recovered on driver failure |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-05 01:45:19 -0700 |
| Commit: ae27c0e, github.com/apache/spark/pull/5732 |
| |
| [MINOR] [BUILD] Declare ivy dependency in root pom. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-05 08:56:16 +0100 |
| Commit: 5160437, github.com/apache/spark/pull/5893 |
| |
| [SPARK-7314] [SPARK-3524] [PYSPARK] upgrade Pyrolite to 4.4 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-04 23:52:42 -0700 |
| Commit: 21ed108, github.com/apache/spark/pull/5850 |
| |
| [SPARK-7236] [CORE] Fix to prevent AkkaUtils askWithReply from sleeping on final attempt |
| Bryan Cutler <bjcutler@us.ibm.com> |
| 2015-05-04 18:29:22 -0700 |
| Commit: 48655d1, github.com/apache/spark/pull/5896 |
| |
| [SPARK-7266] Add ExpectsInputTypes to expressions when possible. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-04 18:03:07 -0700 |
| Commit: 1388a46, github.com/apache/spark/pull/5796 |
| |
| [SPARK-7243][SQL] Contingency Tables for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-04 17:02:49 -0700 |
| Commit: ecf0d8a, github.com/apache/spark/pull/5842 |
| |
| [SPARK-6943] [SPARK-6944] DAG visualization on SparkUI |
| Andrew Or <andrew@databricks.com> |
| 2015-05-04 16:21:36 -0700 |
| Commit: 863ec0c, github.com/apache/spark/pull/5729 |
| |
| [SPARK-7319][SQL] Improve the output from DataFrame.show() |
| 云峤 <chensong.cs@alibaba-inc.com> |
| 2015-05-04 12:08:38 -0700 |
| Commit: 34edaa8, github.com/apache/spark/pull/5865 |
| |
| [SPARK-5956] [MLLIB] Pipeline components should be copyable. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-04 11:28:59 -0700 |
| Commit: 893b310, github.com/apache/spark/pull/5820 |
| |
| [SPARK-5100] [SQL] add webui for thriftserver |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-05-04 16:59:34 +0800 |
| Commit: 343d3bf, github.com/apache/spark/pull/5730 |
| |
| [SPARK-5563] [MLLIB] LDA with online variational inference |
| Yuhao Yang <hhbyyh@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-05-04 00:06:25 -0700 |
| Commit: 3539cb7, github.com/apache/spark/pull/4419 |
| |
| [SPARK-7241] Pearson correlation for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-03 21:44:39 -0700 |
| Commit: 9646018, github.com/apache/spark/pull/5858 |
| |
| [SPARK-7329] [MLLIB] simplify ParamGridBuilder impl |
| Xiangrui Meng <meng@databricks.com> |
| 2015-05-03 18:06:48 -0700 |
| Commit: 1ffa8cb, github.com/apache/spark/pull/5873 |
| |
| [SPARK-7302] [DOCS] SPARK building documentation still mentions building for yarn 0.23 |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-03 21:22:31 +0100 |
| Commit: 9e25b09, github.com/apache/spark/pull/5863 |
| |
| [SPARK-6907] [SQL] Isolated client for HiveMetastore |
| Michael Armbrust <michael@databricks.com> |
| 2015-05-03 13:12:50 -0700 |
| Commit: daa70bf, github.com/apache/spark/pull/5851 |
| |
| [SPARK-7022] [PYSPARK] [ML] Add ML.Tuning.ParamGridBuilder to PySpark |
| Omede Firouz <ofirouz@palantir.com>, Omede <omedefirouz@gmail.com> |
| 2015-05-03 11:42:02 -0700 |
| Commit: f4af925, github.com/apache/spark/pull/5601 |
| |
| [SPARK-7031] [THRIFTSERVER] let thrift server take SPARK_DAEMON_MEMORY and SPARK_DAEMON_JAVA_OPTS |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-05-03 00:47:47 +0100 |
| Commit: 49549d5, github.com/apache/spark/pull/5609 |
| |
| [SPARK-7255] [STREAMING] [DOCUMENTATION] Added documentation for spark.streaming.kafka.maxRetries |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-05-02 23:41:14 +0100 |
| Commit: ea841ef, github.com/apache/spark/pull/5808 |
| |
| [SPARK-5213] [SQL] Pluggable SQL Parser Support |
| Cheng Hao <hao.cheng@intel.com>, scwf <wangfei1@huawei.com> |
| 2015-05-02 15:20:07 -0700 |
| Commit: 5d6b90d, github.com/apache/spark/pull/5827 |
| |
| [MINOR] [HIVE] Fix QueryPartitionSuite. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-02 23:10:35 +0100 |
| Commit: 82c8c37, github.com/apache/spark/pull/5854 |
| |
| [SPARK-6030] [CORE] Using simulated field layout method to compute class shellSize |
| Ye Xianjin <advancedxy@gmail.com> |
| 2015-05-02 23:08:09 +0100 |
| Commit: bfcd528, github.com/apache/spark/pull/4783 |
| |
| [SPARK-7323] [SPARK CORE] Use insertAll instead of insert while merging combiners in reducer |
| Mridul Muralidharan <mridulm@yahoo-inc.com> |
| 2015-05-02 23:05:51 +0100 |
| Commit: da30352, github.com/apache/spark/pull/5862 |
| |
| [SPARK-3444] Fix typo in Dataframes.py introduced in [] |
| Dean Chen <deanchen5@gmail.com> |
| 2015-05-02 23:04:13 +0100 |
| Commit: 856a571, github.com/apache/spark/pull/5866 |
| |
| [SPARK-7315] [STREAMING] [TEST] Fix flaky WALBackedBlockRDDSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-05-02 01:53:14 -0700 |
| Commit: ecc6eb5, github.com/apache/spark/pull/5853 |
| |
| [SPARK-7120] [SPARK-7121] Closure cleaner nesting + documentation + tests |
| Andrew Or <andrew@databricks.com> |
| 2015-05-01 23:57:58 -0700 |
| Commit: 7394e7a, github.com/apache/spark/pull/5685 |
| |
| [SPARK-7242] added python api for freqItems in DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-01 23:43:24 -0700 |
| Commit: 2e0f357, github.com/apache/spark/pull/5859 |
| |
| [SPARK-7317] [Shuffle] Expose shuffle handle |
| Mridul Muralidharan <mridulm@yahoo-inc.com> |
| 2015-05-01 21:23:42 -0700 |
| Commit: b79aeb9, github.com/apache/spark/pull/5857 |
| |
| [SPARK-6229] Add SASL encryption to network library. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-01 19:01:46 -0700 |
| Commit: 38d4e9e, github.com/apache/spark/pull/5377 |
| |
| [SPARK-2691] [MESOS] Support for Mesos DockerInfo |
| Chris Heller <hellertime@gmail.com> |
| 2015-05-01 18:41:22 -0700 |
| Commit: 8f50a07, github.com/apache/spark/pull/3074 |
| |
| [SPARK-6443] [SPARK SUBMIT] Could not submit app in standalone cluster mode when HA is enabled |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-05-01 18:38:20 -0700 |
| Commit: b4b43df, github.com/apache/spark/pull/5116 |
| |
| [SPARK-7216] [MESOS] Add driver details page to Mesos cluster UI. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-05-01 18:36:42 -0700 |
| Commit: 2022193, github.com/apache/spark/pull/5763 |
| |
| [SPARK-6954] [YARN] ExecutorAllocationManager can end up requesting a negative n... |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-05-01 18:32:46 -0700 |
| Commit: 099327d, github.com/apache/spark/pull/5704 |
| |
| [SPARK-3444] Provide an easy way to change log level |
| Holden Karau <holden@pigscanfly.ca> |
| 2015-05-01 18:02:10 -0700 |
| Commit: ae98eec, github.com/apache/spark/pull/5791 |
| |
| [SPARK-2808][Streaming][Kafka] update kafka to 0.8.2 |
| cody koeninger <cody@koeninger.org>, Helena Edelson <helena.edelson@datastax.com> |
| 2015-05-01 17:54:56 -0700 |
| Commit: 4786484, github.com/apache/spark/pull/4537 |
| |
| [SPARK-7112][Streaming][WIP] Add a InputInfoTracker to track all the input streams |
| jerryshao <saisai.shao@intel.com>, Saisai Shao <saisai.shao@intel.com> |
| 2015-05-01 17:46:06 -0700 |
| Commit: b88c275, github.com/apache/spark/pull/5680 |
| |
| [SPARK-7309] [CORE] [STREAMING] Shutdown the thread pools in ReceivedBlockHandler and DAGScheduler |
| zsxwing <zsxwing@gmail.com> |
| 2015-05-01 17:41:55 -0700 |
| Commit: ebc25a4, github.com/apache/spark/pull/5845 |
| |
| [SPARK-6999] [SQL] Remove the infinite recursive method (useless) |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-05-01 19:39:30 -0500 |
| Commit: 98e7045, github.com/apache/spark/pull/5804 |
| |
| [SPARK-7304] [BUILD] Include $@ in call to mvn consistently in make-distribution.sh |
| Rajendra Gokhale (rvgcentos) <rvg@cloudera.com> |
| 2015-05-01 17:01:36 -0700 |
| Commit: e6fb377, github.com/apache/spark/pull/5846 |
| |
| [SPARK-7312][SQL] SPARK-6913 broke jdk6 build |
| Yin Huai <yhuai@databricks.com> |
| 2015-05-01 16:47:00 -0700 |
| Commit: 41c6a44, github.com/apache/spark/pull/5847 |
| |
| Ignore flakey test in SparkSubmitUtilsSuite |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-01 14:42:58 -0700 |
| Commit: 5c1faba |
| |
| [SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-05-01 15:32:09 -0500 |
| Commit: b1f4ca8, github.com/apache/spark/pull/5823 |
| |
| [SPARK-7240][SQL] Single pass covariance calculation for dataframes |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-05-01 13:29:17 -0700 |
| Commit: 4dc8d74, github.com/apache/spark/pull/5825 |
| |
| [SPARK-7281] [YARN] Add option to set AM's lib path in client mode. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-05-01 21:20:46 +0100 |
| Commit: 7b5dd3e, github.com/apache/spark/pull/5813 |
| |
| [SPARK-7213] [YARN] Check for read permissions before copying a Hadoop config file |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-05-01 21:14:16 +0100 |
| Commit: f53a488, github.com/apache/spark/pull/5760 |
| |
| Revert "[SPARK-7224] added mock repository generator for --packages tests" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-01 13:01:43 -0700 |
| Commit: c6d9a42 |
| |
| Revert "[SPARK-7287] enabled fixed test" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-05-01 13:01:14 -0700 |
| Commit: 58d6584 |
| |
| [SPARK-7274] [SQL] Create Column expression for array/struct creation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-05-01 12:49:02 -0700 |
| Commit: 3753776, github.com/apache/spark/pull/5802 |
| |
| [SPARK-7183] [NETWORK] Fix memory leak of TransportRequestHandler.streamIds |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-01 11:59:12 -0700 |
| Commit: 1686032, github.com/apache/spark/pull/5743 |
| |
| [SPARK-6846] [WEBUI] [HOTFIX] return to GET for kill link in UI since YARN AM won't proxy POST |
| Sean Owen <sowen@cloudera.com> |
| 2015-05-01 19:57:37 +0100 |
| Commit: 1262e31, github.com/apache/spark/pull/5837 |
| |
| [SPARK-5854] personalized page rank |
| Dan McClary <dan.mcclary@gmail.com>, dwmclary <dan.mcclary@gmail.com> |
| 2015-05-01 11:55:43 -0700 |
| Commit: 7d42722, github.com/apache/spark/pull/4774 |
| |
| changing persistence engine trait to an abstract class |
| niranda <niranda.perera@gmail.com> |
| 2015-05-01 11:27:45 -0700 |
| Commit: 27de6fe, github.com/apache/spark/pull/5832 |
| |
| Limit help option regex |
| Chris Biow <chris.biow@10gen.com> |
| 2015-05-01 19:26:55 +0100 |
| Commit: c8c481d, github.com/apache/spark/pull/5816 |
| |
| [SPARK-5891] [ML] Add Binarizer ML Transformer |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-05-01 08:31:01 -0700 |
| Commit: 7630213, github.com/apache/spark/pull/5699 |
| |
| [SPARK-3066] [MLLIB] Support recommendAll in matrix factorization model |
| Debasish Das <debasish.das@one.verizon.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-05-01 08:27:46 -0700 |
| Commit: 3b514af, github.com/apache/spark/pull/3098 |
| |
| [SPARK-4705] Handle multiple app attempts event logs, history server. |
| Marcelo Vanzin <vanzin@cloudera.com>, twinkle sachdeva <twinkle@kite.ggn.in.guavus.com>, twinkle.sachdeva <twinkle.sachdeva@guavus.com>, twinkle sachdeva <twinkle.sachdeva@guavus.com> |
| 2015-05-01 09:50:55 -0500 |
| Commit: 3052f49, github.com/apache/spark/pull/5432 |
| |
| [SPARK-3468] [WEBUI] Timeline-View feature |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-05-01 01:39:56 -0700 |
| Commit: 7fe0f3f, github.com/apache/spark/pull/2342 |
| |
| [SPARK-6257] [PYSPARK] [MLLIB] MLlib API missing items in Recommendation |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-30 23:51:00 -0700 |
| Commit: c24aeb6, github.com/apache/spark/pull/5807 |
| |
| [SPARK-7291] [CORE] Fix a flaky test in AkkaRpcEnvSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-30 23:44:33 -0700 |
| Commit: 14b3288, github.com/apache/spark/pull/5822 |
| |
| [SPARK-7287] enabled fixed test |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-30 23:39:58 -0700 |
| Commit: 7cf1eb7, github.com/apache/spark/pull/5826 |
| |
| [SPARK-4550] In sort-based shuffle, store map outputs in serialized form |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-04-30 23:14:14 -0700 |
| Commit: 0a2b15c, github.com/apache/spark/pull/4450 |
| |
| HOTFIX: Disable buggy dependency checker |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-30 22:39:58 -0700 |
| Commit: a9fc505 |
| |
| [SPARK-6479] [BLOCK MANAGER] Create off-heap block storage API |
| Zhan Zhang <zhazhan@gmail.com> |
| 2015-04-30 22:24:31 -0700 |
| Commit: 36a7a68, github.com/apache/spark/pull/5430 |
| |
| [SPARK-7248] implemented random number generators for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-30 21:56:03 -0700 |
| Commit: b5347a4, github.com/apache/spark/pull/5819 |
| |
| [SPARK-7282] [STREAMING] Fix the race conditions in StreamingListenerSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-30 21:32:11 -0700 |
| Commit: 69a739c, github.com/apache/spark/pull/5812 |
| |
| Revert "[SPARK-5213] [SQL] Pluggable SQL Parser Support" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-30 20:33:36 -0700 |
| Commit: beeafcf |
| |
| [SPARK-7123] [SQL] support table.star in sqlcontext |
| scwf <wangfei1@huawei.com> |
| 2015-04-30 18:50:14 -0700 |
| Commit: 473552f, github.com/apache/spark/pull/5690 |
| |
| [SPARK-5213] [SQL] Pluggable SQL Parser Support |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-30 18:49:06 -0700 |
| Commit: 3ba5aaa, github.com/apache/spark/pull/4015 |
| |
| [SPARK-6913][SQL] Fixed "java.sql.SQLException: No suitable driver found" |
| Vyacheslav Baranov <slavik.baranov@gmail.com> |
| 2015-04-30 18:45:14 -0700 |
| Commit: e991255, github.com/apache/spark/pull/5782 |
| |
| [SPARK-7109] [SQL] Push down left side filter for left semi join |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-04-30 18:18:54 -0700 |
| Commit: a0d8a61, github.com/apache/spark/pull/5677 |
| |
| [SPARK-7093] [SQL] Using newPredicate in NestedLoopJoin to enable code generation |
| scwf <wangfei1@huawei.com> |
| 2015-04-30 18:15:56 -0700 |
| Commit: 0797338, github.com/apache/spark/pull/5665 |
| |
| [SPARK-7280][SQL] Add "drop" column/s on a data frame |
| rakeshchalasani <vnit.rakesh@gmail.com> |
| 2015-04-30 17:42:50 -0700 |
| Commit: ee04413, github.com/apache/spark/pull/5818 |
| |
| [SPARK-7242][SQL][MLLIB] Frequent items for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-30 16:40:32 -0700 |
| Commit: 149b3ee, github.com/apache/spark/pull/5799 |
| |
| [SPARK-7279] Removed diffSum which is theoretical zero in LinearRegression and coding formating |
| DB Tsai <dbt@netflix.com> |
| 2015-04-30 16:26:51 -0700 |
| Commit: 1c3e402, github.com/apache/spark/pull/5809 |
| |
| [Build] Enable MiMa checks for SQL |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-30 16:23:01 -0700 |
| Commit: fa01bec, github.com/apache/spark/pull/5727 |
| |
| [SPARK-7267][SQL]Push down Project when it's child is Limit |
| Zhongshuai Pei <799203320@qq.com>, DoingDone9 <799203320@qq.com> |
| 2015-04-30 15:22:13 -0700 |
| Commit: 77cc25f, github.com/apache/spark/pull/5797 |
| |
| [SPARK-7288] Suppress compiler warnings due to use of sun.misc.Unsafe; add facade in front of Unsafe; remove use of Unsafe.setMemory |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-30 15:21:00 -0700 |
| Commit: 07a8620, github.com/apache/spark/pull/5814 |
| |
| [SPARK-7196][SQL] Support precision and scale of decimal type for JDBC |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-30 15:13:43 -0700 |
| Commit: 6702324, github.com/apache/spark/pull/5777 |
| |
| Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-30 14:59:20 -0700 |
| Commit: e0628f2 |
| |
| [SPARK-7207] [ML] [BUILD] Added ml.recommendation, ml.regression to SparkBuild |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-30 14:39:27 -0700 |
| Commit: adbdb19, github.com/apache/spark/pull/5758 |
| |
| [SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-04-30 13:03:23 -0500 |
| Commit: 6c65da6, github.com/apache/spark/pull/4688 |
| |
| [SPARK-7224] added mock repository generator for --packages tests |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-30 10:19:08 -0700 |
| Commit: 7dacc08, github.com/apache/spark/pull/5790 |
| |
| [HOTFIX] Disabling flaky test (fix in progress as part of SPARK-7224) |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-30 01:02:33 -0700 |
| Commit: 47bf406 |
| |
| [SPARK-1406] Mllib pmml model export |
| Vincenzo Selvaggio <vselvaggio@hotmail.it>, Xiangrui Meng <meng@databricks.com>, selvinsource <vselvaggio@hotmail.it> |
| 2015-04-29 23:21:21 -0700 |
| Commit: 254e050, github.com/apache/spark/pull/3062 |
| |
| [SPARK-7225][SQL] CombineLimits optimizer does not work |
| Zhongshuai Pei <799203320@qq.com>, DoingDone9 <799203320@qq.com> |
| 2015-04-29 22:44:14 -0700 |
| Commit: 4459514, github.com/apache/spark/pull/5770 |
| |
| Some code clean up. |
| DB Tsai <dbt@netflix.com> |
| 2015-04-29 21:44:41 -0700 |
| Commit: ba49eb1, github.com/apache/spark/pull/5794 |
| |
| [SPARK-7156][SQL] Addressed follow up comments for randomSplit |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-29 19:13:47 -0700 |
| Commit: 5553198, github.com/apache/spark/pull/5795 |
| |
| [SPARK-7234][SQL] Fix DateType mismatch when codegen on. |
| 云峤 <chensong.cs@alibaba-inc.com> |
| 2015-04-29 18:23:42 -0700 |
| Commit: 7143f6e, github.com/apache/spark/pull/5778 |
| |
| [SPARK-6862] [STREAMING] [WEBUI] Add BatchPage to display details of a batch |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-29 18:22:14 -0700 |
| Commit: 1b7106b, github.com/apache/spark/pull/5473 |
| |
| [SPARK-7176] [ML] Add validation functionality to Param |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-29 17:26:46 -0700 |
| Commit: 114bad6, github.com/apache/spark/pull/5740 |
| |
| [SQL] [Minor] Print detail query execution info when spark answer is not right |
| wangfei <wangfei1@huawei.com> |
| 2015-04-29 17:00:24 -0700 |
| Commit: 1fdfdb4, github.com/apache/spark/pull/5774 |
| |
| [SPARK-7259] [ML] VectorIndexer: do not copy non-ML metadata to output column |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-29 16:35:17 -0700 |
| Commit: b1ef6a6, github.com/apache/spark/pull/5789 |
| |
| [SPARK-7229] [SQL] SpecificMutableRow should take integer type as internal representation for Date |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-29 16:23:34 -0700 |
| Commit: f8cbb0a, github.com/apache/spark/pull/5772 |
| |
| [SPARK-7155] [CORE] Allow newAPIHadoopFile to support comma-separated list of files as input |
| yongtang <yongtang@users.noreply.github.com> |
| 2015-04-29 23:55:51 +0100 |
| Commit: 3fc6cfd, github.com/apache/spark/pull/5708 |
| |
| [SPARK-7181] [CORE] fix inifite loop in Externalsorter's mergeWithAggregation |
| Qiping Li <liqiping1991@gmail.com> |
| 2015-04-29 23:52:16 +0100 |
| Commit: 7f4b583, github.com/apache/spark/pull/5737 |
| |
| [SPARK-7156][SQL] support RandomSplit in DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-29 15:34:05 -0700 |
| Commit: d7dbce8, github.com/apache/spark/pull/5761 |
| |
| [SPARK-6529] [ML] Add Word2Vec transformer |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-04-29 14:55:32 -0700 |
| Commit: c9d530e, github.com/apache/spark/pull/5596 |
| |
| [SPARK-7222] [ML] Added mathematical derivation in comment and compressed the model, removed the correction terms in LinearRegression with ElasticNet |
| DB Tsai <dbt@netflix.com> |
| 2015-04-29 14:53:37 -0700 |
| Commit: 15995c8, github.com/apache/spark/pull/5767 |
| |
| [SPARK-6629] cancelJobGroup() may not work for jobs whose job groups are inherited from parent threads |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-29 13:31:52 -0700 |
| Commit: 3a180c1, github.com/apache/spark/pull/5288 |
| |
| [SPARK-6752] [STREAMING] [REOPENED] Allow StreamingContext to be recreated from checkpoint and existing SparkContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-04-29 13:10:31 -0700 |
| Commit: a9c4e29, github.com/apache/spark/pull/5773 |
| |
| [SPARK-7056] [STREAMING] Make the Write Ahead Log pluggable |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-04-29 13:06:11 -0700 |
| Commit: 1868bd4, github.com/apache/spark/pull/5645 |
| |
| Fix a typo of "threshold" |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-04-29 10:13:48 -0700 |
| Commit: c0c0ba6, github.com/apache/spark/pull/5769 |
| |
| [SQL][Minor] fix java doc for DataFrame.agg |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-04-29 09:49:24 -0700 |
| Commit: 81ea42b, github.com/apache/spark/pull/5712 |
| |
| Better error message on access to non-existing attribute |
| ksonj <kson@siberie.de> |
| 2015-04-29 09:48:47 -0700 |
| Commit: 3df9c5d, github.com/apache/spark/pull/5771 |
| |
| [SPARK-7223] Rename RPC askWithReply -> askWithRetry, sendWithReply -> ask. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-29 09:46:37 -0700 |
| Commit: 687273d, github.com/apache/spark/pull/5768 |
| |
| [SPARK-6918] [YARN] Secure HBase support. |
| Dean Chen <deanchen5@gmail.com> |
| 2015-04-29 08:58:33 -0500 |
| Commit: baed3f2, github.com/apache/spark/pull/5586 |
| |
| [SPARK-7076][SPARK-7077][SPARK-7080][SQL] Use managed memory for aggregations |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-29 01:07:26 -0700 |
| Commit: f49284b, github.com/apache/spark/pull/5725 |
| |
| [SPARK-7204] [SQL] Fix callSite for Dataframe and SQL operations |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-29 00:35:08 -0700 |
| Commit: 1fd6ed9, github.com/apache/spark/pull/5757 |
| |
| [SPARK-7188] added python support for math DataFrame functions |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-29 00:09:24 -0700 |
| Commit: fe917f5, github.com/apache/spark/pull/5750 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-28 23:38:59 -0700 |
| Commit: 8dee274, github.com/apache/spark/pull/3205 |
| |
| [SPARK-7205] Support `.ivy2/local` and `.m2/repositories/` in --packages |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-28 23:05:02 -0700 |
| Commit: f98773a, github.com/apache/spark/pull/5755 |
| |
| [SPARK-7215] made coalesce and repartition a part of the query plan |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-28 22:48:04 -0700 |
| Commit: 271c4c6, github.com/apache/spark/pull/5762 |
| |
| [SPARK-6756] [MLLIB] add toSparse, toDense, numActives, numNonzeros, and compressed to Vector |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-28 21:49:53 -0700 |
| Commit: 5ef006f, github.com/apache/spark/pull/5756 |
| |
| [SPARK-7208] [ML] [PYTHON] Added Matrix, SparseMatrix to __all__ list in linalg.py |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-28 21:15:47 -0700 |
| Commit: a8aeadb, github.com/apache/spark/pull/5759 |
| |
| [SPARK-7138] [STREAMING] Add method to BlockGenerator to add multiple records to BlockGenerator with single callback |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-04-28 19:31:57 -0700 |
| Commit: 5c8f4bd, github.com/apache/spark/pull/5695 |
| |
| [SPARK-6965] [MLLIB] StringIndexer handles numeric input. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-28 17:41:09 -0700 |
| Commit: d36e673, github.com/apache/spark/pull/5753 |
| |
| Closes #4807 Closes #5055 Closes #3583 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-28 14:21:25 -0700 |
| Commit: 555213e |
| |
| [SPARK-7201] [MLLIB] move Identifiable to ml.util |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-28 14:07:26 -0700 |
| Commit: f0a1f90, github.com/apache/spark/pull/5749 |
| |
| [MINOR] [CORE] Warn users who try to cache RDDs with dynamic allocation on. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-28 13:49:29 -0700 |
| Commit: 28b1af7, github.com/apache/spark/pull/5751 |
| |
| [SPARK-5338] [MESOS] Add cluster mode support for Mesos |
| Timothy Chen <tnachen@gmail.com>, Luc Bourlier <luc.bourlier@typesafe.com> |
| 2015-04-28 13:31:08 -0700 |
| Commit: 53befac, github.com/apache/spark/pull/5144 |
| |
| [SPARK-6314] [CORE] handle JsonParseException for history server |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-04-28 12:33:48 -0700 |
| Commit: 8009810, github.com/apache/spark/pull/5736 |
| |
| [SPARK-5932] [CORE] Use consistent naming for size properties |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-04-28 12:18:55 -0700 |
| Commit: 2d222fb, github.com/apache/spark/pull/5574 |
| |
| [SPARK-4286] Add an external shuffle service that can be run as a daemon. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-04-28 12:08:18 -0700 |
| Commit: 8aab94d, github.com/apache/spark/pull/4990 |
| |
| [Core][test][minor] replace try finally block with tryWithSafeFinally |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-04-28 10:24:00 -0700 |
| Commit: 52ccf1d, github.com/apache/spark/pull/5739 |
| |
| [SPARK-7140] [MLLIB] only scan the first 16 entries in Vector.hashCode |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-28 09:59:36 -0700 |
| Commit: b14cd23, github.com/apache/spark/pull/5697 |
| |
| [SPARK-5253] [ML] LinearRegression with L1/L2 (ElasticNet) using OWLQN |
| DB Tsai <dbt@netflix.com>, DB Tsai <dbtsai@alpinenow.com> |
| 2015-04-28 09:46:08 -0700 |
| Commit: 6a827d5, github.com/apache/spark/pull/4259 |
| |
| [SPARK-6435] spark-shell --jars option does not add all jars to classpath |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-04-28 07:55:21 -0400 |
| Commit: 268c419, github.com/apache/spark/pull/5227 |
| |
| [SPARK-7100] [MLLIB] Fix persisted RDD leak in GradientBoostTrees |
| Jim Carroll <jim@dontcallme.com> |
| 2015-04-28 07:51:02 -0400 |
| Commit: 75905c5, github.com/apache/spark/pull/5669 |
| |
| [SPARK-7168] [BUILD] Update plugin versions in Maven build and centralize versions |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-28 07:48:34 -0400 |
| Commit: 7f3b3b7, github.com/apache/spark/pull/5720 |
| |
| [SPARK-6352] [SQL] Custom parquet output committer |
| Pei-Lun Lee <pllee@appier.com> |
| 2015-04-28 16:50:18 +0800 |
| Commit: e13cd86, github.com/apache/spark/pull/5525 |
| |
| [SPARK-7135][SQL] DataFrame expression for monotonically increasing IDs. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-28 00:39:08 -0700 |
| Commit: d94cd1a, github.com/apache/spark/pull/5709 |
| |
| [SPARK-7187] SerializationDebugger should not crash user code |
| Andrew Or <andrew@databricks.com> |
| 2015-04-28 00:38:14 -0700 |
| Commit: bf35edd, github.com/apache/spark/pull/5734 |
| |
| [SPARK-5946] [STREAMING] Add Python API for direct Kafka stream |
| jerryshao <saisai.shao@intel.com>, Saisai Shao <saisai.shao@intel.com> |
| 2015-04-27 23:48:02 -0700 |
| Commit: 9e4e82b, github.com/apache/spark/pull/4723 |
| |
| [SPARK-6829] Added math functions for DataFrames |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-04-27 23:10:14 -0700 |
| Commit: 29576e7, github.com/apache/spark/pull/5616 |
| |
| [SPARK-7174][Core] Move calling `TaskScheduler.executorHeartbeatReceived` to another thread |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-27 21:45:40 -0700 |
| Commit: 874a2ca, github.com/apache/spark/pull/5723 |
| |
| [SPARK-7090] [MLLIB] Introduce LDAOptimizer to LDA to further improve extensibility |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-04-27 19:02:51 -0700 |
| Commit: 4d9e560, github.com/apache/spark/pull/5661 |
| |
| [SPARK-7162] [YARN] Launcher error in yarn-client |
| GuoQiang Li <witgo@qq.com> |
| 2015-04-27 19:52:41 -0400 |
| Commit: 62888a4, github.com/apache/spark/pull/5716 |
| |
| [SPARK-7145] [CORE] commons-lang (2.x) classes used instead of commons-lang3 (3.x); commons-io used without dependency |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-27 19:50:55 -0400 |
| Commit: ab5adb7, github.com/apache/spark/pull/5703 |
| |
| [SPARK-3090] [CORE] Stop SparkContext if user forgets to. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-27 19:46:17 -0400 |
| Commit: 5d45e1f, github.com/apache/spark/pull/5696 |
| |
| [SPARK-6738] [CORE] Improve estimate the size of a large array |
| Hong Shen <hongshen@tencent.com> |
| 2015-04-27 18:57:31 -0400 |
| Commit: 8e1c00d, github.com/apache/spark/pull/5608 |
| |
| [SPARK-7103] Fix crash with SparkContext.union when RDD has no partitioner |
| Steven She <steven@canopylabs.com> |
| 2015-04-27 18:55:02 -0400 |
| Commit: b9de9e0, github.com/apache/spark/pull/5679 |
| |
| [SPARK-6991] [SPARKR] Adds support for zipPartitions. |
| hlin09 <hlin09pu@gmail.com> |
| 2015-04-27 15:04:37 -0700 |
| Commit: ca9f4eb, github.com/apache/spark/pull/5568 |
| |
| SPARK-7107 Add parameter for zookeeper.znode.parent to hbase_inputformat... |
| tedyu <yuzhihong@gmail.com> |
| 2015-04-27 14:42:40 -0700 |
| Commit: ef82bdd, github.com/apache/spark/pull/5673 |
| |
| [SPARK-6856] [R] Make RDD information more useful in SparkR |
| Jeff Harrison <jeffrharrison@gmail.com> |
| 2015-04-27 13:38:25 -0700 |
| Commit: 7078f60, github.com/apache/spark/pull/5667 |
| |
| [SPARK-4925] Publish Spark SQL hive-thriftserver maven artifact |
| Misha Chernetsov <chernetsov@gmail.com> |
| 2015-04-27 11:27:56 -0700 |
| Commit: 998aac2, github.com/apache/spark/pull/5429 |
| |
| [SPARK-6505] [SQL] Remove the reflection call in HiveFunctionWrapper |
| baishuo <vc_java@hotmail.com> |
| 2015-04-27 14:08:05 +0800 |
| Commit: 82bb7fd, github.com/apache/spark/pull/5660 |
| |
| [SQL][Minor] rename DataTypeParser.apply to DataTypeParser.parse |
| wangfei <wangfei1@huawei.com> |
| 2015-04-26 21:08:47 -0700 |
| Commit: d188b8b, github.com/apache/spark/pull/5710 |
| |
| [SPARK-7152][SQL] Add a Column expression for partition ID. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-26 11:46:58 -0700 |
| Commit: ca55dc9, github.com/apache/spark/pull/5705 |
| |
| [MINOR] [MLLIB] Refactor toString method in MLLIB |
| Alain <aihe@usc.edu> |
| 2015-04-26 07:14:24 -0400 |
| Commit: 9a5bbe0, github.com/apache/spark/pull/5687 |
| |
| [SPARK-6014] [CORE] [HOTFIX] Add try-catch block around ShutDownHook |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-04-25 20:02:23 -0400 |
| Commit: f5473c2, github.com/apache/spark/pull/5672 |
| |
| [SPARK-7092] Update spark scala version to 2.11.6 |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2015-04-25 18:07:34 -0400 |
| Commit: a11c868, github.com/apache/spark/pull/5662 |
| |
| [SQL] Update SQL readme to include instructions on generating golden answer files based on Hive 0.13.1. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-25 13:43:39 -0700 |
| Commit: aa6966f, github.com/apache/spark/pull/5702 |
| |
| [SPARK-6113] [ML] Tree ensembles for Pipelines API |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-25 12:27:19 -0700 |
| Commit: a7160c4, github.com/apache/spark/pull/5626 |
| |
| Revert "[SPARK-6752][Streaming] Allow StreamingContext to be recreated from checkpoint and existing SparkContext" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-25 10:37:34 -0700 |
| Commit: a61d65f |
| |
| update the deprecated CountMinSketchMonoid function to TopPctCMS function |
| KeheCAI <caikehe@gmail.com> |
| 2015-04-25 08:42:38 -0400 |
| Commit: cca9905, github.com/apache/spark/pull/5629 |
| |
| [SPARK-7136][Docs] Spark SQL and DataFrame Guide fix example file and paths |
| Deborah Siegel <deborah.siegel@gmail.com>, DEBORAH SIEGEL <deborahsiegel@d-140-142-0-49.dhcp4.washington.edu>, DEBORAH SIEGEL <deborahsiegel@DEBORAHs-MacBook-Pro.local>, DEBORAH SIEGEL <deborahsiegel@d-69-91-154-197.dhcp4.washington.edu> |
| 2015-04-24 20:25:07 -0700 |
| Commit: 59b7cfc, github.com/apache/spark/pull/5693 |
| |
| [PySpark][Minor] Update sql example, so that can read file correctly |
| linweizhong <linweizhong@huawei.com> |
| 2015-04-24 20:23:19 -0700 |
| Commit: d874f8b, github.com/apache/spark/pull/5684 |
| |
| [SPARK-6122] [CORE] Upgrade tachyon-client version to 0.6.3 |
| Calvin Jia <jia.calvin@gmail.com> |
| 2015-04-24 17:57:41 -0400 |
| Commit: 438859e, github.com/apache/spark/pull/5354 |
| |
| [SPARK-6852] [SPARKR] Accept numeric as numPartitions in SparkR. |
| Sun Rui <rui.sun@intel.com> |
| 2015-04-24 12:52:07 -0700 |
| Commit: caf0136, github.com/apache/spark/pull/5613 |
| |
| [SPARK-7033] [SPARKR] Clean usage of split. Use partition instead where applicable. |
| Sun Rui <rui.sun@intel.com> |
| 2015-04-24 11:00:19 -0700 |
| Commit: ebb77b2, github.com/apache/spark/pull/5628 |
| |
| [SPARK-6528] [ML] Add IDF transformer |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-04-24 08:29:49 -0700 |
| Commit: 6e57d57, github.com/apache/spark/pull/5266 |
| |
| [SPARK-7115] [MLLIB] skip the very first 1 in poly expansion |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-24 08:27:48 -0700 |
| Commit: 78b39c7, github.com/apache/spark/pull/5681 |
| |
| [SPARK-5894] [ML] Add polynomial mapper |
| Xusen Yin <yinxusen@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-04-24 00:39:29 -0700 |
| Commit: 8509519, github.com/apache/spark/pull/5245 |
| |
| Fixed a typo from the previous commit. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-23 22:39:00 -0700 |
| Commit: 4c722d7 |
| |
| [SQL] Fixed expression data type matching. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-23 21:21:03 -0700 |
| Commit: d3a302d, github.com/apache/spark/pull/5675 |
| |
| Update sql-programming-guide.md |
| Ken Geis <geis.ken@gmail.com> |
| 2015-04-23 20:45:33 -0700 |
| Commit: 67bccbd, github.com/apache/spark/pull/5674 |
| |
| [SPARK-7060][SQL] Add alias function to python dataframe |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-23 18:52:55 -0700 |
| Commit: 2d010f7, github.com/apache/spark/pull/5634 |
| |
| [SPARK-7037] [CORE] Inconsistent behavior for non-spark config properties in spark-shell and spark-submit |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-04-23 20:10:55 -0400 |
| Commit: 336f7f5, github.com/apache/spark/pull/5617 |
| |
| [SPARK-6818] [SPARKR] Support column deletion in SparkR DataFrame API. |
| Sun Rui <rui.sun@intel.com> |
| 2015-04-23 16:08:14 -0700 |
| Commit: 73db132, github.com/apache/spark/pull/5655 |
| |
| [SQL] Break dataTypes.scala into multiple files. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-23 14:48:19 -0700 |
| Commit: 6220d93, github.com/apache/spark/pull/5670 |
| |
| [SPARK-7070] [MLLIB] LDA.setBeta should call setTopicConcentration. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-23 14:46:54 -0700 |
| Commit: 1ed46a6, github.com/apache/spark/pull/5649 |
| |
| [SPARK-7087] [BUILD] Fix path issue change version script |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-04-23 17:23:15 -0400 |
| Commit: 6d0749c, github.com/apache/spark/pull/5656 |
| |
| [SPARK-6879] [HISTORYSERVER] check if app is completed before clean it up |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-04-23 17:20:17 -0400 |
| Commit: baa83a9, github.com/apache/spark/pull/5491 |
| |
| [SPARK-7085][MLlib] Fix miniBatchFraction parameter in train method called with 4 arguments |
| wizz <wizz@wizz-dev01.kawasaki.flab.fujitsu.com> |
| 2015-04-23 14:00:07 -0700 |
| Commit: 3e91cc2, github.com/apache/spark/pull/5658 |
| |
| [SPARK-7058] Include RDD deserialization time in "task deserialization time" metric |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-23 13:19:03 -0700 |
| Commit: 6afde2c, github.com/apache/spark/pull/5635 |
| |
| [SPARK-7055][SQL]Use correct ClassLoader for JDBC Driver in JDBCRDD.getConnector |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-04-23 12:00:23 -0700 |
| Commit: c1213e6, github.com/apache/spark/pull/5633 |
| |
| [SPARK-6752][Streaming] Allow StreamingContext to be recreated from checkpoint and existing SparkContext |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-04-23 11:29:34 -0700 |
| Commit: 534f2a4, github.com/apache/spark/pull/5428 |
| |
| [SPARK-7044] [SQL] Fix the deadlock in script transformation |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-23 10:35:22 -0700 |
| Commit: cc48e63, github.com/apache/spark/pull/5625 |
| |
| [minor][streaming]fixed scala string interpolation error |
| Prabeesh K <prabeesh.k@namshi.com> |
| 2015-04-23 10:33:13 -0700 |
| Commit: 975f53e, github.com/apache/spark/pull/5653 |
| |
| [HOTFIX] [SQL] Fix compilation for scala 2.11. |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2015-04-23 16:45:26 +0530 |
| Commit: a7d65d3, github.com/apache/spark/pull/5652 |
| |
| [SPARK-7069][SQL] Rename NativeType -> AtomicType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-23 01:43:40 -0700 |
| Commit: f60bece, github.com/apache/spark/pull/5651 |
| |
| [SPARK-7068][SQL] Remove PrimitiveType |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 23:55:20 -0700 |
| Commit: 29163c5, github.com/apache/spark/pull/5646 |
| |
| [MLlib] Add support for BooleanType to VectorAssembler. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 23:54:48 -0700 |
| Commit: 2d33323, github.com/apache/spark/pull/5648 |
| |
| [HOTFIX][SQL] Fix broken cached test |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-22 22:18:56 -0700 |
| Commit: d9e70f3, github.com/apache/spark/pull/5640 |
| |
| [SPARK-7046] Remove InputMetrics from BlockResult |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-04-22 21:42:09 -0700 |
| Commit: 03e85b4, github.com/apache/spark/pull/5627 |
| |
| [SPARK-7066][MLlib] VectorAssembler should use NumericType not NativeType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 21:35:42 -0700 |
| Commit: d206860, github.com/apache/spark/pull/5642 |
| |
| [MLlib] UnaryTransformer nullability should not depend on PrimitiveType. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 21:35:12 -0700 |
| Commit: 1b85e08, github.com/apache/spark/pull/5644 |
| |
| Disable flaky test: ReceiverSuite "block generator throttling". |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 21:24:22 -0700 |
| Commit: b69c4f9 |
| |
| [SPARK-6967] [SQL] fix date type conversion in jdbcrdd |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-22 19:14:28 -0700 |
| Commit: 04525c0, github.com/apache/spark/pull/5590 |
| |
| [SPARK-6827] [MLLIB] Wrap FPGrowthModel.freqItemsets and make it consistent with Java API |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-04-22 17:22:26 -0700 |
| Commit: f4f3998, github.com/apache/spark/pull/5614 |
| |
| [SPARK-7059][SQL] Create a DataFrame join API to facilitate equijoin. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 15:26:58 -0700 |
| Commit: baf865d, github.com/apache/spark/pull/5638 |
| |
| [SPARK-7039][SQL]JDBCRDD: Add support on type NVARCHAR |
| szheng79 <szheng.code@gmail.com> |
| 2015-04-22 13:02:55 -0700 |
| Commit: fbe7106, github.com/apache/spark/pull/5618 |
| |
| [SQL] Rename some apply functions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-22 11:18:01 -0700 |
| Commit: cdf0328, github.com/apache/spark/pull/5624 |
| |
| [SPARK-7052][Core] Add ThreadUtils and move thread methods from Utils to ThreadUtils |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-22 11:08:59 -0700 |
| Commit: 33b8562, github.com/apache/spark/pull/5631 |
| |
| [SPARK-6889] [DOCS] CONTRIBUTING.md updates to accompany contribution doc updates |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-21 22:34:31 -0700 |
| Commit: bdc5c16, github.com/apache/spark/pull/5623 |
| |
| [SPARK-6113] [ML] Small cleanups after original tree API PR |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-21 21:44:44 -0700 |
| Commit: 607eff0, github.com/apache/spark/pull/5567 |
| |
| [MINOR] Comment improvements in ExternalSorter. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-21 21:04:04 -0700 |
| Commit: 70f9f8f, github.com/apache/spark/pull/5620 |
| |
| [SPARK-6490][Docs] Add docs for rpc configurations |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-21 18:37:53 -0700 |
| Commit: 3a3f710, github.com/apache/spark/pull/5607 |
| |
| [SPARK-1684] [PROJECT INFRA] Merge script should standardize SPARK-XXX prefix |
| texasmichelle <texasmichelle@gmail.com> |
| 2015-04-21 18:08:29 -0700 |
| Commit: a0761ec, github.com/apache/spark/pull/5149 |
| |
| Closes #5427 |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-21 17:52:52 -0700 |
| Commit: 41ef78a |
| |
| [SPARK-6953] [PySpark] speed up python tests |
| Reynold Xin <rxin@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-04-21 17:49:55 -0700 |
| Commit: 3134c3f, github.com/apache/spark/pull/5605 |
| |
| [SPARK-6014] [core] Revamp Spark shutdown hooks, fix shutdown races. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-21 20:33:57 -0400 |
| Commit: e72c16e, github.com/apache/spark/pull/5560 |
| |
| Avoid warning message about invalid refuse_seconds value in Mesos >=0.21... |
| mweindel <m.weindel@usu-software.de> |
| 2015-04-21 20:19:33 -0400 |
| Commit: b063a61, github.com/apache/spark/pull/5597 |
| |
| [Minor][MLLIB] Fix a minor formatting bug in toString method in Node.scala |
| Alain <aihe@usc.edu> |
| 2015-04-21 16:46:17 -0700 |
| Commit: ae036d0, github.com/apache/spark/pull/5621 |
| |
| [SPARK-7036][MLLIB] ALS.train should support DataFrames in PySpark |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-21 16:44:52 -0700 |
| Commit: 686dd74, github.com/apache/spark/pull/5619 |
| |
| [SPARK-6065] [MLlib] Optimize word2vec.findSynonyms using blas calls |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-21 16:42:45 -0700 |
| Commit: 7fe6142, github.com/apache/spark/pull/5467 |
| |
| [minor] [build] Set java options when generating mima ignores. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-21 16:35:37 -0700 |
| Commit: a70e849, github.com/apache/spark/pull/5615 |
| |
| [SPARK-3386] Share and reuse SerializerInstances in shuffle paths |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-21 16:24:15 -0700 |
| Commit: f83c0f1, github.com/apache/spark/pull/5606 |
| |
| [SPARK-5817] [SQL] Fix bug of udtf with column names |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-21 15:11:15 -0700 |
| Commit: 7662ec2, github.com/apache/spark/pull/4602 |
| |
| [SPARK-6996][SQL] Support map types in java beans |
| Punya Biswal <pbiswal@palantir.com> |
| 2015-04-21 14:50:02 -0700 |
| Commit: 2a24bf9, github.com/apache/spark/pull/5578 |
| |
| [SPARK-6969][SQL] Refresh the cached table when REFRESH TABLE is used |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-21 14:48:42 -0700 |
| Commit: 6265cba, github.com/apache/spark/pull/5583 |
| |
| [SQL][minor] make it more clear that we only need to re-throw GetField exception for UnresolvedAttribute |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-04-21 14:48:02 -0700 |
| Commit: 03fd921, github.com/apache/spark/pull/5588 |
| |
| [SPARK-6994] Allow to fetch field values by name in sql.Row |
| vidmantas zemleris <vidmantas@vinted.com> |
| 2015-04-21 14:47:09 -0700 |
| Commit: 2e8c6ca, github.com/apache/spark/pull/5573 |
| |
| [SPARK-7011] Build(compilation) fails with scala 2.11 option, because a protected[sql] type is accessed in ml package. |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2015-04-21 14:43:46 -0700 |
| Commit: 04bf34e, github.com/apache/spark/pull/5593 |
| |
| [SPARK-6845] [MLlib] [PySpark] Add isTransposed flag to DenseMatrix |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-21 14:36:50 -0700 |
| Commit: 45c47fa, github.com/apache/spark/pull/5455 |
| |
| SPARK-3276 Added a new configuration spark.streaming.minRememberDuration |
| emres <emre.sevinc@gmail.com> |
| 2015-04-21 16:39:56 -0400 |
| Commit: c25ca7c, github.com/apache/spark/pull/5438 |
| |
| [SPARK-5360] [SPARK-6606] Eliminate duplicate objects in serialized CoGroupedRDD |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-04-21 11:01:18 -0700 |
| Commit: c035c0f, github.com/apache/spark/pull/4145 |
| |
| [SPARK-6985][streaming] Receiver maxRate over 1000 causes a StackOverflowError |
| David McGuire <david.mcguire2@nike.com> |
| 2015-04-21 07:21:10 -0400 |
| Commit: 5fea3e5, github.com/apache/spark/pull/5559 |
| |
| [SPARK-5990] [MLLIB] Model import/export for IsotonicRegression |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-04-21 00:14:16 -0700 |
| Commit: 1f2f723, github.com/apache/spark/pull/5270 |
| |
| [SPARK-6949] [SQL] [PySpark] Support Date/Timestamp in Column expression |
| Davies Liu <davies@databricks.com> |
| 2015-04-21 00:08:18 -0700 |
| Commit: ab9128f, github.com/apache/spark/pull/5570 |
| |
| [SPARK-6490][Core] Add spark.rpc.* and deprecate spark.akka.* |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-20 23:18:42 -0700 |
| Commit: 8136810, github.com/apache/spark/pull/5595 |
| |
| [SPARK-6635][SQL] DataFrame.withColumn should replace columns with identical column names |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-20 18:54:01 -0700 |
| Commit: c736220, github.com/apache/spark/pull/5541 |
| |
| [SPARK-6368][SQL] Build a specialized serializer for Exchange operator. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-20 18:42:50 -0700 |
| Commit: ce7ddab, github.com/apache/spark/pull/5497 |
| |
| [doc][streaming] Fixed broken link in mllib section |
| BenFradet <benjamin.fradet@gmail.com> |
| 2015-04-20 13:46:55 -0700 |
| Commit: 517bdf3, github.com/apache/spark/pull/5600 |
| |
| fixed doc |
| Eric Chiang <eric.chiang.m@gmail.com> |
| 2015-04-20 13:11:21 -0700 |
| Commit: 97fda73, github.com/apache/spark/pull/5599 |
| |
| [Minor][MLlib] Incorrect path to test data is used in DecisionTreeExample |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-20 10:47:37 -0700 |
| Commit: 1ebceaa, github.com/apache/spark/pull/5594 |
| |
| [SPARK-6661] Python type errors should print type, not object |
| Elisey Zanko <elisey.zanko@gmail.com> |
| 2015-04-20 10:44:09 -0700 |
| Commit: 7717661, github.com/apache/spark/pull/5361 |
| |
| [SPARK-7003] Improve reliability of connection failure detection between Netty block transfer service endpoints |
| Aaron Davidson <aaron@databricks.com> |
| 2015-04-20 09:54:21 -0700 |
| Commit: 968ad97, github.com/apache/spark/pull/5584 |
| |
| [SPARK-5924] Add the ability to specify withMean or withStd parameters with StandardScaler |
| jrabary <Jaonary@gmail.com> |
| 2015-04-20 09:47:56 -0700 |
| Commit: 1be2070, github.com/apache/spark/pull/4704 |
| |
| [doc][mllib] Fix typo of the page title in Isotonic regression documents |
| dobashim <dobashim@oss.nttdata.co.jp> |
| 2015-04-20 00:03:23 -0400 |
| Commit: 6fe690d, github.com/apache/spark/pull/5581 |
| |
| [SPARK-6979][Streaming] Replace JobScheduler.eventActor and JobGenerator.eventActor with EventLoop |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-19 20:48:36 -0700 |
| Commit: c776ee8, github.com/apache/spark/pull/5554 |
| |
| [SPARK-6983][Streaming] Update ReceiverTrackerActor to use the new Rpc interface |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-19 20:35:43 -0700 |
| Commit: d8e1b7b, github.com/apache/spark/pull/5557 |
| |
| [SPARK-6998][MLlib] Make StreamingKMeans 'Serializable' |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-19 20:33:51 -0700 |
| Commit: fa73da0, github.com/apache/spark/pull/5582 |
| |
| [SPARK-6963][CORE]Flaky test: o.a.s.ContextCleanerSuite automatically cleanup checkpoint |
| GuoQiang Li <witgo@qq.com> |
| 2015-04-19 09:37:09 +0100 |
| Commit: 0424da6, github.com/apache/spark/pull/5548 |
| |
| SPARK-6993 : Add default min, max methods for JavaDoubleRDD |
| Olivier Girardot <o.girardot@lateral-thoughts.com> |
| 2015-04-18 18:21:44 -0700 |
| Commit: 8fbd45c, github.com/apache/spark/pull/5571 |
| |
| Fixed doc |
| Gaurav Nanda <gaurav324@gmail.com> |
| 2015-04-18 17:20:46 -0700 |
| Commit: 729885e, github.com/apache/spark/pull/5576 |
| |
| [SPARK-6219] Reuse pep8.py |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-04-18 16:46:28 -0700 |
| Commit: 28683b4, github.com/apache/spark/pull/5561 |
| |
| [core] [minor] Make sure ConnectionManager stops. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-18 10:14:56 +0100 |
| Commit: 327ebf0, github.com/apache/spark/pull/5566 |
| |
| SPARK-6992 : Fix documentation example for Spark SQL on StructType |
| Olivier Girardot <o.girardot@lateral-thoughts.com> |
| 2015-04-18 00:31:01 -0700 |
| Commit: 5f095d5, github.com/apache/spark/pull/5569 |
| |
| [SPARK-6975][Yarn] Fix argument validation error |
| jerryshao <saisai.shao@intel.com> |
| 2015-04-17 19:17:06 -0700 |
| Commit: d850b4b, github.com/apache/spark/pull/5551 |
| |
| [SPARK-5933] [core] Move config deprecation warnings to SparkConf. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-17 19:02:07 -0700 |
| Commit: 1991337, github.com/apache/spark/pull/5562 |
| |
| [SPARK-6350][Mesos] Make mesosExecutorCores configurable in mesos "fine-grained" mode |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-04-17 18:30:55 -0700 |
| Commit: 6fbeb82, github.com/apache/spark/pull/5063 |
| |
| [SPARK-6703][Core] Provide a way to discover existing SparkContext's |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-04-17 18:28:42 -0700 |
| Commit: c5ed510, github.com/apache/spark/pull/5501 |
| |
| Minor fix to SPARK-6958: Improve Python docstring for DataFrame.sort. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-17 16:30:13 -0500 |
| Commit: a452c59, github.com/apache/spark/pull/5558 |
| |
| SPARK-6988 : Fix documentation regarding DataFrames using the Java API |
| Olivier Girardot <o.girardot@lateral-thoughts.com> |
| 2015-04-17 16:23:10 -0500 |
| Commit: d305e68, github.com/apache/spark/pull/5564 |
| |
| [SPARK-6807] [SparkR] Merge recent SparkR-pkg changes |
| cafreeman <cfreeman@alteryx.com>, Davies Liu <davies@databricks.com>, Zongheng Yang <zongheng.y@gmail.com>, Shivaram Venkataraman <shivaram.venkataraman@gmail.com>, Shivaram Venkataraman <shivaram@cs.berkeley.edu>, Sun Rui <rui.sun@intel.com> |
| 2015-04-17 13:42:19 -0700 |
| Commit: 59e206d, github.com/apache/spark/pull/5436 |
| |
| [SPARK-6113] [ml] Stabilize DecisionTree API |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-17 13:15:36 -0700 |
| Commit: a83571a, github.com/apache/spark/pull/5530 |
| |
| [SPARK-2669] [yarn] Distribute client configuration to AM. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-17 14:21:51 -0500 |
| Commit: 50ab8a6, github.com/apache/spark/pull/4142 |
| |
| [SPARK-6957] [SPARK-6958] [SQL] improve API compatibility to pandas |
| Davies Liu <davies@databricks.com> |
| 2015-04-17 11:29:27 -0500 |
| Commit: c84d916, github.com/apache/spark/pull/5544 |
| |
| [SPARK-6604][PySpark] Specify IP of Python server socket |
| linweizhong <linweizhong@huawei.com> |
| 2015-04-17 12:04:02 +0100 |
| Commit: dc48ba9, github.com/apache/spark/pull/5256 |
| |
| [SPARK-6952] Handle long args when detecting PID reuse |
| Punya Biswal <pbiswal@palantir.com> |
| 2015-04-17 11:08:37 +0100 |
| Commit: f6a9a57, github.com/apache/spark/pull/5535 |
| |
| [SPARK-6046] [core] Reorganize deprecated config support in SparkConf. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-17 11:06:01 +0100 |
| Commit: 4527761, github.com/apache/spark/pull/5514 |
| |
| SPARK-6846 [WEBUI] Stage kill URL easy to accidentally trigger and possibility for security issue |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-17 11:02:31 +0100 |
| Commit: f7a2564, github.com/apache/spark/pull/5528 |
| |
| [SPARK-6972][SQL] Add Coalesce to DataFrame |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-16 21:49:26 -0500 |
| Commit: 8220d52, github.com/apache/spark/pull/5545 |
| |
| [SPARK-6966][SQL] Use correct ClassLoader for JDBC Driver |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-16 17:59:49 -0700 |
| Commit: e5949c2, github.com/apache/spark/pull/5543 |
| |
| [SPARK-6899][SQL] Fix type mismatch when using codegen with Average on DecimalType |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-16 17:50:20 -0700 |
| Commit: 1e43851, github.com/apache/spark/pull/5517 |
| |
| [SQL][Minor] Fix foreachUp of treenode |
| scwf <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-04-16 17:35:51 -0700 |
| Commit: d966086, github.com/apache/spark/pull/5518 |
| |
| [SPARK-6911] [SQL] improve accessor for nested types |
| Davies Liu <davies@databricks.com> |
| 2015-04-16 17:33:57 -0700 |
| Commit: 6183b5e, github.com/apache/spark/pull/5513 |
| |
| SPARK-6927 [SQL] Sorting Error when codegen on |
| 云峤 <chensong.cs@alibaba-inc.com> |
| 2015-04-16 17:32:42 -0700 |
| Commit: 5fe4343, github.com/apache/spark/pull/5524 |
| |
| [SPARK-4897] [PySpark] Python 3 support |
| Davies Liu <davies@databricks.com>, twneale <twneale@gmail.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-04-16 16:20:57 -0700 |
| Commit: 04e44b3, github.com/apache/spark/pull/5173 |
| |
| [SPARK-6855] [SPARKR] Set R includes to get the right collate order. |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu> |
| 2015-04-16 13:06:34 -0700 |
| Commit: 55f553a, github.com/apache/spark/pull/5462 |
| |
| [SPARK-6934][Core] Use 'spark.akka.askTimeout' for the ask timeout |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-16 13:45:55 -0500 |
| Commit: ef3fb80, github.com/apache/spark/pull/5529 |
| |
| [SPARK-6694][SQL]SparkSQL CLI must be able to specify an option --database on the command line. |
| Jin Adachi <adachij2002@yahoo.co.jp>, adachij <adachij@nttdata.co.jp> |
| 2015-04-16 23:41:04 +0800 |
| Commit: 3ae37b9, github.com/apache/spark/pull/5345 |
| |
| [SPARK-4194] [core] Make SparkContext initialization exception-safe. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-16 10:48:31 +0100 |
| Commit: de4fa6b, github.com/apache/spark/pull/5335 |
| |
| SPARK-4783 [CORE] System.exit() calls in SparkContext disrupt applications embedding Spark |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-16 10:45:32 +0100 |
| Commit: 6179a94, github.com/apache/spark/pull/5492 |
| |
| [Streaming][minor] Remove additional quote and unneeded imports |
| jerryshao <saisai.shao@intel.com> |
| 2015-04-16 10:39:02 +0100 |
| Commit: 8370550, github.com/apache/spark/pull/5540 |
| |
| [SPARK-6893][ML] default pipeline parameter handling in python |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-15 23:49:42 -0700 |
| Commit: 57cd1e8, github.com/apache/spark/pull/5534 |
| |
| SPARK-6938: All require statements now have an informative error message. |
| Juliet Hougland <juliet@cloudera.com> |
| 2015-04-15 21:52:25 -0700 |
| Commit: 52c3439, github.com/apache/spark/pull/5532 |
| |
| [SPARK-5277][SQL] - SparkSqlSerializer doesn't always register user specified KryoRegistrators |
| Max Seiden <max@platfora.com> |
| 2015-04-15 16:15:11 -0700 |
| Commit: 8a53de1, github.com/apache/spark/pull/5237 |
| |
| [SPARK-2312] Logging Unhandled messages |
| Isaias Barroso <isaias.barroso@gmail.com> |
| 2015-04-15 22:40:52 +0100 |
| Commit: d5f1b96, github.com/apache/spark/pull/2055 |
| |
| [SPARK-2213] [SQL] sort merge join for spark sql |
| Daoyuan Wang <daoyuan.wang@intel.com>, Michael Armbrust <michael@databricks.com> |
| 2015-04-15 14:06:10 -0700 |
| Commit: 585638e, github.com/apache/spark/pull/5208 |
| |
| [SPARK-6898][SQL] completely support special chars in column names |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-04-15 13:39:12 -0700 |
| Commit: 4754e16, github.com/apache/spark/pull/5511 |
| |
| [SPARK-6937][MLLIB] Fixed bug in PICExample in which the radius were not being accepted on c... |
| sboeschhuawei <stephen.boesch@huawei.com> |
| 2015-04-15 13:28:10 -0700 |
| Commit: 557a797, github.com/apache/spark/pull/5531 |
| |
| [SPARK-6844][SQL] Clean up accumulators used in InMemoryRelation when it is uncached |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-15 13:15:58 -0700 |
| Commit: cf38fe0, github.com/apache/spark/pull/5475 |
| |
| [SPARK-6638] [SQL] Improve performance of StringType in SQL |
| Davies Liu <davies@databricks.com> |
| 2015-04-15 13:06:38 -0700 |
| Commit: 8584276, github.com/apache/spark/pull/5350 |
| |
| [SPARK-6887][SQL] ColumnBuilder misses FloatType |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-15 13:04:03 -0700 |
| Commit: 785f955, github.com/apache/spark/pull/5499 |
| |
| [SPARK-6800][SQL] Update doc for JDBCRelation's columnPartition |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-15 13:01:29 -0700 |
| Commit: e3e4e9a, github.com/apache/spark/pull/5488 |
| |
| [SPARK-6730][SQL] Allow using keyword as identifier in OPTIONS |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-15 13:00:19 -0700 |
| Commit: b75b307, github.com/apache/spark/pull/5520 |
| |
| [SPARK-6886] [PySpark] fix big closure with shuffle |
| Davies Liu <davies@databricks.com> |
| 2015-04-15 12:58:02 -0700 |
| Commit: f11288d, github.com/apache/spark/pull/5496 |
| |
| SPARK-6861 [BUILD] Scalastyle config prevents building Maven child modules alone |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-15 15:17:58 +0100 |
| Commit: 6c5ed8a, github.com/apache/spark/pull/5471 |
| |
| [HOTFIX] [SPARK-6896] [SQL] fix compile error in hive-thriftserver |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-15 10:23:53 +0100 |
| Commit: 29aabdd, github.com/apache/spark/pull/5507 |
| |
| [SPARK-6871][SQL] WITH clause in CTE cannot follow another WITH clause |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-14 23:47:16 -0700 |
| Commit: 6be9189, github.com/apache/spark/pull/5480 |
| |
| [SPARK-5634] [core] Show correct message in HS when no incomplete apps f... |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-14 18:52:48 -0700 |
| Commit: 30a6e0d, github.com/apache/spark/pull/5515 |
| |
| [SPARK-6890] [core] Fix launcher lib work with SPARK_PREPEND_CLASSES. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-14 18:51:39 -0700 |
| Commit: 9717389, github.com/apache/spark/pull/5504 |
| |
| [SPARK-6796][Streaming][WebUI] Add "Active Batches" and "Completed Batches" lists to StreamingPage |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-14 16:51:36 -0700 |
| Commit: 6de282e, github.com/apache/spark/pull/5434 |
| |
| Revert "[SPARK-6352] [SQL] Add DirectParquetOutputCommitter" |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-14 14:07:25 -0700 |
| Commit: a76b921 |
| |
| [SPARK-6769][YARN][TEST] Usage of the ListenerBus in YarnClusterSuite is wrong |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-04-14 14:00:49 -0700 |
| Commit: 4d4b249, github.com/apache/spark/pull/5417 |
| |
| [SPARK-5808] [build] Package pyspark files in sbt assembly. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-14 13:41:38 -0700 |
| Commit: 6577437, github.com/apache/spark/pull/5461 |
| |
| [SPARK-6905] Upgrade to snappy-java 1.1.1.7 |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-14 13:40:07 -0700 |
| Commit: 6adb8bc, github.com/apache/spark/pull/5512 |
| |
| [SPARK-6700] [yarn] Re-enable flaky test. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-14 13:34:44 -0700 |
| Commit: b075e4b, github.com/apache/spark/pull/5459 |
| |
| SPARK-1706: Allow multiple executors per worker in Standalone mode |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-04-14 13:32:06 -0700 |
| Commit: 8f8dc45, github.com/apache/spark/pull/731 |
| |
| [SPARK-2033] Automatically cleanup checkpoint |
| GuoQiang Li <witgo@qq.com> |
| 2015-04-14 12:56:47 -0700 |
| Commit: 25998e4, github.com/apache/spark/pull/855 |
| |
| [CORE] SPARK-6880: Fixed null check when all the dependent stages are cancelled due to previous stage failure |
| pankaj arora <pankaj.arora@guavus.com> |
| 2015-04-14 12:06:46 -0700 |
| Commit: dcf8a9f, github.com/apache/spark/pull/5494 |
| |
| [SPARK-6894]spark.executor.extraLibraryOptions => spark.executor.extraLibraryPath |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-04-14 12:02:11 -0700 |
| Commit: f63b44a, github.com/apache/spark/pull/5506 |
| |
| [SPARK-6081] Support fetching http/https uris in driver runner. |
| Timothy Chen <tnachen@gmail.com> |
| 2015-04-14 11:48:12 -0700 |
| Commit: 320bca4, github.com/apache/spark/pull/4832 |
| |
| SPARK-6878 [CORE] Fix for sum on empty RDD fails with exception |
| Erik van Oosten <evanoosten@ebay.com> |
| 2015-04-14 12:39:56 +0100 |
| Commit: 51b306b, github.com/apache/spark/pull/5489 |
| |
| [SPARK-6731] Bump version of apache commons-math3 |
| Punyashloka Biswal <punya.biswal@gmail.com> |
| 2015-04-14 11:43:06 +0100 |
| Commit: 628a72f, github.com/apache/spark/pull/5380 |
| |
| [WIP][HOTFIX][SPARK-4123]: Fix bug in PR dependency (all deps. removed issue) |
| Brennon York <brennon.york@capitalone.com> |
| 2015-04-13 22:31:44 -0700 |
| Commit: 77eeb10, github.com/apache/spark/pull/5443 |
| |
| [SPARK-5957][ML] better handling of parameters |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-13 21:18:05 -0700 |
| Commit: 971b95b, github.com/apache/spark/pull/5431 |
| |
| [Minor][SparkR] Minor refactor and removes redundancy related to cleanClosure. |
| hlin09 <hlin09pu@gmail.com> |
| 2015-04-13 20:43:24 -0700 |
| Commit: 0ba3fdd, github.com/apache/spark/pull/5495 |
| |
| [SPARK-5794] [SQL] fix add jar |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-13 18:26:00 -0700 |
| Commit: b45059d, github.com/apache/spark/pull/4586 |
| |
| [SQL] [Minor] Fix for SqlApp.scala |
| Fei Wang <wangfei1@huawei.com> |
| 2015-04-13 18:23:35 -0700 |
| Commit: 3782e1f, github.com/apache/spark/pull/5485 |
| |
| [Spark-4848] Allow different Worker configurations in standalone cluster |
| Nathan Kronenfeld <nkronenfeld@oculusinfo.com> |
| 2015-04-13 18:21:16 -0700 |
| Commit: 435b877, github.com/apache/spark/pull/5140 |
| |
| [SPARK-6877][SQL] Add code generation support for Min |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-13 18:16:33 -0700 |
| Commit: 4898dfa, github.com/apache/spark/pull/5487 |
| |
| [SPARK-6303][SQL] Remove unnecessary Average in GeneratedAggregate |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-13 18:15:29 -0700 |
| Commit: 5b8b324, github.com/apache/spark/pull/4996 |
| |
| [SPARK-6881][SparkR] Changes the checkpoint directory name. |
| hlin09 <hlin09pu@gmail.com> |
| 2015-04-13 16:53:50 -0700 |
| Commit: d7f2c19, github.com/apache/spark/pull/5493 |
| |
| [SPARK-5931][CORE] Use consistent naming for time properties |
| Ilya Ganelin <ilya.ganelin@capitalone.com>, Ilya Ganelin <ilganeli@gmail.com> |
| 2015-04-13 16:28:07 -0700 |
| Commit: c4ab255, github.com/apache/spark/pull/5236 |
| |
| [SPARK-5941] [SQL] Unit Test loads the table `src` twice for leftsemijoin.q |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-13 16:02:18 -0700 |
| Commit: c5602bd, github.com/apache/spark/pull/4506 |
| |
| [SPARK-6872] [SQL] add copy in external sort |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-13 16:00:58 -0700 |
| Commit: e63a86a, github.com/apache/spark/pull/5481 |
| |
| [SPARK-5972] [MLlib] Cache residuals and gradient in GBT during training and validation |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-13 15:36:33 -0700 |
| Commit: 2a55cb4, github.com/apache/spark/pull/5330 |
| |
| [SQL][SPARK-6742]: Don't push down predicates which reference partition column(s) |
| Yash Datta <Yash.Datta@guavus.com> |
| 2015-04-13 14:43:07 -0700 |
| Commit: 3a205bb, github.com/apache/spark/pull/5390 |
| |
| [SPARK-6130] [SQL] support if not exists for insert overwrite into partition in hiveQl |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-13 14:29:07 -0700 |
| Commit: 85ee0ca, github.com/apache/spark/pull/4865 |
| |
| [SPARK-5988][MLlib] add save/load for PowerIterationClusteringModel |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-04-13 11:53:17 -0700 |
| Commit: 1e340c3, github.com/apache/spark/pull/5450 |
| |
| [SPARK-6662][YARN] Allow variable substitution in spark.yarn.historyServer.address |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-04-13 13:45:10 -0500 |
| Commit: 6cc5b3e, github.com/apache/spark/pull/5321 |
| |
| [SPARK-6765] Enable scalastyle on test code. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-13 09:29:04 -0700 |
| Commit: c5b0b29, github.com/apache/spark/pull/5486 |
| |
| [SPARK-6207] [YARN] [SQL] Adds delegation tokens for metastore to conf. |
| Doug Balog <doug.balog@target.com> |
| 2015-04-13 09:49:58 -0500 |
| Commit: 77620be, github.com/apache/spark/pull/5031 |
| |
| [SPARK-6352] [SQL] Add DirectParquetOutputCommitter |
| Pei-Lun Lee <pllee@appier.com> |
| 2015-04-13 21:52:00 +0800 |
| Commit: b29663e, github.com/apache/spark/pull/5042 |
| |
| [SPARK-6870][Yarn] Catch InterruptedException when yarn application state monitor thread been interrupted |
| linweizhong <linweizhong@huawei.com> |
| 2015-04-13 13:06:54 +0100 |
| Commit: 202ebf0, github.com/apache/spark/pull/5479 |
| |
| [SPARK-6671] Add status command for spark daemons |
| Pradeep Chanumolu <pchanumolu@maprtech.com> |
| 2015-04-13 13:02:55 +0100 |
| Commit: 240ea03, github.com/apache/spark/pull/5327 |
| |
| [SPARK-6440][CORE]Handle IPv6 addresses properly when constructing URI |
| nyaapa <nyaapa@gmail.com> |
| 2015-04-13 12:55:25 +0100 |
| Commit: 9d117ce, github.com/apache/spark/pull/5424 |
| |
| [SPARK-6860][Streaming][WebUI] Fix the possible inconsistency of StreamingPage |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-13 12:21:29 +0100 |
| Commit: 14ce3ea, github.com/apache/spark/pull/5470 |
| |
| [SPARK-6762] Fix potential resource leaks in Checkpoint, CheckpointWriter and CheckpointReader |
| lisurprise <zhichao.li@intel.com> |
| 2015-04-13 12:18:05 +0100 |
| Commit: cadd7d7, github.com/apache/spark/pull/5407 |
| |
| [SPARK-6868][YARN] Fix broken container log link on executor page when HTTPS_ONLY. |
| Dean Chen <deanchen5@gmail.com> |
| 2015-04-13 12:08:55 +0100 |
| Commit: 950645d, github.com/apache/spark/pull/5477 |
| |
| [SPARK-6562][SQL] DataFrame.replace |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-12 22:56:12 -0700 |
| Commit: 68d1faa, github.com/apache/spark/pull/5282 |
| |
| [SPARK-5885][MLLIB] Add VectorAssembler as a feature transformer |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-12 22:42:01 -0700 |
| Commit: 9294044, github.com/apache/spark/pull/5196 |
| |
| [SPARK-5886][ML] Add StringIndexer as a feature transformer |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-12 22:41:05 -0700 |
| Commit: 685ddcf, github.com/apache/spark/pull/4735 |
| |
| [SPARK-4081] [mllib] VectorIndexer |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-12 22:38:27 -0700 |
| Commit: d3792f5, github.com/apache/spark/pull/3000 |
| |
| [SPARK-6643][MLLIB] Implement StandardScalerModel missing methods |
| lewuathe <lewuathe@me.com> |
| 2015-04-12 22:17:16 -0700 |
| Commit: fc17661, github.com/apache/spark/pull/5310 |
| |
| [SPARK-6765] Fix test code style for core. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-12 20:50:49 -0700 |
| Commit: a1fe59d, github.com/apache/spark/pull/5484 |
| |
| [MINOR] a typo: coalesce |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-04-12 18:58:53 +0100 |
| Commit: 04bcd67, github.com/apache/spark/pull/5482 |
| |
| [SPARK-6431][Streaming][Kafka] Error message for partition metadata requ... |
| cody koeninger <cody@koeninger.org> |
| 2015-04-12 17:37:30 +0100 |
| Commit: 6ac8eea, github.com/apache/spark/pull/5454 |
| |
| [SPARK-6843][core]Add volatile for the "state" |
| lisurprise <zhichao.li@intel.com> |
| 2015-04-12 13:41:44 +0100 |
| Commit: ddc1743, github.com/apache/spark/pull/5448 |
| |
| [SPARK-6866][Build] Remove duplicated dependency in launcher/pom.xml |
| Guancheng (G.C.) Chen <chenguancheng@gmail.com> |
| 2015-04-12 11:36:41 +0100 |
| Commit: e9445b1, github.com/apache/spark/pull/5476 |
| |
| [SPARK-6677] [SQL] [PySpark] fix cached classes |
| Davies Liu <davies@databricks.com> |
| 2015-04-11 22:33:23 -0700 |
| Commit: 5d8f7b9, github.com/apache/spark/pull/5445 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-11 22:12:56 -0700 |
| Commit: 0cc8fcb, github.com/apache/spark/pull/4994 |
| |
| SPARK-6710 [GRAPHX] Fixed wrong initial bias in SVDPlusPlus |
| Michael Malak <michaelmalak@yahoo.com> |
| 2015-04-11 21:01:23 -0700 |
| Commit: 1205f7e, github.com/apache/spark/pull/5464 |
| |
| [HOTFIX] Add explicit return types to fix lint errors |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-11 20:12:40 -0700 |
| Commit: dea5dac |
| |
| [SQL][minor] move `resolveGetField` into a object |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-04-11 19:35:56 -0700 |
| Commit: 5c2844c, github.com/apache/spark/pull/5435 |
| |
| [SPARK-6367][SQL] Use the proper data type for those expressions that are hijacking existing data types. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-11 19:26:15 -0700 |
| Commit: 6d4e854, github.com/apache/spark/pull/5094 |
| |
| [SQL] Handle special characters in the authority of a Path's URI. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-11 18:44:54 -0700 |
| Commit: d2383fb, github.com/apache/spark/pull/5381 |
| |
| [SPARK-6379][SQL] Support a function to call user-defined functions registered in SQLContext |
| Takeshi YAMAMURO <linguin.m.s@gmail.com> |
| 2015-04-11 18:41:12 -0700 |
| Commit: 352a5da, github.com/apache/spark/pull/5061 |
| |
| [SPARK-6179][SQL] Add token for "SHOW PRINCIPALS role_name" and "SHOW TRANSACTIONS" and "SHOW COMPACTIONS" |
| DoingDone9 <799203320@qq.com>, Zhongshuai Pei <799203320@qq.com>, Xu Tingjun <xutingjun@huawei.com> |
| 2015-04-11 18:34:17 -0700 |
| Commit: 48cc840, github.com/apache/spark/pull/4902 |
| |
| [Spark-5068][SQL] Fix bug querying data when path doesn't exist for HiveContext |
| lazymam500 <lazyman500@gmail.com>, lazyman <lazyman500@gmail.com> |
| 2015-04-11 18:33:14 -0700 |
| Commit: 1f39a61, github.com/apache/spark/pull/5059 |
| |
| [SPARK-6199] [SQL] Support CTE in HiveContext and SQLContext |
| haiyang <huhaiyang@huawei.com> |
| 2015-04-11 18:30:17 -0700 |
| Commit: 2f53588, github.com/apache/spark/pull/4929 |
| |
| [Minor][SQL] Fix typo in sql |
| Guancheng (G.C.) Chen <chenguancheng@gmail.com> |
| 2015-04-11 15:43:12 -0700 |
| Commit: 7dbd371, github.com/apache/spark/pull/5474 |
| |
| [SPARK-6863] Fix formatting on SQL programming guide. |
| Santiago M. Mola <santiago.mola@sap.com> |
| 2015-04-11 15:42:03 -0700 |
| Commit: 6437e7c, github.com/apache/spark/pull/5472 |
| |
| [SPARK-6611][SQL] Add support for INTEGER as synonym of INT. |
| Santiago M. Mola <santiago.mola@sap.com> |
| 2015-04-11 14:52:49 -0700 |
| Commit: 5f7b7cd, github.com/apache/spark/pull/5271 |
| |
| [SPARK-6858][SQL] Register Java HashMap for SparkSqlSerializer |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-11 14:50:50 -0700 |
| Commit: 198cf2a, github.com/apache/spark/pull/5465 |
| |
| [SPARK-6835] [SQL] Fix bug of Hive UDTF in Lateral View (ClassNotFound) |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-11 22:11:03 +0800 |
| Commit: 3ceb810, github.com/apache/spark/pull/5444 |
| |
| [hotfix] [build] Make sure JAVA_HOME is set for tests. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-11 13:10:01 +0100 |
| Commit: 694aef0, github.com/apache/spark/pull/5441 |
| |
| [Minor][Core] Fix typo |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-11 13:07:41 +0100 |
| Commit: 95a0759, github.com/apache/spark/pull/5466 |
| |
| [SQL] [SPARK-6620] Speed up toDF() and rdd() functions by constructing converters in ScalaReflection |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-04-10 16:27:56 -0700 |
| Commit: 67d0688, github.com/apache/spark/pull/5279 |
| |
| [SPARK-6851][SQL] Create new instance for each converted parquet relation |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-10 16:05:14 -0700 |
| Commit: 23d5f88, github.com/apache/spark/pull/5458 |
| |
| [SPARK-6850] [SparkR] use one partition when we need to compare the whole result |
| Davies Liu <davies@databricks.com> |
| 2015-04-10 15:35:45 -0700 |
| Commit: 68ecdb7, github.com/apache/spark/pull/5460 |
| |
| [SPARK-6216] [PySpark] check the python version in worker |
| Davies Liu <davies@databricks.com> |
| 2015-04-10 14:04:53 -0700 |
| Commit: 4740d6a, github.com/apache/spark/pull/5404 |
| |
| [SPARK-5969][PySpark] Fix descending pyspark.rdd.sortByKey. |
| Milan Straka <fox@ucw.cz> |
| 2015-04-10 13:50:32 -0700 |
| Commit: 0375134, github.com/apache/spark/pull/4761 |
| |
| [SQL] [SPARK-6794] Use kryo-based SparkSqlSerializer for GeneralHashedRelation |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-04-10 12:09:54 -0700 |
| Commit: b9baa4c, github.com/apache/spark/pull/5433 |
| |
| [SPARK-6773][Tests] Fix issue where RAT checks still passed when downloading the rat jar failed |
| June.He <jun.hejun@huawei.com> |
| 2015-04-10 20:02:35 +0100 |
| Commit: 9f5ed99, github.com/apache/spark/pull/5421 |
| |
| [SPARK-6766][Streaming] Fix issue about StreamingListenerBatchSubmitted and StreamingListenerBatchStarted |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-10 01:51:42 -0700 |
| Commit: 18ca089, github.com/apache/spark/pull/5414 |
| |
| [SPARK-6211][Streaming] Add Python Kafka API unit test |
| jerryshao <saisai.shao@intel.com>, Saisai Shao <saisai.shao@intel.com> |
| 2015-04-09 23:14:24 -0700 |
| Commit: 3290d2d, github.com/apache/spark/pull/4961 |
| |
| [SPARK-6577] [MLlib] [PySpark] SparseMatrix should be supported in PySpark |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-09 23:10:13 -0700 |
| Commit: e236081, github.com/apache/spark/pull/5355 |
| |
| [SPARK-3074] [PySpark] support groupByKey() with single huge key |
| Davies Liu <davies.liu@gmail.com>, Davies Liu <davies@databricks.com> |
| 2015-04-09 17:07:23 -0700 |
| Commit: b5c51c8, github.com/apache/spark/pull/1977 |
| |
| [Spark-6693][MLlib] Add toString with max lines and width for matrix |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-04-09 15:37:45 -0700 |
| Commit: 9c67049, github.com/apache/spark/pull/5344 |
| |
| [SPARK-6264] [MLLIB] Support FPGrowth algorithm in Python API |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-04-09 15:10:10 -0700 |
| Commit: a0411ae, github.com/apache/spark/pull/5213 |
| |
| [SPARK-6758] Block the right jetty package in log |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-04-09 17:44:08 -0400 |
| Commit: 7d92db3, github.com/apache/spark/pull/5406 |
| |
| [minor] [examples] Avoid packaging duplicate classes. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-09 07:07:50 -0400 |
| Commit: 470d745, github.com/apache/spark/pull/5379 |
| |
| SPARK-4924 addendum. Minor assembly directory fix in load-spark-env-sh |
| raschild <raschild@users.noreply.github.com> |
| 2015-04-09 07:04:18 -0400 |
| Commit: 53f6bb1, github.com/apache/spark/pull/5261 |
| |
| [SPARK-6343] Doc driver-worker network reqs |
| Peter Parente <pparent@us.ibm.com> |
| 2015-04-09 06:37:20 -0400 |
| Commit: b9c51c0, github.com/apache/spark/pull/5382 |
| |
| [SPARK-5654] Integrate SparkR |
| Shivaram Venkataraman <shivaram@cs.berkeley.edu>, Shivaram Venkataraman <shivaram.venkataraman@gmail.com>, Zongheng Yang <zongheng.y@gmail.com>, cafreeman <cfreeman@alteryx.com>, Shivaram Venkataraman <shivaram@eecs.berkeley.edu>, Davies Liu <davies@databricks.com>, Davies Liu <davies.liu@gmail.com>, hlin09 <hlin09pu@gmail.com>, Sun Rui <rui.sun@intel.com>, lythesia <iranaikimi@gmail.com>, oscaroboto <oscarjr@gmail.com>, Antonio Piccolboni <antonio@piccolboni.info>, root <edward>, edwardt <edwardt.tril@gmail.com>, hqzizania <qian.huang@intel.com>, dputler <dan.putler@gmail.com>, Todd Gao <todd.gao.2013@gmail.com>, Chris Freeman <cfreeman@alteryx.com>, Felix Cheung <fcheung@AVVOMAC-119.local>, Hossein <hossein@databricks.com>, Evert Lammerts <evert@apache.org>, Felix Cheung <fcheung@avvomac-119.t-mobile.com>, felixcheung <felixcheung_m@hotmail.com>, Ryan Hafen <rhafen@gmail.com>, Ashutosh Raina <ashutoshraina@users.noreply.github.com>, Oscar Olmedo <oscarjr@gmail.com>, Josh Rosen <rosenville@gmail.com>, Yi Lu <iranaikimi@gmail.com>, Harihar Nahak <hnahak87@users.noreply.github.com> |
| 2015-04-08 22:45:40 -0700 |
| Commit: 2fe0a1a, github.com/apache/spark/pull/5096 |
| |
| [SPARK-6765] Fix test code style for SQL |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-08 20:35:29 -0700 |
| Commit: 1b2aab8, github.com/apache/spark/pull/5412 |
| |
| [SPARK-6696] [SQL] Adds HiveContext.refreshTable to PySpark |
| Cheng Lian <lian@databricks.com> |
| 2015-04-08 18:47:39 -0700 |
| Commit: 891ada5, github.com/apache/spark/pull/5349 |
| |
| [SPARK-6451][SQL] supported code generation for CombineSum |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-04-08 18:42:34 -0700 |
| Commit: 7d7384c, github.com/apache/spark/pull/5138 |
| |
| [SQL][minor] remove duplicated resolveGetField and update comment |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-04-08 13:57:01 -0700 |
| Commit: 9418280, github.com/apache/spark/pull/5304 |
| |
| [SPARK-4346][SPARK-3596][YARN] Commonize the monitor logic |
| unknown <l00251599@HGHY1L002515991.china.huawei.com>, Sephiroth-Lin <linwzhong@gmail.com> |
| 2015-04-08 13:56:42 -0700 |
| Commit: 55a92ef, github.com/apache/spark/pull/5305 |
| |
| [SPARK-5242]: Add --private-ips flag to EC2 script |
| Michelangelo D'Agostino <mdagostino@civisanalytics.com> |
| 2015-04-08 16:48:45 -0400 |
| Commit: 86403f5, github.com/apache/spark/pull/5244 |
| |
| [SPARK-6767][SQL] Fixed Query DSL error in spark sql Readme |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-04-08 13:42:29 -0700 |
| Commit: 2f482d7, github.com/apache/spark/pull/5415 |
| |
| [SPARK-6781] [SQL] use sqlContext in python shell |
| Davies Liu <davies@databricks.com> |
| 2015-04-08 13:31:45 -0700 |
| Commit: 6ada4f6, github.com/apache/spark/pull/5425 |
| |
| [SPARK-6765] Fix test code style for mllib. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-08 11:32:44 -0700 |
| Commit: 66159c3, github.com/apache/spark/pull/5411 |
| |
| [SPARK-6765] Fix test code style for graphx. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-08 11:31:48 -0700 |
| Commit: 8d812f9, github.com/apache/spark/pull/5410 |
| |
| [SPARK-6753] Clone SparkConf in ShuffleSuite tests |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-04-08 10:26:45 -0700 |
| Commit: 9d44ddc, github.com/apache/spark/pull/5401 |
| |
| [SPARK-6506] [pyspark] Do not try to retrieve SPARK_HOME when not needed... |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-08 10:14:52 -0700 |
| Commit: f7e21dd, github.com/apache/spark/pull/5405 |
| |
| [SPARK-6765] Fix test code style for streaming. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-08 00:24:59 -0700 |
| Commit: 15e0d2b, github.com/apache/spark/pull/5409 |
| |
| [SPARK-6754] Remove unnecessary TaskContextHelper |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-04-07 22:40:42 -0700 |
| Commit: 8d2a36c, github.com/apache/spark/pull/5402 |
| |
| [SPARK-6705][MLLIB] Add fit intercept api to ml logisticregression |
| Omede Firouz <ofirouz@palantir.com> |
| 2015-04-07 23:36:31 -0400 |
| Commit: d138aa8, github.com/apache/spark/pull/5301 |
| |
| [SPARK-6737] Fix memory leak in OutputCommitCoordinator |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-07 16:18:55 -0700 |
| Commit: c83e039, github.com/apache/spark/pull/5397 |
| |
| [SPARK-6748] [SQL] Makes QueryPlan.schema a lazy val |
| Cheng Lian <lian@databricks.com> |
| 2015-04-08 07:00:56 +0800 |
| Commit: 77bcceb, github.com/apache/spark/pull/5398 |
| |
| [SPARK-6720][MLLIB] PySpark MultivariateStatisticalSummary unit test for normL1... |
| lewuathe <lewuathe@me.com> |
| 2015-04-07 14:36:57 -0700 |
| Commit: fc957dc, github.com/apache/spark/pull/5374 |
| |
| Revert "[SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has space in its path" |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-07 14:34:15 -0700 |
| Commit: e6f08fb |
| |
| [SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has space in its path |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-04-07 14:29:53 -0700 |
| Commit: 596ba77, github.com/apache/spark/pull/5347 |
| |
| [SPARK-6750] Upgrade ScalaStyle to 0.7. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-07 12:37:33 -0700 |
| Commit: 1232215, github.com/apache/spark/pull/5399 |
| |
| Replace use of .size with .length for Arrays |
| sksamuel <sam@sksamuel.com> |
| 2015-04-07 10:43:22 -0700 |
| Commit: 2c32bef, github.com/apache/spark/pull/5376 |
| |
| [SPARK-6733][Scheduler] Added scala.language.existentials |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-04-07 10:42:08 -0700 |
| Commit: 7162ecf, github.com/apache/spark/pull/5384 |
| |
| [SPARK-3591][YARN]fire and forget for YARN cluster mode |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-04-07 08:36:25 -0500 |
| Commit: b65bad6, github.com/apache/spark/pull/5297 |
| |
| [SPARK-6736][GraphX][Doc] Example of Graph#aggregateMessages has an error |
| Sasaki Toru <sasakitoa@nttdata.co.jp> |
| 2015-04-07 01:55:32 -0700 |
| Commit: ae980eb, github.com/apache/spark/pull/5388 |
| |
| [SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py |
| Matt Aasted <aasted@twitch.tv> |
| 2015-04-06 23:50:48 -0700 |
| Commit: 6f0d55d, github.com/apache/spark/pull/5302 |
| |
| [SPARK-6716] Change SparkContext.DRIVER_IDENTIFIER from <driver> to driver |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-04-06 23:33:16 -0700 |
| Commit: a0846c4, github.com/apache/spark/pull/5372 |
| |
| [Minor] [SQL] [SPARK-6729] Minor fix for DriverQuirks get |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-04-06 18:00:51 -0700 |
| Commit: e40ea87, github.com/apache/spark/pull/5378 |
| |
| [MLlib] [SPARK-6713] Iterators in columnSimilarities for mapPartitionsWithIndex |
| Reza Zadeh <reza@databricks.com> |
| 2015-04-06 13:15:01 -0700 |
| Commit: 30363ed, github.com/apache/spark/pull/5364 |
| |
| SPARK-6569 [STREAMING] Down-grade same-offset message in Kafka streaming to INFO |
| Sean Owen <sowen@cloudera.com> |
| 2015-04-06 10:18:56 +0100 |
| Commit: 9fe4125, github.com/apache/spark/pull/5366 |
| |
| [SPARK-6673] spark-shell.cmd can't start in Windows even when spark was built |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-04-06 10:11:20 +0100 |
| Commit: 49f3882, github.com/apache/spark/pull/5328 |
| |
| [SPARK-6602][Core] Update MapOutputTrackerMasterActor to MapOutputTrackerMasterEndpoint |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-05 21:57:15 -0700 |
| Commit: 0b5d028, github.com/apache/spark/pull/5371 |
| |
| [SPARK-6262][MLLIB]Implement missing methods for MultivariateStatisticalSummary |
| lewuathe <lewuathe@me.com> |
| 2015-04-05 16:13:31 -0700 |
| Commit: acffc43, github.com/apache/spark/pull/5359 |
| |
| [SPARK-6602][Core] Replace direct use of Akka with Spark RPC interface - part 1 |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-04 11:52:05 -0700 |
| Commit: f15806a, github.com/apache/spark/pull/5268 |
| |
| [SPARK-6607][SQL] Check invalid characters for Parquet schema and show error messages |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-05 00:20:43 +0800 |
| Commit: 7bca62f, github.com/apache/spark/pull/5263 |
| |
| [SQL] Use path.makeQualified in newParquet. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-04 23:26:10 +0800 |
| Commit: da25c86, github.com/apache/spark/pull/5353 |
| |
| [SPARK-6700] disable flaky test |
| Davies Liu <davies@databricks.com> |
| 2015-04-03 15:22:21 -0700 |
| Commit: 9b40c17, github.com/apache/spark/pull/5356 |
| |
| [SPARK-6647][SQL] Make trait StringComparison as BinaryPredicate and fix unit tests of string data source Filter |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-03 12:35:00 -0700 |
| Commit: 26b415e, github.com/apache/spark/pull/5309 |
| |
| [SPARK-6688] [core] Always use resolved URIs in EventLoggingListener. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-03 11:54:31 -0700 |
| Commit: 14632b7, github.com/apache/spark/pull/5340 |
| |
| Closes #3158 |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-03 11:53:07 -0700 |
| Commit: ffe8cc9 |
| |
| [SPARK-6640][Core] Fix the race condition of creating HeartbeatReceiver and retrieving HeartbeatReceiver |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-03 11:44:27 -0700 |
| Commit: 88504b7, github.com/apache/spark/pull/5306 |
| |
| [SPARK-6492][CORE] SparkContext.stop() can deadlock when DAGSchedulerEventProcessLoop dies |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-04-03 19:23:11 +0100 |
| Commit: 2c43ea3, github.com/apache/spark/pull/5277 |
| |
| [SPARK-5203][SQL] fix union with different decimal type |
| guowei2 <guowei2@asiainfo.com> |
| 2015-04-04 02:02:30 +0800 |
| Commit: c23ba81, github.com/apache/spark/pull/4004 |
| |
| [Minor][SQL] Fix typo |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-04-03 18:31:48 +0100 |
| Commit: dc6dff2, github.com/apache/spark/pull/5352 |
| |
| [SPARK-6615][MLLIB] Python API for Word2Vec |
| lewuathe <lewuathe@me.com> |
| 2015-04-03 09:49:50 -0700 |
| Commit: 512a2f1, github.com/apache/spark/pull/5296 |
| |
| [MLLIB] Remove println in LogisticRegression.scala |
| Omede Firouz <ofirouz@palantir.com> |
| 2015-04-03 10:26:43 +0100 |
| Commit: b52c7f9, github.com/apache/spark/pull/5338 |
| |
| [SPARK-6560][CORE] Do not suppress exceptions from writer.write. |
| Stephen Haberman <stephen@exigencecorp.com> |
| 2015-04-03 09:48:37 +0100 |
| Commit: b0d884f, github.com/apache/spark/pull/5223 |
| |
| [SPARK-6428] Turn on explicit type checking for public methods. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-03 01:25:02 -0700 |
| Commit: 82701ee, github.com/apache/spark/pull/5342 |
| |
| [SPARK-6575][SQL] Converted Parquet Metastore tables no longer cache metadata |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-03 14:40:36 +0800 |
| Commit: c42c3fc, github.com/apache/spark/pull/5339 |
| |
| [SPARK-6621][Core] Fix the bug that calling EventLoop.stop in EventLoop.onReceive/onError/onStart doesn't call onStop |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-02 22:54:30 -0700 |
| Commit: 440ea31, github.com/apache/spark/pull/5280 |
| |
| [SPARK-6345][STREAMING][MLLIB] Fix for training with prediction |
| freeman <the.freeman.lab@gmail.com> |
| 2015-04-02 21:37:44 -0700 |
| Commit: 6e1c1ec, github.com/apache/spark/pull/5037 |
| |
| [CORE] The description of the jobHistory config should be spark.history.fs.logDirectory |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-04-02 20:24:31 -0700 |
| Commit: 8a0aa81, github.com/apache/spark/pull/5332 |
| |
| [SPARK-6575][SQL] Converted Parquet Metastore tables no longer cache metadata |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-02 20:23:08 -0700 |
| Commit: 4b82bd7, github.com/apache/spark/pull/5339 |
| |
| [SPARK-6650] [core] Stop ExecutorAllocationManager when context stops. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-02 19:48:55 -0700 |
| Commit: 45134ec, github.com/apache/spark/pull/5311 |
| |
| [SPARK-6686][SQL] Use resolved output instead of names for toDF rename |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-02 18:30:55 -0700 |
| Commit: 052dee0, github.com/apache/spark/pull/5337 |
| |
| [SPARK-6243][SQL] The match operation did not consider the scenario where order.dataType does not match NativeType |
| DoingDone9 <799203320@qq.com> |
| 2015-04-02 17:23:51 -0700 |
| Commit: 947802c, github.com/apache/spark/pull/4959 |
| |
| [SQL][Minor] Use analyzed logical instead of unresolved in HiveComparisonTest |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-04-02 17:20:31 -0700 |
| Commit: dfd2982, github.com/apache/spark/pull/4946 |
| |
| [SPARK-6618][SPARK-6669][SQL] Lock Hive metastore client correctly. |
| Yin Huai <yhuai@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-04-02 16:46:50 -0700 |
| Commit: 5db8912, github.com/apache/spark/pull/5333 |
| |
| [Minor] [SQL] Follow-up of PR #5210 |
| Cheng Lian <lian@databricks.com> |
| 2015-04-02 16:15:34 -0700 |
| Commit: d3944b6, github.com/apache/spark/pull/5219 |
| |
| [SPARK-6655][SQL] We need to read the schema of a data source table stored in spark.sql.sources.schema property |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-02 16:02:31 -0700 |
| Commit: 251698f, github.com/apache/spark/pull/5313 |
| |
| [SQL] Throw UnsupportedOperationException instead of NotImplementedError |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-02 16:01:03 -0700 |
| Commit: 4214e50, github.com/apache/spark/pull/5315 |
| |
| SPARK-6414: Spark driver failed with NPE on job cancellation |
| Hung Lin <hung.lin@gmail.com> |
| 2015-04-02 14:01:43 -0700 |
| Commit: e3202aa, github.com/apache/spark/pull/5124 |
| |
| [SPARK-6667] [PySpark] remove setReuseAddress |
| Davies Liu <davies@databricks.com> |
| 2015-04-02 12:18:33 -0700 |
| Commit: 0cce545, github.com/apache/spark/pull/5324 |
| |
| [SPARK-6672][SQL] convert row to catalyst in createDataFrame(RDD[Row], ...) |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-02 17:57:01 +0800 |
| Commit: 424e987, github.com/apache/spark/pull/5329 |
| |
| [SPARK-6627] Some clean-up in shuffle code. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-04-01 23:42:09 -0700 |
| Commit: 6562787, github.com/apache/spark/pull/5286 |
| |
| [SPARK-6663] [SQL] use Literal.create instead of constructor |
| Davies Liu <davies@databricks.com> |
| 2015-04-01 23:11:38 -0700 |
| Commit: 40df5d4, github.com/apache/spark/pull/5320 |
| |
| Revert "[SPARK-6618][SQL] HiveMetastoreCatalog.lookupRelation should use fine-grained lock" |
| Cheng Lian <lian@databricks.com> |
| 2015-04-02 12:56:34 +0800 |
| Commit: 2bc7fe7 |
| |
| [SPARK-6658][SQL] Update DataFrame documentation to fix type references. |
| Chet Mancini <chetmancini@gmail.com> |
| 2015-04-01 21:39:46 -0700 |
| Commit: 191524e, github.com/apache/spark/pull/5316 |
| |
| [SPARK-6578] Small rewrite to make the logic more clear in MessageWithHeader.transferTo. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-01 18:36:06 -0700 |
| Commit: 899ebcb, github.com/apache/spark/pull/5319 |
| |
| [SPARK-6660][MLLIB] pythonToJava doesn't recognize object arrays |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 18:17:07 -0700 |
| Commit: 4815bc2, github.com/apache/spark/pull/5318 |
| |
| [SPARK-6553] [pyspark] Support functools.partial as UDF |
| ksonj <kson@siberie.de> |
| 2015-04-01 17:23:57 -0700 |
| Commit: 757b2e9, github.com/apache/spark/pull/5206 |
| |
| [SPARK-6580] [MLLIB] Optimize LogisticRegressionModel.predictPoint |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-04-01 17:19:36 -0700 |
| Commit: 86b4399, github.com/apache/spark/pull/5249 |
| |
| [SPARK-6576] [MLlib] [PySpark] DenseMatrix in PySpark should support indexing |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-04-01 17:03:39 -0700 |
| Commit: 2fa3b47, github.com/apache/spark/pull/5232 |
| |
| [SPARK-6642][MLLIB] use 1.2 lambda scaling and remove addImplicit from NormalEquation |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 16:47:18 -0700 |
| Commit: ccafd75, github.com/apache/spark/pull/5314 |
| |
| [SPARK-6578] [core] Fix thread-safety issue in outbound path of network library. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-01 16:06:11 -0700 |
| Commit: f084c5d, github.com/apache/spark/pull/5234 |
| |
| [SPARK-6657] [Python] [Docs] fixed python doc build warnings |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-01 15:15:47 -0700 |
| Commit: fb25e8c, github.com/apache/spark/pull/5317 |
| |
| [SPARK-6651][MLLIB] delegate dense vector arithmetics to the underlying numpy array |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 13:29:04 -0700 |
| Commit: 2275acc, github.com/apache/spark/pull/5312 |
| |
| SPARK-6433 hive tests to import spark-sql test JAR for QueryTest access |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-04-01 16:26:54 +0100 |
| Commit: ee11be2, github.com/apache/spark/pull/5119 |
| |
| [SPARK-6608] [SQL] Makes DataFrame.rdd a lazy val |
| Cheng Lian <lian@databricks.com> |
| 2015-04-01 21:34:45 +0800 |
| Commit: d36c5fc, github.com/apache/spark/pull/5265 |
| |
| SPARK-6626 [DOCS]: Corrected Scala:TwitterUtils parameters |
| jayson <jayson@ziprecruiter.com> |
| 2015-04-01 11:12:55 +0100 |
| Commit: 0358b08, github.com/apache/spark/pull/5295 |
| |
| [SPARK-6597][Minor] Replace `input:checkbox` with `input[type="checkbox"]` in additional-metrics.js |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-04-01 11:11:56 +0100 |
| Commit: d824c11, github.com/apache/spark/pull/5254 |
| |
| [EC2] [SPARK-6600] Open ports in ec2/spark_ec2.py to allow HDFS NFS gateway |
| Florian Verhein <florian.verhein@gmail.com> |
| 2015-04-01 11:10:43 +0100 |
| Commit: 4122623, github.com/apache/spark/pull/5257 |
| |
| [SPARK-4655][Core] Split Stage into ShuffleMapStage and ResultStage subclasses |
| Ilya Ganelin <ilya.ganelin@capitalone.com>, Ilya Ganelin <ilganeli@gmail.com> |
| 2015-04-01 11:09:00 +0100 |
| Commit: ff1915e, github.com/apache/spark/pull/4708 |
| |
| [Doc] Improve Python DataFrame documentation |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 18:31:36 -0700 |
| Commit: 305abe1, github.com/apache/spark/pull/5287 |
| |
| [SPARK-6614] OutputCommitCoordinator should clear authorized committer only after authorized committer fails, not after any failure |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-31 16:18:39 -0700 |
| Commit: 3732607, github.com/apache/spark/pull/5276 |
| |
| [SPARK-5692] [MLlib] Word2Vec save/load |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-31 16:01:08 -0700 |
| Commit: 0e00f12, github.com/apache/spark/pull/5291 |
| |
| [SPARK-6633][SQL] Should be "Contains" instead of "EndsWith" when constructing sources.StringContains |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-31 13:18:07 -0700 |
| Commit: 2036bc5, github.com/apache/spark/pull/5299 |
| |
| [SPARK-5371][SQL] Propagate types after function conversion, before futher resolution |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-31 11:34:29 -0700 |
| Commit: beebb7f, github.com/apache/spark/pull/5278 |
| |
| [SPARK-6255] [MLLIB] Support multiclass classification in Python API |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-31 11:32:14 -0700 |
| Commit: b5bd75d, github.com/apache/spark/pull/5137 |
| |
| [SPARK-6598][MLLIB] Python API for IDFModel |
| lewuathe <lewuathe@me.com> |
| 2015-03-31 11:25:21 -0700 |
| Commit: 46de6c0, github.com/apache/spark/pull/5264 |
| |
| [SPARK-6145][SQL] fix ORDER BY on nested fields |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-31 11:23:18 -0700 |
| Commit: cd48ca5, github.com/apache/spark/pull/5189 |
| |
| [SPARK-6575] [SQL] Adds configuration to disable schema merging while converting metastore Parquet tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 11:21:15 -0700 |
| Commit: 8102014, github.com/apache/spark/pull/5231 |
| |
| [SPARK-6555] [SQL] Overrides equals() and hashCode() for MetastoreRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 11:18:25 -0700 |
| Commit: a7992ff, github.com/apache/spark/pull/5289 |
| |
| [SPARK-4894][mllib] Added Bernoulli option to NaiveBayes model in mllib |
| leahmcguire <lmcguire@salesforce.com>, Joseph K. Bradley <joseph@databricks.com>, Leah McGuire <lmcguire@salesforce.com> |
| 2015-03-31 11:16:55 -0700 |
| Commit: d01a6d8, github.com/apache/spark/pull/4087 |
| |
| [SPARK-6542][SQL] add CreateStruct |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-31 17:05:23 +0800 |
| Commit: a05835b, github.com/apache/spark/pull/5195 |
| |
| [SPARK-6618][SQL] HiveMetastoreCatalog.lookupRelation should use fine-grained lock |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-31 16:28:40 +0800 |
| Commit: 314afd0, github.com/apache/spark/pull/5281 |
| |
| [SPARK-6623][SQL] Alias DataFrame.na.drop and DataFrame.na.fill in Python. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 00:25:23 -0700 |
| Commit: b80a030, github.com/apache/spark/pull/5284 |
| |
| [SPARK-6625][SQL] Add common string filters to data sources. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 00:19:51 -0700 |
| Commit: f07e714, github.com/apache/spark/pull/5285 |
| |
| [SPARK-5124][Core] Move StopCoordinator to the receive method since it does not require a reply |
| zsxwing <zsxwing@gmail.com> |
| 2015-03-30 22:10:49 -0700 |
| Commit: 5677557, github.com/apache/spark/pull/5283 |
| |
| [SPARK-6119][SQL] DataFrame support for missing data handling |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-30 20:47:10 -0700 |
| Commit: b8ff2bc, github.com/apache/spark/pull/5274 |
| |
| [SPARK-6369] [SQL] Uses commit coordinator to help committing Hive and Parquet tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 07:48:37 +0800 |
| Commit: fde6945, github.com/apache/spark/pull/5139 |
| |
| [SPARK-6603] [PySpark] [SQL] add SQLContext.udf and deprecate inferSchema() and applySchema |
| Davies Liu <davies@databricks.com> |
| 2015-03-30 15:47:00 -0700 |
| Commit: f76d2e5, github.com/apache/spark/pull/5273 |
| |
| [HOTFIX][SPARK-4123]: Updated to fix bug where multiple dependencies added breaks Github output |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-30 12:48:26 -0700 |
| Commit: df35500, github.com/apache/spark/pull/5269 |
| |
| [SPARK-6592][SQL] fix filter for scaladoc to generate API doc for Row class under catalyst dir |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-30 11:54:44 -0700 |
| Commit: 32259c6, github.com/apache/spark/pull/5252 |
| |
| [SPARK-6595][SQL] MetastoreRelation should be a MultiInstanceRelation |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-30 22:24:12 +0800 |
| Commit: fe81f6c, github.com/apache/spark/pull/5251 |
| |
| [HOTFIX] Update start-slave.sh |
| Jose Manuel Gomez <jmgomez@stratio.com> |
| 2015-03-30 14:59:08 +0100 |
| Commit: 19d4c39, github.com/apache/spark/pull/5262 |
| |
| [SPARK-5750][SPARK-3441][SPARK-5836][CORE] Added documentation explaining shuffle |
| Ilya Ganelin <ilya.ganelin@capitalone.com>, Ilya Ganelin <ilganeli@gmail.com> |
| 2015-03-30 11:52:02 +0100 |
| Commit: 4bdfb7b, github.com/apache/spark/pull/5074 |
| |
| [SPARK-6596] fix the instruction on building scaladoc |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-30 11:41:43 +0100 |
| Commit: de67330, github.com/apache/spark/pull/5253 |
| |
| [spark-sql] a better exception message than "scala.MatchError" for unsupported types in Schema creation |
| Eran Medan <ehrann.mehdan@gmail.com> |
| 2015-03-30 00:02:52 -0700 |
| Commit: 17b13c5, github.com/apache/spark/pull/5235 |
| |
| Fix string interpolator error in HeartbeatReceiver |
| Li Zhihui <zhihui.li@intel.com> |
| 2015-03-29 21:30:37 -0700 |
| Commit: 01dc9f5, github.com/apache/spark/pull/5255 |
| |
| [SPARK-5124][Core] A standard RPC interface and an Akka implementation |
| zsxwing <zsxwing@gmail.com> |
| 2015-03-29 21:25:09 -0700 |
| Commit: a8d53af, github.com/apache/spark/pull/4588 |
| |
| [SPARK-6585][Tests]Fix FileServerSuite testcase in some Env. |
| June.He <jun.hejun@huawei.com> |
| 2015-03-29 12:47:22 +0100 |
| Commit: 0e2753f, github.com/apache/spark/pull/5239 |
| |
| [SPARK-6558] Utils.getCurrentUserName returns the full principal name instead of login name |
| Thomas Graves <tgraves@apache.org> |
| 2015-03-29 12:43:30 +0100 |
| Commit: 52ece26, github.com/apache/spark/pull/5229 |
| |
| [SPARK-6406] Launch Spark using assembly jar instead of a separate launcher jar |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-03-29 12:40:37 +0100 |
| Commit: e3eb393, github.com/apache/spark/pull/5085 |
| |
| [SPARK-4123][Project Infra]: Show new dependencies added in pull requests |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-29 12:37:53 +0100 |
| Commit: 55153f5, github.com/apache/spark/pull/5093 |
| |
| [DOC] Improvements to Python docs. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-28 23:59:27 -0700 |
| Commit: 5eef00d, github.com/apache/spark/pull/5238 |
| |
| [SPARK-6571][MLLIB] use wrapper in MatrixFactorizationModel.load |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-28 15:08:05 -0700 |
| Commit: f75f633, github.com/apache/spark/pull/5243 |
| |
| [SPARK-6552][Deploy][Doc]expose start-slave.sh to user and update outdated doc |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-03-28 12:32:35 +0000 |
| Commit: 9963143, github.com/apache/spark/pull/5205 |
| |
| [SPARK-6538][SQL] Add missing nullable Metastore fields when merging a Parquet schema |
| Adam Budde <budde@amazon.com> |
| 2015-03-28 09:14:09 +0800 |
| Commit: 5909f09, github.com/apache/spark/pull/5214 |
| |
| [SPARK-6564][SQL] SQLContext.emptyDataFrame should contain 0 row, not 1 row |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-27 14:56:57 -0700 |
| Commit: 3af7334, github.com/apache/spark/pull/5226 |
| |
| [SPARK-6526][ML] Add Normalizer transformer in ML package |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-03-27 13:29:10 -0700 |
| Commit: d5497ab, github.com/apache/spark/pull/5181 |
| |
| [SPARK-6574] [PySpark] fix sql example |
| Davies Liu <davies@databricks.com> |
| 2015-03-27 11:42:26 -0700 |
| Commit: 887e1b7, github.com/apache/spark/pull/5230 |
| |
| [SPARK-6550][SQL] Use analyzed plan in DataFrame |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-27 11:40:00 -0700 |
| Commit: 5d9c37c, github.com/apache/spark/pull/5217 |
| |
| [SPARK-6544][build] Increment Avro version from 1.7.6 to 1.7.7 |
| Dean Chen <deanchen5@gmail.com> |
| 2015-03-27 14:32:51 +0000 |
| Commit: aa2b991, github.com/apache/spark/pull/5193 |
| |
| [SPARK-6556][Core] Fix wrong parsing logic of executorTimeoutMs and checkTimeoutIntervalMs in HeartbeatReceiver |
| zsxwing <zsxwing@gmail.com> |
| 2015-03-27 12:31:06 +0000 |
| Commit: da546b7, github.com/apache/spark/pull/5209 |
| |
| [SPARK-6341][mllib] Upgrade breeze from 0.11.1 to 0.11.2 |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-03-27 00:15:02 -0700 |
| Commit: f43a610, github.com/apache/spark/pull/5222 |
| |
| [SPARK-6405] Limiting the maximum Kryo buffer size to be 2GB. |
| mcheah <mcheah@palantir.com> |
| 2015-03-26 22:48:42 -0700 |
| Commit: 49d2ec6, github.com/apache/spark/pull/5218 |
| |
| [SPARK-6510][GraphX]: Add Graph#minus method to act as Set#difference |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-26 19:08:09 -0700 |
| Commit: 39fb579, github.com/apache/spark/pull/5175 |
| |
| [DOCS][SQL] Fix JDBC example |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-26 14:51:46 -0700 |
| Commit: aad0032, github.com/apache/spark/pull/5192 |
| |
| [SPARK-6554] [SQL] Don't push down predicates which reference partition column(s) |
| Cheng Lian <lian@databricks.com> |
| 2015-03-26 13:11:37 -0700 |
| Commit: 71a0d40, github.com/apache/spark/pull/5210 |
| |
| [SPARK-6117] [SQL] Improvements to DataFrame.describe() |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-26 12:26:13 -0700 |
| Commit: 784fcd5, github.com/apache/spark/pull/5201 |
| |
| SPARK-6532 [BUILD] LDAModel.scala fails scalastyle on Windows |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-26 10:52:31 -0700 |
| Commit: c3a52a0, github.com/apache/spark/pull/5211 |
| |
| SPARK-6480 [CORE] histogram() bucket function is wrong in some simple edge cases |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-26 15:00:23 +0000 |
| Commit: fe15ea9, github.com/apache/spark/pull/5148 |
| |
| [MLlib]remove unused import |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-03-26 13:27:05 +0000 |
| Commit: 3ddb975, github.com/apache/spark/pull/5207 |
| |
| [SQL][SPARK-6471]: Metastore schema should only be a subset of parquet schema to support dropping of columns using replace columns |
| Yash Datta <Yash.Datta@guavus.com> |
| 2015-03-26 21:13:38 +0800 |
| Commit: 1c05027, github.com/apache/spark/pull/5141 |
| |
| [SPARK-6468][Block Manager] Fix the race condition of subDirs in DiskBlockManager |
| zsxwing <zsxwing@gmail.com> |
| 2015-03-26 12:54:48 +0000 |
| Commit: 0c88ce5, github.com/apache/spark/pull/5136 |
| |
| [SPARK-6465][SQL] Fix serialization of GenericRowWithSchema using kryo |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-26 18:46:57 +0800 |
| Commit: f88f51b, github.com/apache/spark/pull/5191 |
| |
| [SPARK-6546][Build] Using the wrong code that will make spark compile failed!! |
| DoingDone9 <799203320@qq.com> |
| 2015-03-26 17:04:19 +0800 |
| Commit: 855cba8, github.com/apache/spark/pull/5198 |
| |
| [SPARK-6117] [SQL] add describe function to DataFrame for summary statis... |
| azagrebin <azagrebin@gmail.com> |
| 2015-03-26 00:25:04 -0700 |
| Commit: 5bbcd13, github.com/apache/spark/pull/5073 |
| |
| [SPARK-6536] [PySpark] Column.inSet() in Python |
| Davies Liu <davies@databricks.com> |
| 2015-03-26 00:01:24 -0700 |
| Commit: f535802, github.com/apache/spark/pull/5190 |
| |
| [SPARK-6463][SQL] AttributeSet.equal should compare size |
| sisihj <jun.hejun@huawei.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-25 19:21:54 -0700 |
| Commit: 276ef1c, github.com/apache/spark/pull/5194 |
| |
| The UT test of spark is failed. Because there is a test in SQLQuerySuite about creating table "test" |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-03-25 19:15:30 -0700 |
| Commit: e87bf37, github.com/apache/spark/pull/5150 |
| |
| [SPARK-6202] [SQL] enable variable substitution on test framework |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-25 18:43:26 -0700 |
| Commit: 5ab6e9f, github.com/apache/spark/pull/4930 |
| |
| [SPARK-6271][SQL] Sort these tokens in alphabetic order to avoid further duplicate in HiveQl |
| DoingDone9 <799203320@qq.com> |
| 2015-03-25 18:41:59 -0700 |
| Commit: 328daf6, github.com/apache/spark/pull/4973 |
| |
| [SPARK-6326][SQL] Improve castStruct to be faster |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-25 17:52:23 -0700 |
| Commit: 73d5775, github.com/apache/spark/pull/5017 |
| |
| [SPARK-5498][SQL]fix query exception when partition schema does not match table schema |
| jeanlyn <jeanlyn92@gmail.com> |
| 2015-03-25 17:47:45 -0700 |
| Commit: e6d1406, github.com/apache/spark/pull/4289 |
| |
| [SPARK-6450] [SQL] Fixes metastore Parquet table conversion |
| Cheng Lian <lian@databricks.com> |
| 2015-03-25 17:40:19 -0700 |
| Commit: 8c3b005, github.com/apache/spark/pull/5183 |
| |
| [SPARK-6079] Use index to speed up StatusTracker.getJobIdsForGroup() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-25 17:40:00 -0700 |
| Commit: d44a336, github.com/apache/spark/pull/4830 |
| |
| [SPARK-5987] [MLlib] Save/load for GaussianMixtureModels |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-25 14:45:23 -0700 |
| Commit: 4fc4d03, github.com/apache/spark/pull/4986 |
| |
| [SPARK-6256] [MLlib] MLlib Python API parity check for regression |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-25 13:38:33 -0700 |
| Commit: 4353373, github.com/apache/spark/pull/4997 |
| |
| [SPARK-5771] Master UI inconsistently displays application cores |
| Andrew Or <andrew@databricks.com> |
| 2015-03-25 13:28:32 -0700 |
| Commit: c1b74df, github.com/apache/spark/pull/5177 |
| |
| [SPARK-6537] UIWorkloadGenerator: The main thread should not stop SparkContext until all jobs finish |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-03-25 13:27:15 -0700 |
| Commit: acef51d, github.com/apache/spark/pull/5187 |
| |
| [SPARK-6076][Block Manager] Fix a potential OOM issue when StorageLevel is MEMORY_AND_DISK_SER |
| zsxwing <zsxwing@gmail.com> |
| 2015-03-25 12:09:30 -0700 |
| Commit: 883b7e9, github.com/apache/spark/pull/4827 |
| |
| [SPARK-6409][SQL] It is not necessary that avoid old inteface of hive, because this will make some UDAF can not work. |
| DoingDone9 <799203320@qq.com> |
| 2015-03-25 11:11:52 -0700 |
| Commit: 968408b, github.com/apache/spark/pull/5131 |
| |
| [ML][FEATURE] SPARK-5566: RegEx Tokenizer |
| Augustin Borsu <augustin@sagacify.com>, Augustin Borsu <a.borsu@gmail.com>, Augustin Borsu <aborsu@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-03-25 10:16:39 -0700 |
| Commit: 982952f, github.com/apache/spark/pull/4504 |
| |
| [SPARK-6496] [MLLIB] GeneralizedLinearAlgorithm.run(input, initialWeights) should initialize numFeatures |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-25 17:05:56 +0000 |
| Commit: 10c7860, github.com/apache/spark/pull/5167 |
| |
| [SPARK-6483][SQL]Improve ScalaUdf called performance. |
| zzcclp <xm_zzc@sina.com> |
| 2015-03-25 19:11:04 +0800 |
| Commit: 64262ed, github.com/apache/spark/pull/5154 |
| |
| [DOCUMENTATION]Fixed Missing Type Import in Documentation |
| Bill Chambers <wchambers@ischool.berkeley.edu>, anabranch <wac.chambers@gmail.com> |
| 2015-03-24 22:24:35 -0700 |
| Commit: c5cc414, github.com/apache/spark/pull/5179 |
| |
| [SPARK-6515] update OpenHashSet impl |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-24 18:58:27 -0700 |
| Commit: c14ddd9, github.com/apache/spark/pull/5176 |
| |
| [SPARK-6428][Streaming] Added explicit types for all public methods. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-24 17:08:25 -0700 |
| Commit: 9459865, github.com/apache/spark/pull/5110 |
| |
| [SPARK-6512] add contains to OpenHashMap |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-24 17:06:22 -0700 |
| Commit: 6930e96, github.com/apache/spark/pull/5171 |
| |
| [SPARK-6469] Improving documentation on YARN local directories usage |
| Christophe Préaud <christophe.preaud@kelkoo.com> |
| 2015-03-24 17:05:49 -0700 |
| Commit: 05c2214, github.com/apache/spark/pull/5165 |
| |
| Revert "[SPARK-5771] Number of Cores in Completed Applications of Standalone Master Web Page always be 0 if sc.stop() is called" |
| Andrew Or <andrew@databricks.com> |
| 2015-03-24 16:49:27 -0700 |
| Commit: dd907d1 |
| |
| Revert "[SPARK-5771][UI][hotfix] Change Requested Cores into * if default cores is not set" |
| Andrew Or <andrew@databricks.com> |
| 2015-03-24 16:41:31 -0700 |
| Commit: f7c3668 |
| |
| [SPARK-3570] Include time to open files in shuffle write time. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-03-24 16:29:40 -0700 |
| Commit: d8ccf65, github.com/apache/spark/pull/4550 |
| |
| [SPARK-6088] Correct how tasks that get remote results are shown in UI. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-03-24 16:26:43 -0700 |
| Commit: 6948ab6, github.com/apache/spark/pull/4839 |
| |
| [SPARK-6428][SQL] Added explicit types for all public methods in catalyst |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-24 16:03:55 -0700 |
| Commit: 7334801, github.com/apache/spark/pull/5162 |
| |
| [SPARK-6209] Clean up connections in ExecutorClassLoader after failing to load classes (master branch PR) |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-24 14:38:20 -0700 |
| Commit: 7215aa74, github.com/apache/spark/pull/4944 |
| |
| [SPARK-6458][SQL] Better error messages for invalid data sources |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 14:10:56 -0700 |
| Commit: a8f51b8, github.com/apache/spark/pull/5158 |
| |
| [SPARK-6376][SQL] Avoid eliminating subqueries until optimization |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 14:08:20 -0700 |
| Commit: cbeaf9e, github.com/apache/spark/pull/5160 |
| |
| [SPARK-6375][SQL] Fix formatting of error messages. |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 13:22:46 -0700 |
| Commit: 046c1e2, github.com/apache/spark/pull/5155 |
| |
| [SPARK-6054][SQL] Fix transformations of TreeNodes that hold StructTypes |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:28:01 -0700 |
| Commit: 3fa3d12, github.com/apache/spark/pull/5157 |
| |
| [SPARK-6437][SQL] Use completion iterator to close external sorter |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:10:30 -0700 |
| Commit: 26c6ce3, github.com/apache/spark/pull/5161 |
| |
| [SPARK-6459][SQL] Warn when constructing trivially true equals predicate |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:09:02 -0700 |
| Commit: 32efadd, github.com/apache/spark/pull/5163 |
| |
| [SPARK-6361][SQL] support adding a column with metadata in DF |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-24 12:08:19 -0700 |
| Commit: 6bdddb6, github.com/apache/spark/pull/5151 |
| |
| [SPARK-6475][SQL] recognize array types when infer data types from JavaBeans |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-24 10:11:27 -0700 |
| Commit: a1d1529, github.com/apache/spark/pull/5146 |
| |
| [ML][docs][minor] Define LabeledDocument/Document classes in CV example |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-03-24 16:33:38 +0000 |
| Commit: 08d4528, github.com/apache/spark/pull/5135 |
| |
| [SPARK-5559] [Streaming] [Test] Remove oppotunity we met flakiness when running FlumeStreamSuite |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-03-24 16:13:25 +0000 |
| Commit: 85cf063, github.com/apache/spark/pull/4337 |
| |
| [SPARK-6473] [core] Do not try to figure out Scala version if not needed... |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-24 13:48:33 +0000 |
| Commit: b293afc, github.com/apache/spark/pull/5143 |
| |
| Update the command to use IPython notebook |
| Cong Yue <yuecong1104@gmail.com> |
| 2015-03-24 12:56:13 +0000 |
| Commit: c12312f, github.com/apache/spark/pull/5111 |
| |
| [SPARK-6477][Build]: Run MIMA tests before the Spark test suite |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-24 10:33:04 +0000 |
| Commit: 37fac1d, github.com/apache/spark/pull/5145 |
| |
| [SPARK-6452] [SQL] Checks for missing attributes and unresolved operator for all types of operator |
| Cheng Lian <lian@databricks.com> |
| 2015-03-24 01:12:11 -0700 |
| Commit: 1afcf77, github.com/apache/spark/pull/5129 |
| |
| [SPARK-6428] Added explicit types for all public methods in core. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-23 23:41:06 -0700 |
| Commit: 4ce2782, github.com/apache/spark/pull/5125 |
| |
| [SPARK-6124] Support jdbc connection properties in OPTIONS part of the query |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-23 17:00:27 -0700 |
| Commit: bfd3ee9, github.com/apache/spark/pull/4859 |
| |
| Revert "[SPARK-6122][Core] Upgrade Tachyon client version to 0.6.1." |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-23 15:08:39 -0700 |
| Commit: 6cd7058 |
| |
| [SPARK-6308] [MLlib] [Sql] Override TypeName in VectorUDT and MatrixUDT |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-23 13:30:21 -0700 |
| Commit: 474d132, github.com/apache/spark/pull/5118 |
| |
| [SPARK-6397][SQL] Check the missingInput simply |
| Yadong Qi <qiyadong2010@gmail.com> |
| 2015-03-23 18:16:49 +0800 |
| Commit: 9f3273b, github.com/apache/spark/pull/5132 |
| |
| Revert "[SPARK-6397][SQL] Check the missingInput simply" |
| Cheng Lian <lian@databricks.com> |
| 2015-03-23 12:15:19 +0800 |
| Commit: bf044de |
| |
| [SPARK-6397][SQL] Check the missingInput simply |
| q00251598 <qiyadong@huawei.com> |
| 2015-03-23 12:06:13 +0800 |
| Commit: e566fe5, github.com/apache/spark/pull/5082 |
| |
| [SPARK-4985] [SQL] parquet support for date type |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-23 11:46:16 +0800 |
| Commit: 4659468, github.com/apache/spark/pull/3822 |
| |
| [SPARK-6337][Documentation, SQL]Spark 1.3 doc fixes |
| vinodkc <vinod.kc.in@gmail.com> |
| 2015-03-22 20:00:08 +0000 |
| Commit: 2bf40c5, github.com/apache/spark/pull/5112 |
| |
| [HOTFIX] Build break due to https://github.com/apache/spark/pull/5128 |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-22 12:08:15 -0700 |
| Commit: 7a0da47 |
| |
| [SPARK-6122][Core] Upgrade Tachyon client version to 0.6.1. |
| Calvin Jia <jia.calvin@gmail.com> |
| 2015-03-22 11:11:29 -0700 |
| Commit: a41b9c6, github.com/apache/spark/pull/4867 |
| |
| SPARK-6454 [DOCS] Fix links to pyspark api |
| Kamil Smuga <smugakamil@gmail.com>, stderr <smugakamil@gmail.com> |
| 2015-03-22 15:56:25 +0000 |
| Commit: 6ef4863, github.com/apache/spark/pull/5120 |
| |
| [SPARK-6453][Mesos] Some Mesos*Suite have a different package with their classes |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-22 15:53:18 +0000 |
| Commit: adb2ff7, github.com/apache/spark/pull/5126 |
| |
| [SPARK-6455] [docs] Correct some mistakes and typos |
| Hangchen Yu <yuhc@gitcafe.com> |
| 2015-03-22 15:51:10 +0000 |
| Commit: ab4f516, github.com/apache/spark/pull/5128 |
| |
| [SPARK-6448] Make history server log parse exceptions |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-03-22 11:54:23 +0000 |
| Commit: b9fe504, github.com/apache/spark/pull/5122 |
| |
| [SPARK-6408] [SQL] Fix JDBCRDD filtering string literals |
| ypcat <ypcat6@gmail.com>, Pei-Lun Lee <pllee@appier.com> |
| 2015-03-22 15:49:13 +0800 |
| Commit: 9b1e1f2, github.com/apache/spark/pull/5087 |
| |
| [SPARK-6428][SQL] Added explicit type for all public methods for Hive module |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-21 14:30:04 -0700 |
| Commit: b6090f9, github.com/apache/spark/pull/5108 |
| |
| [SPARK-6250][SPARK-6146][SPARK-5911][SQL] Types are now reserved words in DDL parser. |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-21 13:27:53 -0700 |
| Commit: 94a102a, github.com/apache/spark/pull/5078 |
| |
| [SPARK-5680][SQL] Sum function on all null values, should return zero |
| Venkata Ramana G <ramana.gollamudi@huawei.com>, Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-03-21 13:24:24 -0700 |
| Commit: ee569a0, github.com/apache/spark/pull/4466 |
| |
| [SPARK-5320][SQL]Add statistics method at NoRelation (override super). |
| x1- <viva008@gmail.com> |
| 2015-03-21 13:22:34 -0700 |
| Commit: 52dd4b2, github.com/apache/spark/pull/5105 |
| |
| [SPARK-5821] [SQL] JSON CTAS command should throw error message when delete path failure |
| Yanbo Liang <ybliang8@gmail.com>, Yanbo Liang <yanbohappy@gmail.com> |
| 2015-03-21 11:23:28 +0800 |
| Commit: e5d2c37, github.com/apache/spark/pull/4610 |
| |
| [SPARK-6315] [SQL] Also tries the case class string parser while reading Parquet schema |
| Cheng Lian <lian@databricks.com> |
| 2015-03-21 11:18:45 +0800 |
| Commit: 937c1e5, github.com/apache/spark/pull/5034 |
| |
| [SPARK-5821] [SQL] ParquetRelation2 CTAS should check if delete is successful |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-21 10:53:04 +0800 |
| Commit: bc37c97, github.com/apache/spark/pull/5107 |
| |
| [SPARK-6025] [MLlib] Add helper method evaluateEachIteration to extract learning curve |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-20 17:14:09 -0700 |
| Commit: 25e271d, github.com/apache/spark/pull/4906 |
| |
| [SPARK-6428][SQL] Added explicit type for all public methods in sql/core |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-20 15:47:07 -0700 |
| Commit: a95043b, github.com/apache/spark/pull/5104 |
| |
| [SPARK-6421][MLLIB] _regression_train_wrapper does not test initialWeights correctly |
| lewuathe <lewuathe@me.com> |
| 2015-03-20 17:18:18 -0400 |
| Commit: 257cde7, github.com/apache/spark/pull/5101 |
| |
| [SPARK-6309] [SQL] [MLlib] Implement MatrixUDT |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-20 17:13:18 -0400 |
| Commit: 11e0259, github.com/apache/spark/pull/5048 |
| |
| [SPARK-6423][Mesos] MemoryUtils should use memoryOverhead if it's set |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-20 19:14:35 +0000 |
| Commit: 49a01c7, github.com/apache/spark/pull/5099 |
| |
| [SPARK-5955][MLLIB] add checkpointInterval to ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-20 15:02:57 -0400 |
| Commit: 6b36470, github.com/apache/spark/pull/5076 |
| |
| [Spark 6096][MLlib] Add Naive Bayes load save methods in Python |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-03-20 14:53:59 -0400 |
| Commit: 25636d9, github.com/apache/spark/pull/5090 |
| |
| [MLlib] SPARK-5954: Top by key |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-03-20 14:45:44 -0400 |
| Commit: 5e6ad24, github.com/apache/spark/pull/5075 |
| |
| [SPARK-6095] [MLLIB] Support model save/load in Python's linear models |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-20 14:44:21 -0400 |
| Commit: 48866f7, github.com/apache/spark/pull/5016 |
| |
| [SPARK-6371] [build] Update version to 1.4.0-SNAPSHOT. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-20 18:43:57 +0000 |
| Commit: a745645, github.com/apache/spark/pull/5056 |
| |
| [SPARK-6426][Doc]User could also point the yarn cluster config directory via YARN_CONF_DI... |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-03-20 18:42:18 +0000 |
| Commit: 385b2ff, github.com/apache/spark/pull/5103 |
| |
| [SPARK-6370][core] Documentation: Improve all 3 docs for RDD.sample |
| mbonaci <mbonaci@gmail.com> |
| 2015-03-20 18:30:45 +0000 |
| Commit: 28bcb9e, github.com/apache/spark/pull/5097 |
| |
| [SPARK-6428][MLlib] Added explicit type for public methods and implemented hashCode when equals is defined. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-20 14:13:02 -0400 |
| Commit: db4d317, github.com/apache/spark/pull/5102 |
| |
| SPARK-6338 [CORE] Use standard temp dir mechanisms in tests to avoid orphaned temp files |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-20 14:16:21 +0000 |
| Commit: 6f80c3e, github.com/apache/spark/pull/5029 |
| |
| SPARK-5134 [BUILD] Bump default Hadoop version to 2+ |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-20 14:14:53 +0000 |
| Commit: d08e3eb, github.com/apache/spark/pull/5027 |
| |
| [SPARK-6286][Mesos][minor] Handle missing Mesos case TASK_ERROR |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-20 12:24:34 +0000 |
| Commit: 116c553, github.com/apache/spark/pull/5088 |
| |
| Tighten up field/method visibility in Executor and made some code more clear to read. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-19 22:12:01 -0400 |
| Commit: 0745a30, github.com/apache/spark/pull/4850 |
| |
| [SPARK-6219] [Build] Check that Python code compiles |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-03-19 12:46:10 -0700 |
| Commit: f17d43b, github.com/apache/spark/pull/4941 |
| |
| [Core][minor] remove unused `visitedStages` in `DAGScheduler.stageDependsOn` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-03-19 15:25:32 -0400 |
| Commit: 3b5aaa6, github.com/apache/spark/pull/5086 |
| |
| [SPARK-5313][Project Infra]: Create simple framework for highlighting changes introduced in a PR |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-19 11:18:24 -0400 |
| Commit: 8cb23a1, github.com/apache/spark/pull/5072 |
| |
| [SPARK-6291] [MLLIB] GLM toString & toDebugString |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-19 11:10:20 -0400 |
| Commit: dda4ded, github.com/apache/spark/pull/5038 |
| |
| [SPARK-5843] [API] Allowing map-side combine to be specified in Java. |
| mcheah <mcheah@palantir.com> |
| 2015-03-19 08:51:49 -0400 |
| Commit: 3c4e486, github.com/apache/spark/pull/4634 |
| |
| [SPARK-6402][DOC] - Remove some refererences to shark in docs and ec2 |
| Pierre Borckmans <pierre.borckmans@realimpactanalytics.com> |
| 2015-03-19 08:02:06 -0400 |
| Commit: 797f8a0, github.com/apache/spark/pull/5083 |
| |
| [SPARK-4012] stop SparkContext when the exception is thrown from an infinite loop |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-18 23:48:45 -0700 |
| Commit: 2c3f83c, github.com/apache/spark/pull/5004 |
| |
| [SPARK-6222][Streaming] Dont delete checkpoint data when doing pre-batch-start checkpoint |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-19 02:15:50 -0400 |
| Commit: 645cf3f, github.com/apache/spark/pull/5008 |
| |
| [SPARK-6394][Core] cleanup BlockManager companion object and improve the getCacheLocs method in DAGScheduler |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-03-18 19:43:04 -0700 |
| Commit: 540b2a4, github.com/apache/spark/pull/5043 |
| |
| SPARK-6085 Part. 2 Increase default value for memory overhead |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-18 20:54:22 -0400 |
| Commit: 3db1387, github.com/apache/spark/pull/5065 |
| |
| [SPARK-6374] [MLlib] add get for GeneralizedLinearAlgo |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-03-18 13:44:37 -0400 |
| Commit: a95ee24, github.com/apache/spark/pull/5058 |
| |
| [SPARK-6325] [core,yarn] Do not change target executor count when killing executors. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-18 09:18:28 -0400 |
| Commit: 981fbaf, github.com/apache/spark/pull/5018 |
| |
| [SPARK-6286][minor] Handle missing Mesos case TASK_ERROR. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-03-18 09:15:33 -0400 |
| Commit: 9d112a9, github.com/apache/spark/pull/5000 |
| |
| SPARK-6389 YARN app diagnostics report doesn't report NPEs |
| Steve Loughran <stevel@hortonworks.com> |
| 2015-03-18 09:09:32 -0400 |
| Commit: e09c852, github.com/apache/spark/pull/5070 |
| |
| [SPARK-6372] [core] Propagate --conf to child processes. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-18 09:06:57 -0400 |
| Commit: 6205a25, github.com/apache/spark/pull/5057 |
| |
| [SPARK-6247][SQL] Fix resolution of ambiguous joins caused by new aliases |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-17 19:47:51 -0700 |
| Commit: 3579003, github.com/apache/spark/pull/5062 |
| |
| [SPARK-5651][SQL] Add input64 in blacklist and add test suit for create table within backticks |
| watermen <qiyadong2010@gmail.com>, q00251598 <qiyadong@huawei.com> |
| 2015-03-17 19:35:18 -0700 |
| Commit: a6ee2f7, github.com/apache/spark/pull/4427 |
| |
| [SPARK-5404] [SQL] Update the default statistic number |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-03-17 19:32:38 -0700 |
| Commit: 78cb08a, github.com/apache/spark/pull/4914 |
| |
| [SPARK-5908][SQL] Resolve UdtfsAlias when only single Alias is used |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-17 18:58:52 -0700 |
| Commit: 5c80643, github.com/apache/spark/pull/4692 |
| |
| [SPARK-6383][SQL] Fixed compiler errors in Dataframe examples |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-03-17 18:50:19 -0700 |
| Commit: a012e08, github.com/apache/spark/pull/5068 |
| |
| [SPARK-6366][SQL] In Python API, the default save mode for save and saveAsTable should be "error" instead of "append". |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-18 09:41:06 +0800 |
| Commit: dc9c919, github.com/apache/spark/pull/5053 |
| |
| [SPARK-6330] [SQL] Add a test case for SPARK-6330 |
| Pei-Lun Lee <pllee@appier.com> |
| 2015-03-18 08:34:46 +0800 |
| Commit: 4633a87, github.com/apache/spark/pull/5039 |
| |
| [SPARK-6226][MLLIB] add save/load in PySpark's KMeansModel |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-17 12:14:40 -0700 |
| Commit: c94d062, github.com/apache/spark/pull/5049 |
| |
| [SPARK-6336] LBFGS should document what convergenceTol means |
| lewuathe <lewuathe@me.com> |
| 2015-03-17 12:11:57 -0700 |
| Commit: d9f3e01, github.com/apache/spark/pull/5033 |
| |
| [SPARK-6313] Add config option to disable file locks/fetchFile cache to ... |
| nemccarthy <nathan@nemccarthy.me> |
| 2015-03-17 09:33:11 -0700 |
| Commit: 4cca391, github.com/apache/spark/pull/5036 |
| |
| [SPARK-3266] Use intermediate abstract classes to fix type erasure issues in Java APIs |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-17 09:18:57 -0700 |
| Commit: 0f673c2, github.com/apache/spark/pull/5050 |
| |
| [SPARK-6365] jetty-security also needed for SPARK_PREPEND_CLASSES to work |
| Imran Rashid <irashid@cloudera.com> |
| 2015-03-17 09:41:06 -0500 |
| Commit: e9f22c6, github.com/apache/spark/pull/5052 |
| |
| [SPARK-6331] Load new master URL if present when recovering streaming context from checkpoint |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-17 05:31:27 -0700 |
| Commit: c928796, github.com/apache/spark/pull/5024 |
| |
| [docs] [SPARK-4820] Spark build encounters "File name too long" on some encrypted filesystems |
| Theodore Vasiloudis <tvas@sics.se> |
| 2015-03-17 11:25:01 +0000 |
| Commit: e26db9be, github.com/apache/spark/pull/5041 |
| |
| [SPARK-6269] [CORE] Use ScalaRunTime's array methods instead of java.lang.reflect.Array in size estimation |
| mcheah <mcheah@palantir.com>, Justin Uang <justin.uang@gmail.com> |
| 2015-03-17 11:20:20 +0000 |
| Commit: 005d1c5, github.com/apache/spark/pull/4972 |
| |
| [SPARK-4011] tighten the visibility of the members in Master/Worker class |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-17 11:18:27 +0000 |
| Commit: 25f3580, github.com/apache/spark/pull/4844 |
| |
| SPARK-6044 [CORE] RDD.aggregate() should not use the closure serializer on the zero value |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-16 23:58:52 -0700 |
| Commit: b2d8c02, github.com/apache/spark/pull/5028 |
| |
| [SPARK-6357][GraphX] Add unapply in EdgeContext |
| Takeshi YAMAMURO <linguin.m.s@gmail.com> |
| 2015-03-16 23:54:54 -0700 |
| Commit: b3e6eca, github.com/apache/spark/pull/5047 |
| |
| [SQL][docs][minor] Fixed sample code in SQLContext scaladoc |
| Lomig Mégard <lomig.megard@gmail.com> |
| 2015-03-16 23:52:42 -0700 |
| Commit: 6870722, github.com/apache/spark/pull/5051 |
| |
| [SPARK-6299][CORE] ClassNotFoundException in standalone mode when running groupByKey with class defined in REPL |
| Kevin (Sangwoo) Kim <sangwookim.me@gmail.com> |
| 2015-03-16 23:49:23 -0700 |
| Commit: f0edeae, github.com/apache/spark/pull/5046 |
| |
| [SPARK-5712] [SQL] fix comment with semicolon at end |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-17 12:29:15 +0800 |
| Commit: 9667b9f, github.com/apache/spark/pull/4500 |
| |
| [SPARK-6327] [PySpark] fix launch spark-submit from python |
| Davies Liu <davies@databricks.com> |
| 2015-03-16 16:26:55 -0700 |
| Commit: e3f315a, github.com/apache/spark/pull/5019 |
| |
| [SPARK-6077] Remove streaming tab while stopping StreamingContext |
| lisurprise <zhichao.li@intel.com> |
| 2015-03-16 13:10:32 -0700 |
| Commit: f149b8b, github.com/apache/spark/pull/4828 |
| |
| [SPARK-6330] Fix filesystem bug in newParquet relation |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-16 12:13:18 -0700 |
| Commit: d19efed, github.com/apache/spark/pull/5020 |
| |
| [SPARK-2087] [SQL] Multiple thriftserver sessions with single HiveContext instance |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-03-17 01:09:27 +0800 |
| Commit: 12a345a, github.com/apache/spark/pull/4885 |
| |
| [SPARK-6300][Spark Core] sc.addFile(path) does not support the relative path. |
| DoingDone9 <799203320@qq.com> |
| 2015-03-16 12:27:15 +0000 |
| Commit: 00e730b, github.com/apache/spark/pull/4993 |
| |
| [SPARK-5922][GraphX]: Add diff(other: RDD[VertexId, VD]) in VertexRDD |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-16 01:06:26 -0700 |
| Commit: 45f4c66, github.com/apache/spark/pull/4733 |
| |
| [SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688 |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-15 15:46:55 +0000 |
| Commit: aa6536f, github.com/apache/spark/pull/4361 |
| |
| [SPARK-6285][SQL]Remove ParquetTestData in SparkBuild.scala and in README.md |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-03-15 20:44:45 +0800 |
| Commit: 62ede53, github.com/apache/spark/pull/5032 |
| |
| [SPARK-5790][GraphX]: VertexRDD's won't zip properly for `diff` capability (added tests) |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-14 17:38:12 +0000 |
| Commit: c49d156, github.com/apache/spark/pull/5023 |
| |
| [SPARK-6329][Docs]: Minor doc changes for Mesos and TOC |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-14 17:28:13 +0000 |
| Commit: 127268b, github.com/apache/spark/pull/5022 |
| |
| [SPARK-6195] [SQL] Adds in-memory column type for fixed-precision decimals |
| Cheng Lian <lian@databricks.com> |
| 2015-03-14 19:53:54 +0800 |
| Commit: 5be6b0e, github.com/apache/spark/pull/4938 |
| |
| [SQL] Delete some duplicate code in HiveThriftServer2 |
| ArcherShao <ArcherShao@users.noreply.github.com>, ArcherShao <shaochuan@huawei.com> |
| 2015-03-14 08:27:18 +0000 |
| Commit: ee15404, github.com/apache/spark/pull/5007 |
| |
| [SPARK-6210] [SQL] use prettyString as column name in agg() |
| Davies Liu <davies@databricks.com> |
| 2015-03-14 00:43:33 -0700 |
| Commit: b38e073, github.com/apache/spark/pull/5006 |
| |
| [SPARK-6317][SQL]Fixed HIVE console startup issue |
| vinodkc <vinod.kc.in@gmail.com>, Vinod K C <vinod.kc@huawei.com> |
| 2015-03-14 07:17:54 +0800 |
| Commit: e360d5e, github.com/apache/spark/pull/5011 |
| |
| [SPARK-6285] [SQL] Removes unused ParquetTestData and duplicated TestGroupWriteSupport |
| Cheng Lian <lian@databricks.com> |
| 2015-03-14 07:09:53 +0800 |
| Commit: cdc34ed, github.com/apache/spark/pull/5010 |
| |
| [SPARK-4600][GraphX]: org.apache.spark.graphx.VertexRDD.diff does not work |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-13 18:48:31 +0000 |
| Commit: b943f5d, github.com/apache/spark/pull/5015 |
| |
| [SPARK-6278][MLLIB] Mention the change of objective in linear regression |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-13 10:27:28 -0700 |
| Commit: 7f13434, github.com/apache/spark/pull/4978 |
| |
| [SPARK-6252] [mllib] Added getLambda to Scala NaiveBayes |
| Joseph K. Bradley <joseph.kurata.bradley@gmail.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-03-13 10:26:09 -0700 |
| Commit: dc4abd4, github.com/apache/spark/pull/4969 |
| |
| [CORE][minor] remove unnecessary ClassTag in `DAGScheduler` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-03-13 14:08:56 +0000 |
| Commit: ea3d2ee, github.com/apache/spark/pull/4992 |
| |
| [SPARK-6197][CORE] handle json exception when history file not finished writing |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-03-13 13:59:54 +0000 |
| Commit: 9048e81, github.com/apache/spark/pull/4927 |
| |
| [SPARK-5310] [SQL] [DOC] Parquet section for the SQL programming guide |
| Cheng Lian <lian@databricks.com> |
| 2015-03-13 21:34:50 +0800 |
| Commit: 69ff8e8, github.com/apache/spark/pull/5001 |
| |
| [SPARK-5845][Shuffle] Time to cleanup spilled shuffle files not included in shuffle write time |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-03-13 13:21:04 +0000 |
| Commit: 0af9ea7, github.com/apache/spark/pull/4965 |
| |
| HOTFIX: Changes to release script. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-12 18:36:17 -0700 |
| Commit: 3980ebd |
| |
| [mllib] [python] Add LassoModel to __all__ in regression.py |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-03-12 16:46:29 -0700 |
| Commit: 17c309c, github.com/apache/spark/pull/4970 |
| |
| [SPARK-4588] ML Attributes |
| Xiangrui Meng <meng@databricks.com>, Sean Owen <sowen@cloudera.com> |
| 2015-03-12 16:34:56 -0700 |
| Commit: a4b2716, github.com/apache/spark/pull/4925 |
| |
| [SPARK-6268][MLlib] KMeans parameter getter methods |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-03-12 15:17:46 -0700 |
| Commit: fb4787c, github.com/apache/spark/pull/4974 |
| |
| [build] [hotfix] Fix make-distribution.sh for Scala 2.11. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-12 19:16:58 +0000 |
| Commit: 8f1bc79, github.com/apache/spark/pull/5002 |
| |
| [SPARK-6275][Documentation] Missing toDF() function in docs/sql-programming-guide.md |
| zzcclp <xm_zzc@sina.com> |
| 2015-03-12 15:07:15 +0000 |
| Commit: 304366c, github.com/apache/spark/pull/4977 |
| |
| [docs] [SPARK-6306] Readme points to dead link |
| Theodore Vasiloudis <tvas@sics.se> |
| 2015-03-12 15:01:33 +0000 |
| Commit: 4e47d54, github.com/apache/spark/pull/4999 |
| |
| [SPARK-5814][MLLIB][GRAPHX] Remove JBLAS from runtime |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-12 01:39:04 -0700 |
| Commit: 0cba802, github.com/apache/spark/pull/4699 |
| |
| [SPARK-6294] fix hang when call take() in JVM on PythonRDD |
| Davies Liu <davies@databricks.com> |
| 2015-03-12 01:34:38 -0700 |
| Commit: 712679a, github.com/apache/spark/pull/4987 |
| |
| [SPARK-6296] [SQL] Added equals to Column |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-12 00:55:26 -0700 |
| Commit: 25b71d8, github.com/apache/spark/pull/4988 |
| |
| BUILD: Adding more known contributor names |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-11 22:24:08 -0700 |
| Commit: e921a66 |
| |
| [SPARK-6128][Streaming][Documentation] Updates to Spark Streaming Programming Guide |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-11 18:48:21 -0700 |
| Commit: cd3b68d, github.com/apache/spark/pull/4956 |
| |
| [SPARK-6274][Streaming][Examples] Added examples streaming + sql examples. |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-11 11:19:51 -0700 |
| Commit: 51a79a7, github.com/apache/spark/pull/4975 |
| |
| SPARK-6245 [SQL] jsonRDD() of empty RDD results in exception |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-11 14:09:09 +0000 |
| Commit: 55c4831, github.com/apache/spark/pull/4971 |
| |
| SPARK-3642. Document the nuances of shared variables. |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-03-11 13:22:05 +0000 |
| Commit: 2d87a41, github.com/apache/spark/pull/2490 |
| |
| [SPARK-4423] Improve foreach() documentation to avoid confusion between local- and cluster-mode behavior |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-03-11 13:20:15 +0000 |
| Commit: 548643a, github.com/apache/spark/pull/4696 |
| |
| [SPARK-6228] [network] Move SASL classes from network/shuffle to network... |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-11 13:16:22 +0000 |
| Commit: 5b335bd, github.com/apache/spark/pull/4953 |
| |
| SPARK-6225 [CORE] [SQL] [STREAMING] Resolve most build warnings, 1.3.0 edition |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-11 13:15:19 +0000 |
| Commit: 6e94c4e, github.com/apache/spark/pull/4950 |
| |
| [SPARK-6279][Streaming] In KafkaRDD.scala, missing interpolation flag "s" in logging string |
| zzcclp <xm_zzc@sina.com> |
| 2015-03-11 12:22:24 +0000 |
| Commit: ec30c17, github.com/apache/spark/pull/4979 |
| |
| [SQL][Minor] fix typo in comments |
| Hongbo Liu <liuhb86@gmail.com> |
| 2015-03-11 12:18:24 +0000 |
| Commit: 40f4979, github.com/apache/spark/pull/4976 |
| |
| [MINOR] [DOCS] Fix map -> mapToPair in Streaming Java example |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-11 12:16:32 +0000 |
| Commit: 35b2564, github.com/apache/spark/pull/4967 |
| |
| [SPARK-4924] Add a library for launching Spark jobs programmatically. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-11 01:03:01 -0700 |
| Commit: 517975d, github.com/apache/spark/pull/3916 |
| |
| [SPARK-5986][MLLib] Add save/load for k-means |
| Xusen Yin <yinxusen@gmail.com> |
| 2015-03-11 00:24:55 -0700 |
| Commit: 2d4e00e, github.com/apache/spark/pull/4951 |
| |
| [SPARK-5183][SQL] Update SQL Docs with JDBC and Migration Guide |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-10 18:13:09 -0700 |
| Commit: 2672374, github.com/apache/spark/pull/4958 |
| |
| Minor doc: Remove the extra blank line in data types javadoc. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-10 17:25:04 -0700 |
| Commit: 74fb433, github.com/apache/spark/pull/4955 |
| |
| [SPARK-6186] [EC2] Make Tachyon version configurable in EC2 deployment script |
| cheng chang <myairia@gmail.com> |
| 2015-03-10 11:02:12 +0000 |
| Commit: 7c7d2d5, github.com/apache/spark/pull/4901 |
| |
| [SPARK-6191] [EC2] Generalize ability to download libs |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-03-10 10:58:31 +0000 |
| Commit: d14df06, github.com/apache/spark/pull/4919 |
| |
| [SPARK-6087][CORE] Provide actionable exception if Kryo buffer is not large enough |
| Lev Khomich <levkhomich@gmail.com> |
| 2015-03-10 10:55:42 +0000 |
| Commit: c4c4b07, github.com/apache/spark/pull/4947 |
| |
| [SPARK-6177][MLlib]Add note in LDA example to remind possible coalesce |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-03-10 10:51:44 +0000 |
| Commit: 9a0272f, github.com/apache/spark/pull/4899 |
| |
| [SPARK-6194] [SPARK-677] [PySpark] fix memory leak in collect() |
| Davies Liu <davies@databricks.com> |
| 2015-03-09 16:24:06 -0700 |
| Commit: 8767565, github.com/apache/spark/pull/4923 |
| |
| [SPARK-5310][Doc] Update SQL Programming Guide to include DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-09 16:16:16 -0700 |
| Commit: 3cac199, github.com/apache/spark/pull/4954 |
| |
| [Docs] Replace references to SchemaRDD with DataFrame |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-09 13:29:19 -0700 |
| Commit: 70f8814, github.com/apache/spark/pull/4952 |
| |
| [EC2] [SPARK-6188] Instance types can be mislabeled when re-starting cluster with default arguments |
| Theodore Vasiloudis <thvasilo@users.noreply.github.com>, Theodore Vasiloudis <tvas@sics.se> |
| 2015-03-09 14:16:07 +0000 |
| Commit: f7c7992, github.com/apache/spark/pull/4916 |
| |
| [GraphX] Improve LiveJournalPageRank example |
| Jacky Li <jacky.likun@huawei.com> |
| 2015-03-08 19:47:35 +0000 |
| Commit: 55b1b32, github.com/apache/spark/pull/4917 |
| |
| SPARK-6205 [CORE] UISeleniumSuite fails for Hadoop 2.x test with NoClassDefFoundError |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-08 14:09:40 +0000 |
| Commit: f16b7b0, github.com/apache/spark/pull/4933 |
| |
| [SPARK-6193] [EC2] Push group filter up to EC2 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-03-08 14:01:26 +0000 |
| Commit: 52ed7da, github.com/apache/spark/pull/4922 |
| |
| [SPARK-5641] [EC2] Allow spark_ec2.py to copy arbitrary files to cluster |
| Florian Verhein <florian.verhein@gmail.com> |
| 2015-03-07 12:56:59 +0000 |
| Commit: 334c5bd, github.com/apache/spark/pull/4583 |
| |
| [Minor]fix the wrong description |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-03-07 12:35:26 +0000 |
| Commit: 729c05b, github.com/apache/spark/pull/4936 |
| |
| [EC2] Reorder print statements on termination |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-03-07 12:33:41 +0000 |
| Commit: 2646794, github.com/apache/spark/pull/4932 |
| |
| Fix python typo (+ Scala, Java typos) |
| RobertZK <technoguyrob@gmail.com>, Robert Krzyzanowski <technoguyrob@gmail.com> |
| 2015-03-07 00:16:50 +0000 |
| Commit: 48a723c, github.com/apache/spark/pull/4840 |
| |
| [SPARK-6178][Shuffle] Removed unused imports |
| Vinod K C <vinod.kc@huawei.com> |
| 2015-03-06 14:43:09 +0000 |
| Commit: dba0b2e, github.com/apache/spark/pull/4900 |
| |
| [Minor] Resolve sbt warnings: postfix operator second should be enabled |
| GuoQiang Li <witgo@qq.com> |
| 2015-03-06 13:20:20 +0000 |
| Commit: 05cb6b3, github.com/apache/spark/pull/4908 |
| |
| [core] [minor] Don't pollute source directory when running UtilsSuite. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-06 09:43:24 +0000 |
| Commit: cd7594c, github.com/apache/spark/pull/4921 |
| |
| [CORE, DEPLOY][minor] align arguments order with docs of worker |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-03-06 09:34:07 +0000 |
| Commit: d8b3da9, github.com/apache/spark/pull/4924 |
| |
| [SQL] Make Strategies a public developer API |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-05 14:50:25 -0800 |
| Commit: eb48fd6, github.com/apache/spark/pull/4920 |
| |
| [SPARK-6163][SQL] jsonFile should be backed by the data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-05 14:49:44 -0800 |
| Commit: 1b4bb25, github.com/apache/spark/pull/4896 |
| |
| [SPARK-6145][SQL] fix ORDER BY on nested fields |
| Wenchen Fan <cloud0fan@outlook.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-05 14:49:01 -0800 |
| Commit: 5873c71, github.com/apache/spark/pull/4918 |
| |
| [SPARK-6175] Fix standalone executor log links when ephemeral ports or SPARK_PUBLIC_DNS are used |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-05 12:04:00 -0800 |
| Commit: 424a86a, github.com/apache/spark/pull/4903 |
| |
| [SPARK-6090][MLLIB] add a basic BinaryClassificationMetrics to PySpark/MLlib |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-05 11:50:09 -0800 |
| Commit: 0bfacd5, github.com/apache/spark/pull/4863 |
| |
| SPARK-6182 [BUILD] spark-parent pom needs to be published for both 2.10 and 2.11 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-05 11:31:48 -0800 |
| Commit: c9cfba0, github.com/apache/spark/pull/4912 |
| |
| [SPARK-6153] [SQL] promote guava dep for hive-thriftserver |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-05 16:35:17 +0800 |
| Commit: e06c7df, github.com/apache/spark/pull/4884 |
| |
| SPARK-5143 [BUILD] [WIP] spark-network-yarn 2.11 depends on spark-network-shuffle 2.10 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-04 21:00:51 -0800 |
| Commit: 7ac072f, github.com/apache/spark/pull/4876 |
| |
| [SPARK-6149] [SQL] [Build] Excludes Guava 15 referenced by jackson-module-scala_2.10 |
| Cheng Lian <lian@databricks.com> |
| 2015-03-04 20:52:58 -0800 |
| Commit: 1aa90e3, github.com/apache/spark/pull/4890 |
| |
| [SPARK-6144] [core] Fix addFile when source files are on "hdfs:" |
| Marcelo Vanzin <vanzin@cloudera.com>, trystanleftwich <trystan@atscale.com> |
| 2015-03-04 12:58:39 -0800 |
| Commit: 3a35a0d, github.com/apache/spark/pull/4894 |
| |
| [SPARK-6107][CORE] Display inprogress application information for event log history for standalone mode |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-03-04 12:28:27 +0000 |
| Commit: f6773ed, github.com/apache/spark/pull/4848 |
| |
| [SPARK-6134][SQL] Fix wrong datatype for casting FloatType and default LongType value in defaultPrimitive |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-04 20:23:43 +0800 |
| Commit: aef8a84, github.com/apache/spark/pull/4870 |
| |
| [SPARK-6136] [SQL] Removed JDBC integration tests which depends on docker-client |
| Cheng Lian <lian@databricks.com> |
| 2015-03-04 19:39:02 +0800 |
| Commit: 76b472f, github.com/apache/spark/pull/4872 |
| |
| [SPARK-3355][Core]: Allow running maven tests in run-tests |
| Brennon York <brennon.york@capitalone.com> |
| 2015-03-04 11:02:33 +0000 |
| Commit: 418f38d, github.com/apache/spark/pull/4734 |
| |
| SPARK-6085 Increase default value for memory overhead |
| tedyu <yuzhihong@gmail.com> |
| 2015-03-04 11:00:52 +0000 |
| Commit: 8d3e241, github.com/apache/spark/pull/4836 |
| |
| [SPARK-6141][MLlib] Upgrade Breeze from 0.10 to 0.11 to fix convergence bug |
| Xiangrui Meng <meng@databricks.com>, DB Tsai <dbtsai@alpinenow.com>, DB Tsai <dbtsai@dbtsai.com> |
| 2015-03-03 23:52:02 -0800 |
| Commit: 76e20a0, github.com/apache/spark/pull/4879 |
| |
| [SPARK-6132][HOTFIX] ContextCleaner InterruptedException should be quiet |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 20:49:45 -0800 |
| Commit: d334bfb, github.com/apache/spark/pull/4882 |
| |
| [SPARK-5949] HighlyCompressedMapStatus needs more classes registered w/ kryo |
| Imran Rashid <irashid@cloudera.com> |
| 2015-03-03 15:33:19 -0800 |
| Commit: 1f1fccc, github.com/apache/spark/pull/4877 |
| |
| [SPARK-6133] Make sc.stop() idempotent |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 15:09:57 -0800 |
| Commit: 6c20f35, github.com/apache/spark/pull/4871 |
| |
| [SPARK-6132] ContextCleaner race condition across SparkContexts |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 13:44:05 -0800 |
| Commit: fe63e82, github.com/apache/spark/pull/4869 |
| |
| SPARK-1911 [DOCS] Warn users if their assembly jars are not built with Java 6 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-03 13:40:11 -0800 |
| Commit: e750a6b, github.com/apache/spark/pull/4874 |
| |
| Revert "[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure deleting the temp file" |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 13:03:52 -0800 |
| Commit: 9af0017 |
| |
| [SPARK-6138][CORE][minor] enhance the `toArray` method in `SizeTrackingVector` |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-03-03 12:12:23 +0000 |
| Commit: e359794, github.com/apache/spark/pull/4825 |
| |
| [SPARK-6118] making package name of deploy.worker.CommandUtils and deploy.CommandUtilsSuite consistent |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-03 10:32:57 +0000 |
| Commit: 975643c, github.com/apache/spark/pull/4856 |
| |
| BUILD: Minor tweaks to internal build scripts |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-03 00:38:12 -0800 |
| Commit: 0c9a8ea |
| |
| HOTFIX: Bump HBase version in MapR profiles. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-03 01:38:07 -0800 |
| Commit: 165ff36 |
| |
| [SPARK-5537][MLlib][Docs] Add user guide for multinomial logistic regression |
| DB Tsai <dbtsai@alpinenow.com> |
| 2015-03-02 22:37:12 -0800 |
| Commit: b196056, github.com/apache/spark/pull/4866 |
| |
| [SPARK-6120] [mllib] Warnings about memory in tree, ensemble model save |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-03-02 22:33:51 -0800 |
| Commit: c2fe3a6, github.com/apache/spark/pull/4864 |
| |
| [SPARK-6097][MLLIB] Support tree model save/load in PySpark/MLlib |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-02 22:27:01 -0800 |
| Commit: 7e53a79, github.com/apache/spark/pull/4854 |
| |
| [SPARK-5310][SQL] Fixes to Docs and Datasources API |
| Reynold Xin <rxin@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-02 22:14:08 -0800 |
| Commit: 54d1968, github.com/apache/spark/pull/4868 |
| |
| [SPARK-5950][SQL]Insert array into a metastore table saved as parquet should work when using datasource api |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 19:31:55 -0800 |
| Commit: 1259994, github.com/apache/spark/pull/4826 |
| |
| [SPARK-6127][Streaming][Docs] Add Kafka to Python api docs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-02 18:40:46 -0800 |
| Commit: 9eb22ec, github.com/apache/spark/pull/4860 |
| |
| [SPARK-5537] Add user guide for multinomial logistic regression |
| Xiangrui Meng <meng@databricks.com>, DB Tsai <dbtsai@alpinenow.com> |
| 2015-03-02 18:10:50 -0800 |
| Commit: 9d6c5ae, github.com/apache/spark/pull/4801 |
| |
| [SPARK-6121][SQL][MLLIB] simpleString for UDT |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-02 17:14:34 -0800 |
| Commit: 2db6a85, github.com/apache/spark/pull/4858 |
| |
| [SPARK-4777][CORE] Some block memory after unrollSafely not count into used memory(memoryStore.entrys or unrollMemory) |
| hushan[胡珊] <hushan@xiaomi.com> |
| 2015-03-02 16:53:54 -0800 |
| Commit: e3a88d1, github.com/apache/spark/pull/3629 |
| |
| [SPARK-6048] SparkConf should not translate deprecated configs on set |
| Andrew Or <andrew@databricks.com> |
| 2015-03-02 16:36:42 -0800 |
| Commit: 258d154, github.com/apache/spark/pull/4799 |
| |
| [SPARK-6066] Make event log format easier to parse |
| Andrew Or <andrew@databricks.com> |
| 2015-03-02 16:34:32 -0800 |
| Commit: 6776cb3, github.com/apache/spark/pull/4821 |
| |
| [SPARK-6082] [SQL] Provides better error message for malformed rows when caching tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-02 16:18:00 -0800 |
| Commit: 1a49496, github.com/apache/spark/pull/4842 |
| |
| [SPARK-6114][SQL] Avoid metastore conversions before plan is resolved |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-02 16:10:54 -0800 |
| Commit: 8223ce6, github.com/apache/spark/pull/4855 |
| |
| [SPARK-5522] Accelerate the History Server start |
| guliangliang <guliangliang@qiyi.com> |
| 2015-03-02 15:33:23 -0800 |
| Commit: 26c1c56, github.com/apache/spark/pull/4525 |
| |
| [SPARK-6050] [yarn] Relax matching of vcore count in received containers. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-02 16:41:43 -0600 |
| Commit: 6b348d9, github.com/apache/spark/pull/4818 |
| |
| [SPARK-6040][SQL] Fix the percent bug in tablesample |
| q00251598 <qiyadong@huawei.com> |
| 2015-03-02 13:16:29 -0800 |
| Commit: 582e5a2, github.com/apache/spark/pull/4789 |
| |
| [Minor] Fix doc typo for describing primitiveTerm effectiveness condition |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-02 13:11:17 -0800 |
| Commit: 3f9def8, github.com/apache/spark/pull/4762 |
| |
| SPARK-5390 [DOCS] Encourage users to post on Stack Overflow in Community Docs |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-02 21:10:08 +0000 |
| Commit: 0b472f6, github.com/apache/spark/pull/4843 |
| |
| [DOCS] Refactored Dataframe join comment to use correct parameter ordering |
| Paul Power <paul.power@peerside.com> |
| 2015-03-02 13:08:47 -0800 |
| Commit: d9a8bae, github.com/apache/spark/pull/4847 |
| |
| [SPARK-6080] [PySpark] correct LogisticRegressionWithLBFGS regType parameter for pyspark |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-02 10:17:24 -0800 |
| Commit: af2effd, github.com/apache/spark/pull/4831 |
| |
| aggregateMessages example in graphX doc |
| DEBORAH SIEGEL <deborahsiegel@DEBORAHs-MacBook-Pro.local> |
| 2015-03-02 10:15:32 -0800 |
| Commit: e7d8ae4, github.com/apache/spark/pull/4853 |
| |
| [SPARK-5741][SQL] Support the path contains comma in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-03-02 10:13:11 -0800 |
| Commit: 9ce12aa, github.com/apache/spark/pull/4532 |
| |
| [SPARK-6111] Fixed usage string in documentation. |
| Kenneth Myers <myerske@us.ibm.com> |
| 2015-03-02 17:25:24 +0000 |
| Commit: 95ac68b, github.com/apache/spark/pull/4852 |
| |
| [SPARK-6052][SQL]In JSON schema inference, we should always set containsNull of an ArrayType to true |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 23:18:07 +0800 |
| Commit: 3efd8bb, github.com/apache/spark/pull/4806 |
| |
| [SPARK-6073][SQL] Need to refresh metastore cache after append data in CreateMetastoreDataSourceAsSelect |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 22:42:18 +0800 |
| Commit: 39a54b4, github.com/apache/spark/pull/4824 |
| |
| [SPARK-6103][Graphx]remove unused class to import in EdgeRDDImpl |
| Lianhui Wang <lianhuiwang09@gmail.com> |
| 2015-03-02 09:06:56 +0000 |
| Commit: 49c7a8f, github.com/apache/spark/pull/4846 |
| |
| SPARK-3357 [CORE] Internal log messages should be set at DEBUG level instead of INFO |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-02 08:51:03 +0000 |
| Commit: 948c239, github.com/apache/spark/pull/4838 |
| |
| [Streaming][Minor]Fix some error docs in streaming examples |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-03-02 08:49:19 +0000 |
| Commit: d8fb40e, github.com/apache/spark/pull/4837 |
| |
| [SPARK-6083] [MLLib] [DOC] Make Python API example consistent in NaiveBayes |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-01 16:28:15 -0800 |
| Commit: 3f00bb3, github.com/apache/spark/pull/4834 |
| |
| [SPARK-6053][MLLIB] support save/load in PySpark's ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-01 16:26:57 -0800 |
| Commit: aedbbaa, github.com/apache/spark/pull/4811 |
| |
| [SPARK-6074] [sql] Package pyspark sql bindings. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-01 11:05:10 +0000 |
| Commit: fd8d283, github.com/apache/spark/pull/4822 |
| |
| [SPARK-6075] Fix bug that caused lost accumulator updates: do not store WeakReferences in localAccums map |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-28 22:51:01 -0800 |
| Commit: 2df5f1f, github.com/apache/spark/pull/4835 |
| |
| SPARK-5984: Fix TimSort bug that causes ArrayOutOfBoundsException |
| Evan Yu <ehotou@gmail.com> |
| 2015-02-28 18:55:34 -0800 |
| Commit: 643300a, github.com/apache/spark/pull/4804 |
| |
| SPARK-1965 [WEBUI] Spark UI throws NPE on trying to load the app page for non-existent app |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-28 15:34:08 +0000 |
| Commit: 86fcdae, github.com/apache/spark/pull/4777 |
| |
| SPARK-5983 [WEBUI] Don't respond to HTTP TRACE in HTTP-based UIs |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-28 15:23:59 +0000 |
| Commit: f91298e, github.com/apache/spark/pull/4765 |
| |
| SPARK-6063 MLlib doesn't pass mvn scalastyle check due to UTF chars in LDAModel.scala |
| Michael Griffiths <msjgriffiths@gmail.com>, Griffiths, Michael (NYC-RPM) <michael.griffiths@reprisemedia.com> |
| 2015-02-28 14:47:39 +0000 |
| Commit: b36b1bc, github.com/apache/spark/pull/4815 |
| |
| [SPARK-5775] [SQL] BugFix: GenericRow cannot be cast to SpecificMutableRow when nested data and partitioned table |
| Cheng Lian <lian@databricks.com>, Cheng Lian <liancheng@users.noreply.github.com>, Yin Huai <yhuai@databricks.com> |
| 2015-02-28 21:15:43 +0800 |
| Commit: e6003f0, github.com/apache/spark/pull/4792 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-27 23:10:09 -0800 |
| Commit: 9168259, github.com/apache/spark/pull/1128 |
| |
| [SPARK-5979][SPARK-6032] Smaller safer --packages fix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-27 22:59:35 -0800 |
| Commit: 6d8e5fb, github.com/apache/spark/pull/4802 |
| |
| [SPARK-6070] [yarn] Remove unneeded classes from shuffle service jar. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-27 22:44:11 -0800 |
| Commit: dba08d1, github.com/apache/spark/pull/4820 |
| |
| [SPARK-6055] [PySpark] fix incorrect __eq__ of DataType |
| Davies Liu <davies@databricks.com> |
| 2015-02-27 20:07:17 -0800 |
| Commit: e0e64ba, github.com/apache/spark/pull/4808 |
| |
| [SPARK-5751] [SQL] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites |
| Cheng Lian <lian@databricks.com> |
| 2015-02-28 08:41:49 +0800 |
| Commit: 8c468a6, github.com/apache/spark/pull/4758 |
| |
| [Streaming][Minor] Remove useless type signature of Java Kafka direct stream API |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-02-27 13:01:42 -0800 |
| Commit: 5f7f3b9, github.com/apache/spark/pull/4817 |
| |
| [SPARK-4587] [mllib] [docs] Fixed save,load calls in ML guide examples |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-27 13:00:36 -0800 |
| Commit: d17cb2b, github.com/apache/spark/pull/4816 |
| |
| [SPARK-6059][Yarn] Add volatile to ApplicationMaster's reporterThread and allocator |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-27 13:33:39 +0000 |
| Commit: 57566d0, github.com/apache/spark/pull/4814 |
| |
| [SPARK-6058][Yarn] Log the user class exception in ApplicationMaster |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-27 13:31:46 +0000 |
| Commit: e747e98, github.com/apache/spark/pull/4813 |
| |
| [SPARK-6036][CORE] avoid race condition between eventlogListener and akka actor system |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-02-26 23:11:43 -0800 |
| Commit: 8cd1692, github.com/apache/spark/pull/4785 |
| |
| fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode |
| 许鹏 <peng.xu@fraudmetrix.cn> |
| 2015-02-26 23:05:56 -0800 |
| Commit: 0375a41, github.com/apache/spark/pull/4803 |
| |
| [SPARK-6046] Privatize SparkConf.translateConfKey |
| Andrew Or <andrew@databricks.com> |
| 2015-02-26 22:39:46 -0800 |
| Commit: 7c99a01, github.com/apache/spark/pull/4797 |
| |
| SPARK-2168 [Spark core] Use relative URIs for the app links in the History Server. |
| Lukasz Jastrzebski <lukasz.jastrzebski@gmail.com> |
| 2015-02-26 22:38:06 -0800 |
| Commit: 4a8a0a8, github.com/apache/spark/pull/4778 |
| |
| [SPARK-5495][UI] Add app and driver kill function in master web UI |
| jerryshao <saisai.shao@intel.com> |
| 2015-02-26 22:36:48 -0800 |
| Commit: 67595eb, github.com/apache/spark/pull/4288 |
| |
| [SPARK-5771][UI][hotfix] Change Requested Cores into * if default cores is not set |
| jerryshao <saisai.shao@intel.com> |
| 2015-02-26 22:35:43 -0800 |
| Commit: 12135e9, github.com/apache/spark/pull/4800 |
| |
| [SPARK-6024][SQL] When a data source table has too many columns, its schema cannot be stored in metastore. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-26 20:46:05 -0800 |
| Commit: 5e5ad65, github.com/apache/spark/pull/4795 |
| |
| [SPARK-6037][SQL] Avoiding duplicate Parquet schema merging |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-27 11:06:47 +0800 |
| Commit: 4ad5153, github.com/apache/spark/pull/4786 |
| |
| [SPARK-5529][CORE]Add expireDeadHosts in HeartbeatReceiver |
| Hong Shen <hongshen@tencent.com> |
| 2015-02-26 18:43:23 -0800 |
| Commit: 18f2098, github.com/apache/spark/pull/4363 |
| |
| SPARK-4579 [WEBUI] Scheduling Delay appears negative |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 17:35:09 -0800 |
| Commit: fbc4694, github.com/apache/spark/pull/4796 |
| |
| SPARK-6045 RecordWriter should be checked against null in PairRDDFunctio... |
| tedyu <yuzhihong@gmail.com> |
| 2015-02-26 23:26:07 +0000 |
| Commit: e60ad2f, github.com/apache/spark/pull/4794 |
| |
| [SPARK-5951][YARN] Remove unreachable driver memory properties in yarn client mode |
| mohit.goyal <mohit.goyal@guavus.com> |
| 2015-02-26 14:27:47 -0800 |
| Commit: b38dec2, github.com/apache/spark/pull/4730 |
| |
| Add a note for context termination for History server on Yarn |
| moussa taifi <moutai10@gmail.com> |
| 2015-02-26 14:19:43 -0800 |
| Commit: c871e2d, github.com/apache/spark/pull/4721 |
| |
| SPARK-4300 [CORE] Race condition during SparkWorker shutdown |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 14:08:56 -0800 |
| Commit: 3fb53c0, github.com/apache/spark/pull/4787 |
| |
| [SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-02-26 13:53:49 -0800 |
| Commit: 5f3238b, github.com/apache/spark/pull/4773 |
| |
| [SPARK-6027][SPARK-5546] Fixed --jar and --packages not working for KafkaUtils and improved error message |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-26 13:46:07 -0800 |
| Commit: aa63f63, github.com/apache/spark/pull/4779 |
| |
| [SPARK-3562]Periodic cleanup event logs |
| xukun 00228947 <xukun.xu@huawei.com> |
| 2015-02-26 13:24:00 -0800 |
| Commit: 8942b52, github.com/apache/spark/pull/4214 |
| |
| Modify default value description for spark.scheduler.minRegisteredResourcesRatio on docs. |
| Li Zhihui <zhihui.li@intel.com> |
| 2015-02-26 13:07:07 -0800 |
| Commit: 10094a5, github.com/apache/spark/pull/4781 |
| |
| SPARK-4704 [CORE] SparkSubmitDriverBootstrap doesn't flush output |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 12:56:54 -0800 |
| Commit: cd5c8d7, github.com/apache/spark/pull/4788 |
| |
| [SPARK-5363] Fix bug in PythonRDD: remove() inside iterator is not safe |
| Davies Liu <davies@databricks.com> |
| 2015-02-26 11:54:17 -0800 |
| Commit: 7fa960e, github.com/apache/spark/pull/4776 |
| |
| [SPARK-6004][MLlib] Pick the best model when training GradientBoostedTrees with validation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-26 10:51:47 -0800 |
| Commit: cfff397, github.com/apache/spark/pull/4763 |
| |
| [SPARK-6007][SQL] Add numRows param in DataFrame.show() |
| Jacky Li <jacky.likun@huawei.com> |
| 2015-02-26 10:40:58 -0800 |
| Commit: 2358657, github.com/apache/spark/pull/4767 |
| |
| [SPARK-5801] [core] Avoid creating nested directories. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-26 17:35:03 +0000 |
| Commit: df3d559, github.com/apache/spark/pull/4747 |
| |
| [SPARK-6016][SQL] Cannot read the parquet table after overwriting the existing table when spark.sql.parquet.cacheMetadata=true |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-27 01:01:32 +0800 |
| Commit: 192e42a, github.com/apache/spark/pull/4775 |
| |
| [SPARK-6023][SQL] ParquetConversions fails to replace the destination MetastoreRelation of an InsertIntoTable node to ParquetRelation2 |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-26 22:39:49 +0800 |
| Commit: f02394d, github.com/apache/spark/pull/4782 |
| |
| [SPARK-5914] to run spark-submit requiring only user perm on windows |
| Judy Nash <judynash@microsoft.com> |
| 2015-02-26 11:14:37 +0000 |
| Commit: 51a6f90, github.com/apache/spark/pull/4742 |
| |
| [SPARK-5976][MLLIB] Add partitioner to factors returned by ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-25 23:43:29 -0800 |
| Commit: e43139f, github.com/apache/spark/pull/4748 |
| |
| [SPARK-5974] [SPARK-5980] [mllib] [python] [docs] Update ML guide with save/load, Python GBT |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-25 16:13:17 -0800 |
| Commit: d20559b, github.com/apache/spark/pull/4750 |
| |
| [SPARK-1182][Docs] Sort the configuration parameters in configuration.md |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-25 16:12:56 -0800 |
| Commit: 46a044a, github.com/apache/spark/pull/3863 |
| |
| [SPARK-5926] [SQL] make DataFrame.explain leverage queryExecution.logical |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-02-25 15:37:13 -0800 |
| Commit: 41e2e5a, github.com/apache/spark/pull/4707 |
| |
| [SPARK-5999][SQL] Remove duplicate Literal matching block |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-25 15:22:33 -0800 |
| Commit: 12dbf98, github.com/apache/spark/pull/4760 |
| |
| [SPARK-6010] [SQL] Merging compatible Parquet schemas before computing splits |
| Cheng Lian <lian@databricks.com> |
| 2015-02-25 15:15:22 -0800 |
| Commit: e0fdd46, github.com/apache/spark/pull/4768 |
| |
| [SPARK-5944] [PySpark] fix version in Python API docs |
| Davies Liu <davies@databricks.com> |
| 2015-02-25 15:13:34 -0800 |
| Commit: f3f4c87, github.com/apache/spark/pull/4731 |
| |
| [SPARK-5982] Remove incorrect Local Read Time Metric |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-25 14:55:24 -0800 |
| Commit: 838a480, github.com/apache/spark/pull/4749 |
| |
| [SPARK-1955][GraphX]: VertexRDD can incorrectly assume index sharing |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-25 14:11:12 -0800 |
| Commit: 9f603fc, github.com/apache/spark/pull/4705 |
| |
| [SPARK-5970][core] Register directory created in getOrCreateLocalRootDirs for automatic deletion. |
| Milan Straka <fox@ucw.cz> |
| 2015-02-25 21:33:34 +0000 |
| Commit: a777c65, github.com/apache/spark/pull/4759 |
| |
| SPARK-5930 [DOCS] Documented default of spark.shuffle.io.retryWait is confusing |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-25 12:20:44 -0800 |
| Commit: 7d8e6a2, github.com/apache/spark/pull/4769 |
| |
| [SPARK-5996][SQL] Fix specialized outbound conversions |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-25 10:13:40 -0800 |
| Commit: f84c799, github.com/apache/spark/pull/4757 |
| |
| [SPARK-5771] Number of Cores in Completed Applications of Standalone Master Web Page always be 0 if sc.stop() is called |
| guliangliang <guliangliang@qiyi.com> |
| 2015-02-25 14:48:02 +0000 |
| Commit: dd077ab, github.com/apache/spark/pull/4567 |
| |
| [GraphX] fixing 3 typos in the graphx programming guide |
| Benedikt Linse <benedikt.linse@gmail.com> |
| 2015-02-25 14:46:17 +0000 |
| Commit: 5b8480e, github.com/apache/spark/pull/4766 |
| |
| [SPARK-5666][streaming][MQTT streaming] some trivial fixes |
| prabs <prabsmails@gmail.com>, Prabeesh K <prabsmails@gmail.com> |
| 2015-02-25 14:37:35 +0000 |
| Commit: d51ed26, github.com/apache/spark/pull/4178 |
| |
| [SPARK-5994] [SQL] Python DataFrame documentation fixes |
| Davies Liu <davies@databricks.com> |
| 2015-02-24 20:51:55 -0800 |
| Commit: d641fbb, github.com/apache/spark/pull/4756 |
| |
| [SPARK-5286][SQL] SPARK-5286 followup |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-24 19:51:36 -0800 |
| Commit: 769e092, github.com/apache/spark/pull/4755 |
| |
| [SPARK-5993][Streaming][Build] Fix assembly jar location of kafka-assembly |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-24 19:10:37 -0800 |
| Commit: 922b43b, github.com/apache/spark/pull/4753 |
| |
| [SPARK-5985][SQL] DataFrame sortBy -> orderBy in Python. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-24 18:59:23 -0800 |
| Commit: fba11c2, github.com/apache/spark/pull/4752 |
| |
| [SPARK-5904][SQL] DataFrame Java API test suites. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-24 18:51:41 -0800 |
| Commit: 53a1ebf, github.com/apache/spark/pull/4751 |
| |
| [SPARK-5751] [SQL] [WIP] Revamped HiveThriftServer2Suite for robustness |
| Cheng Lian <lian@databricks.com> |
| 2015-02-25 08:34:55 +0800 |
| Commit: f816e73, github.com/apache/spark/pull/4720 |
| |
| [SPARK-5436] [MLlib] Validate GradientBoostedTrees using runWithValidation |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-24 15:13:22 -0800 |
| Commit: 2a0fe34, github.com/apache/spark/pull/4677 |
| |
| [SPARK-5973] [PySpark] fix zip with two RDDs with AutoBatchedSerializer |
| Davies Liu <davies@databricks.com> |
| 2015-02-24 14:50:00 -0800 |
| Commit: da505e5, github.com/apache/spark/pull/4745 |
| |
| [SPARK-5952][SQL] Lock when using hive metastore client |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 13:39:29 -0800 |
| Commit: a2b9137, github.com/apache/spark/pull/4746 |
| |
| [Spark-5708] Add Slf4jSink to Spark Metrics |
| Judy <judynash@microsoft.com>, judynash <judynash@microsoft.com> |
| 2015-02-24 20:50:16 +0000 |
| Commit: c5ba975, github.com/apache/spark/pull/4644 |
| |
| [MLLIB] Change x_i to y_i in Variance's user guide |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-24 11:38:59 -0800 |
| Commit: 105791e, github.com/apache/spark/pull/4740 |
| |
| [SPARK-5965] Standalone Worker UI displays {{USER_JAR}} |
| Andrew Or <andrew@databricks.com> |
| 2015-02-24 11:08:07 -0800 |
| Commit: 6d2caa5, github.com/apache/spark/pull/4739 |
| |
| [Spark-5967] [UI] Correctly clean JobProgressListener.stageIdToActiveJobIds |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-24 11:02:47 -0800 |
| Commit: 64d2c01, github.com/apache/spark/pull/4741 |
| |
| [SPARK-5532][SQL] Repartition should not use external rdd representation |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 10:52:18 -0800 |
| Commit: 2012366, github.com/apache/spark/pull/4738 |
| |
| [SPARK-5910][SQL] Support for as in selectExpr |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 10:49:51 -0800 |
| Commit: 0a59e45, github.com/apache/spark/pull/4736 |
| |
| [SPARK-5968] [SQL] Suppresses ParquetOutputCommitter WARN logs |
| Cheng Lian <lian@databricks.com> |
| 2015-02-24 10:45:38 -0800 |
| Commit: 8403331, github.com/apache/spark/pull/4744 |
| |
| [SPARK-5958][MLLIB][DOC] update block matrix user guide |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-23 22:08:44 -0800 |
| Commit: cf2e416, github.com/apache/spark/pull/4737 |
| |
| [SPARK-5873][SQL] Allow viewing of partially analyzed plans in queryExecution |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-23 17:34:54 -0800 |
| Commit: 1ed5708, github.com/apache/spark/pull/4684 |
| |
| [SPARK-5935][SQL] Accept MapType in the schema provided to a JSON dataset. |
| Yin Huai <yhuai@databricks.com>, Yin Huai <huai@cse.ohio-state.edu> |
| 2015-02-23 17:16:34 -0800 |
| Commit: 48376bf, github.com/apache/spark/pull/4710 |
| |
| [SPARK-5912] [docs] [mllib] Small fixes to ChiSqSelector docs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-23 16:15:57 -0800 |
| Commit: 59536cc, github.com/apache/spark/pull/4732 |
| |
| [MLLIB] SPARK-5912 Programming guide for feature selection |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-02-23 12:09:40 -0800 |
| Commit: 28ccf5e, github.com/apache/spark/pull/4709 |
| |
| [SPARK-5939][MLLib] make FPGrowth example app take parameters |
| Jacky Li <jacky.likun@huawei.com> |
| 2015-02-23 08:47:28 -0800 |
| Commit: 651a1c0, github.com/apache/spark/pull/4714 |
| |
| [SPARK-5724] fix the misconfiguration in AkkaUtils |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-02-23 11:29:25 +0000 |
| Commit: 242d495, github.com/apache/spark/pull/4512 |
| |
| [SPARK-5943][Streaming] Update the test to use new API to reduce the warning |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-02-23 11:27:27 +0000 |
| Commit: 757b14b, github.com/apache/spark/pull/4722 |
| |
| [EXAMPLES] fix typo. |
| Makoto Fukuhara <fukuo33@gmail.com> |
| 2015-02-23 09:24:33 +0000 |
| Commit: 9348767, github.com/apache/spark/pull/4724 |
| |
| [SPARK-3885] Provide mechanism to remove accumulators once they are no longer used |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-02-22 22:43:04 -0800 |
| Commit: 95cd643, github.com/apache/spark/pull/4021 |
| |
| [SPARK-911] allow efficient queries for a range if RDD is partitioned wi... |
| Aaron Josephs <ajoseph4@binghamton.edu> |
| 2015-02-22 22:09:06 -0800 |
| Commit: e4f9d03, github.com/apache/spark/pull/1381 |
| |
| [DataFrame] [Typo] Fix the typo |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-22 08:56:30 +0000 |
| Commit: 275b1be, github.com/apache/spark/pull/4717 |
| |
| [DOCS] Fix typo in API for custom InputFormats based on the "new" MapReduce API |
| Alexander <abezzubov@nflabs.com> |
| 2015-02-22 08:53:05 +0000 |
| Commit: a7f9039, github.com/apache/spark/pull/4718 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-21 23:07:30 -0800 |
| Commit: 46462ff, github.com/apache/spark/pull/3490 |
| |
| [SPARK-5860][CORE] JdbcRDD: overflow on large range with high number of partitions |
| Evan Yu <ehotou@gmail.com> |
| 2015-02-21 20:40:21 +0000 |
| Commit: 7683982, github.com/apache/spark/pull/4701 |
| |
| [SPARK-5937][YARN] Fix ClientSuite to set YARN mode, so that the correct class is used in t... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-21 10:01:01 -0800 |
| Commit: 7138816, github.com/apache/spark/pull/4711 |
| |
| SPARK-5841 [CORE] [HOTFIX 2] Memory leak in DiskBlockManager |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-02-21 09:59:28 -0800 |
| Commit: d3cbd38, github.com/apache/spark/pull/4690 |
| |
| [MLlib] fix typo |
| Jacky Li <jackylk@users.noreply.github.com> |
| 2015-02-21 13:00:16 +0000 |
| Commit: e155324, github.com/apache/spark/pull/4713 |
| |
| [SPARK-5898] [SPARK-5896] [SQL] [PySpark] create DataFrame from pandas and tuple/list |
| Davies Liu <davies@databricks.com> |
| 2015-02-20 15:35:05 -0800 |
| Commit: 5b0a42c, github.com/apache/spark/pull/4679 |
| |
| [SPARK-5867] [SPARK-5892] [doc] [ml] [mllib] Doc cleanups for 1.3 release |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-20 02:31:32 -0800 |
| Commit: 4a17eed, github.com/apache/spark/pull/4675 |
| |
| SPARK-5744 [CORE] Take 2. RDD.isEmpty / take fails for (empty) RDD of Nothing |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-20 10:21:39 +0000 |
| Commit: d3dfebe, github.com/apache/spark/pull/4698 |
| |
| [SPARK-5909][SQL] Add a clearCache command to Spark SQL's cache manager |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-20 16:20:02 +0800 |
| Commit: 70bfb5c, github.com/apache/spark/pull/4694 |
| |
| [SPARK-4808] Removing minimum number of elements read before spill check |
| mcheah <mcheah@palantir.com> |
| 2015-02-19 18:09:22 -0800 |
| Commit: 3be92cd, github.com/apache/spark/pull/4420 |
| |
| [SPARK-5900][MLLIB] make PIC and FPGrowth Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-19 18:06:16 -0800 |
| Commit: 0cfd2ce, github.com/apache/spark/pull/4695 |
| |
| SPARK-5570: No docs stating that `new SparkConf().set("spark.driver.memory", ...) will not work |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-02-19 15:50:58 -0800 |
| Commit: 6bddc40, github.com/apache/spark/pull/4665 |
| |
| SPARK-4682 [CORE] Consolidate various 'Clock' classes |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-19 15:35:23 -0800 |
| Commit: 34b7c35, github.com/apache/spark/pull/4514 |
| |
| [Spark-5889] Remove pid file after stopping service. |
| Zhan Zhang <zhazhan@gmail.com> |
| 2015-02-19 23:13:02 +0000 |
| Commit: ad6b169, github.com/apache/spark/pull/4676 |
| |
| [SPARK-5902] [ml] Made PipelineStage.transformSchema public instead of private to ml |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-19 12:46:27 -0800 |
| Commit: a5fed34, github.com/apache/spark/pull/4682 |
| |
| [SPARK-5904][SQL] DataFrame API fixes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-19 12:09:44 -0800 |
| Commit: 8ca3418, github.com/apache/spark/pull/4686 |
| |
| [SPARK-5825] [Spark Submit] Remove the double checking instance name when stopping the service |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-19 12:07:51 -0800 |
| Commit: 94cdb05, github.com/apache/spark/pull/4611 |
| |
| [SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure deleting the temp file |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-19 18:37:31 +0000 |
| Commit: 90095bf, github.com/apache/spark/pull/4219 |
| |
| [SPARK-5816] Add huge compatibility warning in DriverWrapper |
| Andrew Or <andrew@databricks.com> |
| 2015-02-19 09:56:25 -0800 |
| Commit: 38e624a, github.com/apache/spark/pull/4687 |
| |
| SPARK-5548: Fix for AkkaUtilsSuite failure - attempt 2 |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-19 09:53:36 -0800 |
| Commit: fb87f44, github.com/apache/spark/pull/4653 |
| |
| [SPARK-5846] Correctly set job description and pool for SQL jobs |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-19 09:49:34 +0800 |
| Commit: e945aa6, github.com/apache/spark/pull/4630 |
| |
| [SPARK-5879][MLLIB] update PIC user guide and add a Java example |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-18 16:29:32 -0800 |
| Commit: d12d2ad, github.com/apache/spark/pull/4680 |
| |
| [SPARK-5722] [SQL] [PySpark] infer int as LongType |
| Davies Liu <davies@databricks.com> |
| 2015-02-18 14:17:04 -0800 |
| Commit: aa8f10e, github.com/apache/spark/pull/4666 |
| |
| [SPARK-5840][SQL] HiveContext cannot be serialized due to tuple extraction |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-18 14:02:32 -0800 |
| Commit: f0e3b71, github.com/apache/spark/pull/4628 |
| |
| [SPARK-5507] Added documentation for BlockMatrix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-18 10:11:08 -0800 |
| Commit: a8eb92d, github.com/apache/spark/pull/4664 |
| |
| [SPARK-5519][MLLIB] add user guide with example code for fp-growth |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-18 10:09:56 -0800 |
| Commit: 85e9d09, github.com/apache/spark/pull/4661 |
| |
| SPARK-5669 [BUILD] [HOTFIX] Spark assembly includes incompatibly licensed libgfortran, libgcc code via JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-18 14:41:44 +0000 |
| Commit: 5aecdcf, github.com/apache/spark/pull/4673 |
| |
| [SPARK-4949]shutdownCallback in SparkDeploySchedulerBackend should be enclosed by synchronized block. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-18 12:20:11 +0000 |
| Commit: 82197ed, github.com/apache/spark/pull/3781 |
| |
| SPARK-4610 addendum: [Minor] [MLlib] Minor doc fix in GBT classification example |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-18 10:13:28 +0000 |
| Commit: e79a7a6, github.com/apache/spark/pull/4672 |
| |
| [SPARK-5878] fix DataFrame.repartition() in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-18 01:00:54 -0800 |
| Commit: c1b6fa9, github.com/apache/spark/pull/4667 |
| |
| Avoid deprecation warnings in JDBCSuite. |
| Tor Myklebust <tmyklebu@gmail.com> |
| 2015-02-18 01:00:13 -0800 |
| Commit: de0dd6d, github.com/apache/spark/pull/4668 |
| |
| [Minor] [SQL] Cleans up DataFrame variable names and toDF() calls |
| Cheng Lian <lian@databricks.com> |
| 2015-02-17 23:36:20 -0800 |
| Commit: 61ab085, github.com/apache/spark/pull/4670 |
| |
| [SPARK-5731][Streaming][Test] Fix incorrect test in DirectKafkaStreamSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-17 22:44:16 -0800 |
| Commit: 3912d33, github.com/apache/spark/pull/4597 |
| |
| [SPARK-5723][SQL]Change the default file format to Parquet for CTAS statements. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-17 18:14:33 -0800 |
| Commit: e50934f, github.com/apache/spark/pull/4639 |
| |
| [SPARK-5875][SQL]logical.Project should not be resolved if it contains aggregates or generators |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-17 17:50:39 -0800 |
| Commit: d5f12bf, github.com/apache/spark/pull/4663 |
| |
| [SPARK-4454] Revert getOrElse() cleanup in DAGScheduler.getCacheLocs() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 17:45:16 -0800 |
| Commit: a51fc7e |
| |
| [SPARK-4454] Properly synchronize accesses to DAGScheduler cacheLocs map |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 17:39:58 -0800 |
| Commit: d46d624, github.com/apache/spark/pull/4660 |
| |
| [SPARK-5811] Added documentation for maven coordinates and added Spark Packages support |
| Burak Yavuz <brkyvz@gmail.com>, Davies Liu <davies@databricks.com> |
| 2015-02-17 17:15:43 -0800 |
| Commit: ae6cfb3, github.com/apache/spark/pull/4662 |
| |
| [SPARK-5785] [PySpark] narrow dependency for cogroup/join in PySpark |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 16:54:57 -0800 |
| Commit: c3d2b90, github.com/apache/spark/pull/4629 |
| |
| [SPARK-5852][SQL]Fail to convert a newly created empty metastore parquet table to a data source parquet table. |
| Yin Huai <yhuai@databricks.com>, Cheng Hao <hao.cheng@intel.com> |
| 2015-02-17 15:47:59 -0800 |
| Commit: 117121a, github.com/apache/spark/pull/4655 |
| |
| [SPARK-5872] [SQL] create a sqlCtx in pyspark shell |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 15:44:37 -0800 |
| Commit: 4d4cc76, github.com/apache/spark/pull/4659 |
| |
| [SPARK-5871] output explain in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 13:48:38 -0800 |
| Commit: 3df85dc, github.com/apache/spark/pull/4658 |
| |
| [SPARK-4172] [PySpark] Progress API in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 13:36:43 -0800 |
| Commit: 445a755, github.com/apache/spark/pull/3027 |
| |
| [SPARK-5868][SQL] Fix python UDFs in HiveContext and checks in SQLContext |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-17 13:23:45 -0800 |
| Commit: de4836f, github.com/apache/spark/pull/4657 |
| |
| [SQL] [Minor] Update the HiveContext Unittest |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-17 12:25:35 -0800 |
| Commit: 9d281fa, github.com/apache/spark/pull/4584 |
| |
| [Minor][SQL] Use same function to check path parameter in JSONRelation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-17 12:24:13 -0800 |
| Commit: ac506b7, github.com/apache/spark/pull/4649 |
| |
| [SPARK-5862][SQL] Only transformUp the given plan once in HiveMetastoreCatalog |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-17 12:23:18 -0800 |
| Commit: 4611de1, github.com/apache/spark/pull/4651 |
| |
| [Minor] fix typo in SQL document |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-02-17 12:16:52 -0800 |
| Commit: 31efb39, github.com/apache/spark/pull/4656 |
| |
| [SPARK-5864] [PySpark] support .jar as python package |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 12:05:06 -0800 |
| Commit: fc4eb95, github.com/apache/spark/pull/4652 |
| |
| SPARK-5841 [CORE] [HOTFIX] Memory leak in DiskBlockManager |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-17 19:40:06 +0000 |
| Commit: 49c19fd, github.com/apache/spark/pull/4648 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-17 11:35:26 -0800 |
| Commit: 24f358b, github.com/apache/spark/pull/3297 |
| |
| [SPARK-3381] [MLlib] Eliminate bins for unordered features in DecisionTrees |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-17 11:19:23 -0800 |
| Commit: 9b746f3, github.com/apache/spark/pull/4231 |
| |
| [SPARK-5661]function hasShutdownDeleteTachyonDir should use shutdownDeleteTachyonPaths to determine whether contains file |
| xukun 00228947 <xukun.xu@huawei.com>, viper-kun <xukun.xu@huawei.com> |
| 2015-02-17 18:59:41 +0000 |
| Commit: b271c26, github.com/apache/spark/pull/4418 |
| |
| [SPARK-5778] throw if nonexistent metrics config file provided |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-17 10:57:16 -0800 |
| Commit: d8f69cf, github.com/apache/spark/pull/4571 |
| |
| [SPARK-5859] [PySpark] [SQL] fix DataFrame Python API |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 10:22:48 -0800 |
| Commit: d8adefe, github.com/apache/spark/pull/4645 |
| |
| [SPARK-5166][SPARK-5247][SPARK-5258][SQL] API Cleanup / Documentation |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-17 10:21:17 -0800 |
| Commit: c74b07f, github.com/apache/spark/pull/4642 |
| |
| [SPARK-5858][MLLIB] Remove unnecessary first() call in GLM |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-17 10:17:45 -0800 |
| Commit: c76da36, github.com/apache/spark/pull/4647 |
| |
| SPARK-5856: In Maven build script, launch Zinc with more memory |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-17 10:10:01 -0800 |
| Commit: 3ce46e9, github.com/apache/spark/pull/4643 |
| |
| Revert "[SPARK-5363] [PySpark] check ending mark in non-block way" |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 07:48:27 -0800 |
| Commit: ee6e3ef |
| |
| [SPARK-5826][Streaming] Fix Configuration not serializable problem |
| jerryshao <saisai.shao@intel.com> |
| 2015-02-17 10:45:18 +0000 |
| Commit: a65766b, github.com/apache/spark/pull/4612 |
| |
| HOTFIX: Style issue causing build break |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 22:10:39 -0800 |
| Commit: c06e42f |
| |
| [SPARK-5802][MLLIB] cache transformed data in glm |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-16 22:09:04 -0800 |
| Commit: fd84229, github.com/apache/spark/pull/4593 |
| |
| [SPARK-5853][SQL] Schema support in Row. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-16 20:42:57 -0800 |
| Commit: d380f32, github.com/apache/spark/pull/4640 |
| |
| SPARK-5850: Remove experimental label for Scala 2.11 and FlumePollingStream |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 20:33:33 -0800 |
| Commit: a51d51f, github.com/apache/spark/pull/4638 |
| |
| [SPARK-5363] [PySpark] check ending mark in non-block way |
| Davies Liu <davies@databricks.com> |
| 2015-02-16 20:32:03 -0800 |
| Commit: ac6fe67, github.com/apache/spark/pull/4601 |
| |
| [SQL] Various DataFrame doc changes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-16 19:00:30 -0800 |
| Commit: 0e180bf, github.com/apache/spark/pull/4636 |
| |
| [SPARK-5849] Handle more types of invalid JSON requests in SubmitRestProtocolMessage.parseAction |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-16 18:08:02 -0800 |
| Commit: 58a82a7, github.com/apache/spark/pull/4637 |
| |
| [SPARK-3340] Deprecate ADD_JARS and ADD_FILES |
| azagrebin <azagrebin@gmail.com> |
| 2015-02-16 18:06:19 -0800 |
| Commit: 1668765, github.com/apache/spark/pull/4616 |
| |
| [SPARK-5788] [PySpark] capture the exception in python write thread |
| Davies Liu <davies@databricks.com> |
| 2015-02-16 17:57:14 -0800 |
| Commit: b1bd1dd, github.com/apache/spark/pull/4577 |
| |
| SPARK-5848: tear down the ConsoleProgressBar timer |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-17 00:59:49 +0000 |
| Commit: 1294a6e, github.com/apache/spark/pull/4635 |
| |
| [SPARK-4865][SQL]Include temporary tables in SHOW TABLES |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:59:23 -0800 |
| Commit: e189cbb, github.com/apache/spark/pull/4618 |
| |
| [SQL] Optimize arithmetic and predicate operators |
| kai <kaizeng@eecs.berkeley.edu> |
| 2015-02-16 15:58:05 -0800 |
| Commit: cb6c48c, github.com/apache/spark/pull/4472 |
| |
| [SPARK-5839][SQL]HiveMetastoreCatalog does not recognize table names and aliases of data source tables. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:54:01 -0800 |
| Commit: f3ff1eb, github.com/apache/spark/pull/4626 |
| |
| [SPARK-5746][SQL] Check invalid cases for the write path of data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:51:59 -0800 |
| Commit: 5b6cd65, github.com/apache/spark/pull/4617 |
| |
| HOTFIX: Break in Jekyll build from #4589 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 15:43:56 -0800 |
| Commit: 04b401d |
| |
| [SPARK-2313] Use socket to communicate GatewayServer port back to Python driver |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-16 15:25:11 -0800 |
| Commit: 0cfda84, github.com/apache/spark/pull/3424 |
| |
| SPARK-5357: Update commons-codec version to 1.10 (current) |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-16 23:05:34 +0000 |
| Commit: c01c4eb, github.com/apache/spark/pull/4153 |
| |
| SPARK-5841: remove DiskBlockManager shutdown hook on stop |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-16 22:54:32 +0000 |
| Commit: bb05982, github.com/apache/spark/pull/4627 |
| |
| [SPARK-5833] [SQL] Adds REFRESH TABLE command |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 12:52:05 -0800 |
| Commit: c51ab37, github.com/apache/spark/pull/4624 |
| |
| [SPARK-5296] [SQL] Add more filter types for data sources API |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 12:48:55 -0800 |
| Commit: 6f54dee, github.com/apache/spark/pull/4623 |
| |
| [SQL] Add fetched row count in SparkSQLCLIDriver |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-16 12:34:09 -0800 |
| Commit: b4d7c70, github.com/apache/spark/pull/4604 |
| |
| [SQL] Initial support for reporting location of error in sql string |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-16 12:32:56 -0800 |
| Commit: 104b2c4, github.com/apache/spark/pull/4587 |
| |
| [SPARK-5824] [SQL] add null format in ctas and set default col comment to null |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-16 12:31:36 -0800 |
| Commit: 275a0c0, github.com/apache/spark/pull/4609 |
| |
| [SQL] [Minor] Update the SpecificMutableRow.copy |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-16 12:21:08 -0800 |
| Commit: cc552e0, github.com/apache/spark/pull/4619 |
| |
| SPARK-5795 [STREAMING] api.java.JavaPairDStream.saveAsNewAPIHadoopFiles may not friendly to java |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-16 19:32:31 +0000 |
| Commit: 8e25373, github.com/apache/spark/pull/4608 |
| |
| Minor fixes for commit https://github.com/apache/spark/pull/4592. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-16 10:09:55 -0800 |
| Commit: 9baac56 |
| |
| [SPARK-5799][SQL] Compute aggregation function on specified numeric columns |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-16 10:06:11 -0800 |
| Commit: 5c78be7, github.com/apache/spark/pull/4592 |
| |
| SPARK-5815 [MLLIB] Part 2. Deprecate SVDPlusPlus APIs that expose DoubleMatrix from JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-16 17:04:30 +0000 |
| Commit: a3afa4a, github.com/apache/spark/pull/4625 |
| |
| [SPARK-5831][Streaming]When checkpoint file size is bigger than 10, then delete the old ones |
| Xutingjun <1039320815@qq.com> |
| 2015-02-16 14:54:23 +0000 |
| Commit: 1115e8e, github.com/apache/spark/pull/4621 |
| |
| [SPARK-4553] [SPARK-5767] [SQL] Wires Parquet data source with the newly introduced write support for data source API |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 01:38:31 -0800 |
| Commit: 3ce58cf, github.com/apache/spark/pull/4563 |
| |
| [Minor] [SQL] Renames stringRddToDataFrame to stringRddToDataFrameHolder for consistency |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 01:33:37 -0800 |
| Commit: 199a9e8, github.com/apache/spark/pull/4613 |
| |
| [Ml] SPARK-5804 Explicitly manage cache in Crossvalidator k-fold loop |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-02-16 00:07:23 -0800 |
| Commit: d51d6ba, github.com/apache/spark/pull/4595 |
| |
| [Ml] SPARK-5796 Don't transform data on a last estimator in Pipeline |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-02-15 20:51:32 -0800 |
| Commit: c78a12c, github.com/apache/spark/pull/4590 |
| |
| SPARK-5815 [MLLIB] Deprecate SVDPlusPlus APIs that expose DoubleMatrix from JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-15 20:41:27 -0800 |
| Commit: acf2558, github.com/apache/spark/pull/4614 |
| |
| [SPARK-5769] Set params in constructors and in setParams in Python ML pipelines |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-15 20:29:26 -0800 |
| Commit: cd4a153, github.com/apache/spark/pull/4564 |
| |
| SPARK-5669 [BUILD] Spark assembly includes incompatibly licensed libgfortran, libgcc code via JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-15 09:15:48 -0800 |
| Commit: 836577b, github.com/apache/spark/pull/4453 |
| |
| [MLLIB][SPARK-5502] User guide for isotonic regression |
| martinzapletal <zapletal-martin@email.cz> |
| 2015-02-15 09:10:03 -0800 |
| Commit: 61eb126, github.com/apache/spark/pull/4536 |
| |
| [SPARK-5827][SQL] Add missing import in the example of SqlContext |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2015-02-15 14:42:20 +0000 |
| Commit: c771e47, github.com/apache/spark/pull/4615 |
| |
| SPARK-5822 [BUILD] cannot import src/main/scala & src/test/scala into eclipse as source folder |
| gli <gli@redhat.com> |
| 2015-02-14 20:43:27 +0000 |
| Commit: ed5f4bb, github.com/apache/spark/pull/4531 |
| |
| Revise formatting of previous commit f80e2629bb74bc62960c61ff313f7e7802d61319 |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-14 20:12:29 +0000 |
| Commit: 15a2ab5 |
| |
| [SPARK-5800] Streaming Docs. Change linked files according the selected language |
| gasparms <gmunoz@stratio.com> |
| 2015-02-14 20:10:29 +0000 |
| Commit: f80e262, github.com/apache/spark/pull/4589 |
| |
| [SPARK-5752][SQL] Don't implicitly convert RDDs directly to DataFrames |
| Reynold Xin <rxin@databricks.com>, Davies Liu <davies@databricks.com> |
| 2015-02-13 23:03:22 -0800 |
| Commit: e98dfe6, github.com/apache/spark/pull/4556 |
| |
| SPARK-3290 [GRAPHX] No unpersist calls in SVDPlusPlus |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-13 20:12:52 -0800 |
| Commit: 0ce4e43, github.com/apache/spark/pull/4234 |
| |
| [SPARK-5227] [SPARK-5679] Disable FileSystem cache in WholeTextFileRecordReaderSuite |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-13 17:45:31 -0800 |
| Commit: d06d5ee, github.com/apache/spark/pull/4599 |
| |
| [SPARK-5730][ML] add doc groups to spark.ml components |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 16:45:59 -0800 |
| Commit: 4f4c6d5, github.com/apache/spark/pull/4600 |
| |
| [SPARK-5803][MLLIB] use ArrayBuilder to build primitive arrays |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 16:43:49 -0800 |
| Commit: d50a91d, github.com/apache/spark/pull/4594 |
| |
| [SPARK-5806] re-organize sections in mllib-clustering.md |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 15:09:27 -0800 |
| Commit: cc56c87, github.com/apache/spark/pull/4598 |
| |
| [SPARK-5789][SQL]Throw a better error message if JsonRDD.parseJson encounters unrecoverable parsing errors. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-13 13:51:06 -0800 |
| Commit: 2e0c084, github.com/apache/spark/pull/4582 |
| |
| [SPARK-5642] [SQL] Apply column pruning on unused aggregation fields |
| Daoyuan Wang <daoyuan.wang@intel.com>, Michael Armbrust <michael@databricks.com> |
| 2015-02-13 13:46:50 -0800 |
| Commit: 2cbb3e4, github.com/apache/spark/pull/4415 |
| |
| [HOTFIX] Fix build break in MesosSchedulerBackendSuite |
| Andrew Or <andrew@databricks.com> |
| 2015-02-13 13:10:29 -0800 |
| Commit: 5d3cc6b |
| |
| [HOTFIX] Ignore DirectKafkaStreamSuite. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-13 12:43:53 -0800 |
| Commit: 378c7eb |
| |
| SPARK-5805 Fixed the type error in documentation. |
| Emre Sevinç <emre.sevinc@gmail.com> |
| 2015-02-13 12:31:27 -0800 |
| Commit: 9f31db0, github.com/apache/spark/pull/4596 |
| |
| [SPARK-5735] Replace uses of EasyMock with Mockito |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-13 09:53:57 -0800 |
| Commit: 077eec2, github.com/apache/spark/pull/4578 |
| |
| [SPARK-5783] Better eventlog-parsing error messages |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-13 09:47:26 -0800 |
| Commit: fc6d3e7, github.com/apache/spark/pull/4573 |
| |
| [SPARK-5503][MLLIB] Example code for Power Iteration Clustering |
| sboeschhuawei <stephen.boesch@huawei.com> |
| 2015-02-13 09:45:57 -0800 |
| Commit: e1a1ff8, github.com/apache/spark/pull/4495 |
| |
| [SPARK-5732][CORE]:Add an option to print the spark version in spark script. |
| uncleGen <hustyugm@gmail.com>, genmao.ygm <genmao.ygm@alibaba-inc.com> |
| 2015-02-13 09:43:10 -0800 |
| Commit: c0ccd25, github.com/apache/spark/pull/4522 |
| |
| [SPARK-4832][Deploy]some other processes might take the daemon pid |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-02-13 10:27:23 +0000 |
| Commit: 1768bd5, github.com/apache/spark/pull/3683 |
| |
| [SPARK-3365][SQL]Wrong schema generated for List type |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-02-12 22:18:39 -0800 |
| Commit: 1c8633f, github.com/apache/spark/pull/4581 |
| |
| [SQL] Fix docs of SQLContext.tables |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 20:37:55 -0800 |
| Commit: 2aea892, github.com/apache/spark/pull/4579 |
| |
| [SPARK-3299][SQL]Public API in SQLContext to list tables |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 18:08:01 -0800 |
| Commit: 1d0596a, github.com/apache/spark/pull/4547 |
| |
| [SQL] Move SaveMode to SQL package. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 15:32:17 -0800 |
| Commit: c025a46, github.com/apache/spark/pull/4542 |
| |
| [SPARK-5335] Fix deletion of security groups within a VPC |
| Vladimir Grigor <vladimir@kiosked.com>, Vladimir Grigor <vladimir@voukka.com> |
| 2015-02-12 23:26:24 +0000 |
| Commit: ada993e, github.com/apache/spark/pull/4122 |
| |
| [SPARK-5755] [SQL] remove unnecessary Add |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-12 15:22:07 -0800 |
| Commit: d5fc514, github.com/apache/spark/pull/4551 |
| |
| [SPARK-5573][SQL] Add explode to dataframes |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-12 15:19:19 -0800 |
| Commit: ee04a8b, github.com/apache/spark/pull/4546 |
| |
| [SPARK-5758][SQL] Use LongType as the default type for integers in JSON schema inference. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 15:17:25 -0800 |
| Commit: c352ffb, github.com/apache/spark/pull/4544 |
| |
| [SPARK-5780] [PySpark] Mute the logging during unit tests |
| Davies Liu <davies@databricks.com> |
| 2015-02-12 14:54:38 -0800 |
| Commit: 0bf0315, github.com/apache/spark/pull/4572 |
| |
| SPARK-5747: Fix wordsplitting bugs in make-distribution.sh |
| David Y. Ross <dyross@gmail.com> |
| 2015-02-12 14:52:38 -0800 |
| Commit: 26c816e, github.com/apache/spark/pull/4540 |
| |
| [SPARK-5759][Yarn]ExecutorRunnable should catch YarnException while NMClient start contain... |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-12 14:50:16 -0800 |
| Commit: 947b8bd, github.com/apache/spark/pull/4554 |
| |
| [SPARK-5760][SPARK-5761] Fix standalone rest protocol corner cases + revamp tests |
| Andrew Or <andrew@databricks.com> |
| 2015-02-12 14:47:52 -0800 |
| Commit: 1d5663e, github.com/apache/spark/pull/4557 |
| |
| [SPARK-5762] Fix shuffle write time for sort-based shuffle |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-12 14:46:37 -0800 |
| Commit: 47c73d4, github.com/apache/spark/pull/4559 |
| |
| [SPARK-5765][Examples]Fixed word split problem in run-example and compute-classpath |
| Venkata Ramana G <ramana.gollamudi@huawei.com>, Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-02-12 14:44:21 -0800 |
| Commit: 629d014, github.com/apache/spark/pull/4561 |
| |
| [EC2] Update default Spark version to 1.2.1 |
| Katsunori Kanda <potix2@gmail.com> |
| 2015-02-12 14:38:42 -0800 |
| Commit: 9c80765, github.com/apache/spark/pull/4566 |
| |
| [SPARK-5645] Added local read bytes/time to task metrics |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-12 14:35:44 -0800 |
| Commit: 893d6fd, github.com/apache/spark/pull/4510 |
| |
| [SQL] Improve error messages |
| Michael Armbrust <michael@databricks.com>, wangfei <wangfei1@huawei.com> |
| 2015-02-12 13:11:28 -0800 |
| Commit: aa4ca8b, github.com/apache/spark/pull/4558 |
| |
| [SQL][DOCS] Update sql documentation |
| Antonio Navarro Perez <ajnavarro@users.noreply.github.com> |
| 2015-02-12 12:46:17 -0800 |
| Commit: 6a1be02, github.com/apache/spark/pull/4560 |
| |
| SPARK-5776 JIRA version not of form x.y.z breaks merge_spark_pr.py |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-12 20:14:45 +0000 |
| Commit: bc57789, github.com/apache/spark/pull/4570 |
| |
| [SPARK-5757][MLLIB] replace SQL JSON usage in model import/export by json4s |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-12 10:48:13 -0800 |
| Commit: 99bd500, github.com/apache/spark/pull/4555 |
| |
| [SPARK-5655] Don't chmod700 application files if running in YARN |
| Andrew Rowson <github@growse.com> |
| 2015-02-12 18:41:39 +0000 |
| Commit: 466b1f6, github.com/apache/spark/pull/4509 |
| |
| ignore cache paths for RAT tests |
| Oren Mazor <oren.mazor@gmail.com> |
| 2015-02-12 18:37:00 +0000 |
| Commit: 9a6efbc, github.com/apache/spark/pull/4569 |
| |
| SPARK-5727 [BUILD] Remove Debian packaging |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-12 12:36:26 +0000 |
| Commit: 9a3ea49, github.com/apache/spark/pull/4526 |
| |
| [SQL] Make dataframe more tolerant of being serialized |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-11 19:05:49 -0800 |
| Commit: a38e23c, github.com/apache/spark/pull/4545 |
| |
| [SQL] Two DataFrame fixes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-11 18:32:48 -0800 |
| Commit: d931b01, github.com/apache/spark/pull/4543 |
| |
| [SPARK-3688][SQL] More inline comments for LogicalPlan. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-11 15:26:31 -0800 |
| Commit: fa6bdc6, github.com/apache/spark/pull/4539 |
| |
| [SPARK-3688][SQL]LogicalPlan can't resolve column correctly |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-02-11 12:50:17 -0800 |
| Commit: 44b2311, github.com/apache/spark/pull/4524 |
| |
| [SPARK-5454] More robust handling of self joins |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-11 12:31:56 -0800 |
| Commit: a60d2b7, github.com/apache/spark/pull/4520 |
| |
| Remove outdated remark about take(n). |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2015-02-11 20:24:17 +0000 |
| Commit: 03bf704, github.com/apache/spark/pull/4533 |
| |
| [SPARK-5677] [SPARK-5734] [SQL] [PySpark] Python DataFrame API remaining tasks |
| Davies Liu <davies@databricks.com> |
| 2015-02-11 12:13:16 -0800 |
| Commit: b694eb9, github.com/apache/spark/pull/4528 |
| |
| [SPARK-5733] Error Link in Pagination of HistoryPage when showing Incomplete Applications |
| guliangliang <guliangliang@qiyi.com> |
| 2015-02-11 15:55:49 +0000 |
| Commit: 1ac099e, github.com/apache/spark/pull/4523 |
| |
| SPARK-5727 [BUILD] Deprecate Debian packaging |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-11 08:30:16 +0000 |
| Commit: bd0d6e0, github.com/apache/spark/pull/4516 |
| |
| SPARK-5728 [STREAMING] MQTTStreamSuite leaves behind ActiveMQ database files |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-11 08:13:51 +0000 |
| Commit: da89720, github.com/apache/spark/pull/4517 |
| |
| [SPARK-4964] [Streaming] refactor createRDD to take leaders via map instead of array |
| cody koeninger <cody@koeninger.org> |
| 2015-02-11 00:13:27 -0800 |
| Commit: 658687b, github.com/apache/spark/pull/4511 |
| |
| HOTFIX: Adding Junit to Hive tests for Maven build |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 23:39:21 -0800 |
| Commit: c2131c0 |
| |
| HOTFIX: Java 6 compilation error in Spark SQL |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 22:43:32 -0800 |
| Commit: 7e2f882 |
| |
| [SPARK-5714][Mllib] Refactor initial step of LDA to remove redundant operations |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-10 21:51:15 -0800 |
| Commit: f86a89a, github.com/apache/spark/pull/4501 |
| |
| [SPARK-5702][SQL] Allow short names for built-in data sources. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-10 20:40:21 -0800 |
| Commit: b8f88d3, github.com/apache/spark/pull/4489 |
| |
| [SPARK-5729] Potential NPE in standalone REST API |
| Andrew Or <andrew@databricks.com> |
| 2015-02-10 20:19:14 -0800 |
| Commit: b969182, github.com/apache/spark/pull/4518 |
| |
| [SPARK-4879] Use driver to coordinate Hadoop output committing for speculative tasks |
| mcheah <mcheah@palantir.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-02-10 20:12:18 -0800 |
| Commit: 1cb3770, github.com/apache/spark/pull/4155 |
| |
| [SQL][DataFrame] Fix column computability bug. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-10 19:50:44 -0800 |
| Commit: 7e24249, github.com/apache/spark/pull/4519 |
| |
| [SPARK-5709] [SQL] Add EXPLAIN support in DataFrame API for debugging purpose |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-10 19:40:51 -0800 |
| Commit: 45df77b, github.com/apache/spark/pull/4496 |
| |
| [SPARK-5704] [SQL] [PySpark] createDataFrame from RDD with columns |
| Davies Liu <davies@databricks.com> |
| 2015-02-10 19:40:12 -0800 |
| Commit: ea60284, github.com/apache/spark/pull/4498 |
| |
| [SPARK-5683] [SQL] Avoid multiple json generator created |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-10 18:19:56 -0800 |
| Commit: a60aea8, github.com/apache/spark/pull/4468 |
| |
| [SQL] Add an exception for analysis errors. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-10 17:32:42 -0800 |
| Commit: 6195e24, github.com/apache/spark/pull/4439 |
| |
| [SPARK-5658][SQL] Finalize DDL and write support APIs |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-10 17:29:52 -0800 |
| Commit: aaf50d0, github.com/apache/spark/pull/4446 |
| |
| [SPARK-5493] [core] Add option to impersonate user. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-10 17:19:10 -0800 |
| Commit: ed167e7, github.com/apache/spark/pull/4405 |
| |
| [SQL] Make Options in the data source API CREATE TABLE statements optional. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-10 17:06:12 -0800 |
| Commit: e28b6bd, github.com/apache/spark/pull/4515 |
| |
| [SPARK-5725] [SQL] Fixes ParquetRelation2.equals |
| Cheng Lian <lian@databricks.com> |
| 2015-02-10 17:02:44 -0800 |
| Commit: 2d50a01, github.com/apache/spark/pull/4513 |
| |
| [SQL][Minor] correct some comments |
| Sheng, Li <OopsOutOfMemory@users.noreply.github.com>, OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-11 00:59:46 +0000 |
| Commit: 91e3512, github.com/apache/spark/pull/4508 |
| |
| [SPARK-5644] [Core]Delete tmp dir when sc is stop |
| Sephiroth-Lin <linwzhong@gmail.com> |
| 2015-02-10 23:23:35 +0000 |
| Commit: 52983d7, github.com/apache/spark/pull/4412 |
| |
| [SPARK-5343][GraphX]: ShortestPaths traverses backwards |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-10 14:57:00 -0800 |
| Commit: 5820961, github.com/apache/spark/pull/4478 |
| |
| [SPARK-5021] [MLlib] Gaussian Mixture now supports Sparse Input |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-10 14:05:55 -0800 |
| Commit: fd2c032, github.com/apache/spark/pull/4459 |
| |
| [SPARK-5686][SQL] Add show current roles command in HiveQl |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-10 13:20:15 -0800 |
| Commit: f98707c, github.com/apache/spark/pull/4471 |
| |
| [SQL] Add toString to DataFrame/Column |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-10 13:14:01 -0800 |
| Commit: de80b1b, github.com/apache/spark/pull/4436 |
| |
| [SPARK-5668] Display region in spark_ec2.py get_existing_cluster() |
| Miguel Peralvo <miguel.peralvo@gmail.com> |
| 2015-02-10 19:54:52 +0000 |
| Commit: c49a404, github.com/apache/spark/pull/4457 |
| |
| [SPARK-5592][SQL] java.net.URISyntaxException when insert data to a partitioned table |
| wangfei <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-02-10 11:54:30 -0800 |
| Commit: 59272da, github.com/apache/spark/pull/4368 |
| |
| [HOTFIX][SPARK-4136] Fix compilation and tests |
| Andrew Or <andrew@databricks.com> |
| 2015-02-10 11:18:01 -0800 |
| Commit: b640c84 |
| |
| SPARK-4136. Under dynamic allocation, cancel outstanding executor requests when no longer needed |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-10 11:07:25 -0800 |
| Commit: 69bc3bb, github.com/apache/spark/pull/4168 |
| |
| [SPARK-5716] [SQL] Support TOK_CHARSETLITERAL in HiveQl |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-10 11:08:21 -0800 |
| Commit: c7ad80a, github.com/apache/spark/pull/4502 |
| |
| [Spark-5717] [MLlib] add stop and reorganize import |
| JqueryFan <firing@126.com>, Yuhao Yang <hhbyyh@gmail.com> |
| 2015-02-10 17:37:32 +0000 |
| Commit: 6cc96cf, github.com/apache/spark/pull/4503 |
| |
| [SPARK-1805] [EC2] Validate instance types |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-10 15:45:38 +0000 |
| Commit: 50820f1, github.com/apache/spark/pull/4455 |
| |
| [SPARK-5700] [SQL] [Build] Bumps jets3t to 0.9.3 for hadoop-2.3 and hadoop-2.4 profiles |
| Cheng Lian <lian@databricks.com> |
| 2015-02-10 02:28:47 -0800 |
| Commit: ba66793, github.com/apache/spark/pull/4499 |
| |
| SPARK-5239 [CORE] JdbcRDD throws "java.lang.AbstractMethodError: oracle.jdbc.driver.xxxxxx.isClosed()Z" |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-10 09:19:01 +0000 |
| Commit: 2d1e916, github.com/apache/spark/pull/4470 |
| |
| [SPARK-4964][Streaming][Kafka] More updates to Exactly-once Kafka stream |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-09 22:45:48 -0800 |
| Commit: c151346, github.com/apache/spark/pull/4384 |
| |
| [SPARK-5597][MLLIB] save/load for decision trees and ensembles |
| Joseph K. Bradley <joseph@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-09 22:09:07 -0800 |
| Commit: ef2f55b, github.com/apache/spark/pull/4444 |
| |
| [SQL] Remove the duplicated code |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-09 21:33:34 -0800 |
| Commit: bd0b5ea, github.com/apache/spark/pull/4494 |
| |
| [SPARK-5701] Only set ShuffleReadMetrics when task has shuffle deps |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-09 21:22:09 -0800 |
| Commit: a2d33d0, github.com/apache/spark/pull/4488 |
| |
| [SPARK-5703] AllJobsPage throws empty.max exception |
| Andrew Or <andrew@databricks.com> |
| 2015-02-09 21:18:48 -0800 |
| Commit: a95ed52, github.com/apache/spark/pull/4490 |
| |
| [SPARK-2996] Implement userClassPathFirst for driver, yarn. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-09 21:17:06 -0800 |
| Commit: 20a6013, github.com/apache/spark/pull/3233 |
| |
| SPARK-4900 [MLLIB] MLlib SingularValueDecomposition ARPACK IllegalStateException |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-09 21:13:58 -0800 |
| Commit: 36c4e1d, github.com/apache/spark/pull/4485 |
| |
| Add a config option to print DAG. |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-02-09 20:58:58 -0800 |
| Commit: 31d435e, github.com/apache/spark/pull/4257 |
| |
| [SPARK-5469] restructure pyspark.sql into multiple files |
| Davies Liu <davies@databricks.com> |
| 2015-02-09 20:49:22 -0800 |
| Commit: 08488c1, github.com/apache/spark/pull/4479 |
| |
| [SPARK-5698] Do not let user request negative # of executors |
| Andrew Or <andrew@databricks.com> |
| 2015-02-09 17:33:29 -0800 |
| Commit: d302c48, github.com/apache/spark/pull/4483 |
| |
| [SPARK-5699] [SQL] [Tests] Runs hive-thriftserver tests whenever SQL code is modified |
| Cheng Lian <lian@databricks.com> |
| 2015-02-09 16:52:05 -0800 |
| Commit: 3ec3ad2, github.com/apache/spark/pull/4486 |
| |
| [SPARK-5648][SQL] support "alter ... unset tblproperties("key")" |
| DoingDone9 <799203320@qq.com> |
| 2015-02-09 16:40:26 -0800 |
| Commit: d08e7c2, github.com/apache/spark/pull/4424 |
| |
| [SPARK-2096][SQL] support dot notation on array of struct |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-02-09 16:39:34 -0800 |
| Commit: 0ee53eb, github.com/apache/spark/pull/2405 |
| |
| [SPARK-5614][SQL] Predicate pushdown through Generate. |
| Lu Yan <luyan02@baidu.com> |
| 2015-02-09 16:25:38 -0800 |
| Commit: 2a36292, github.com/apache/spark/pull/4394 |
| |
| [SPARK-5696] [SQL] [HOTFIX] Asks HiveThriftServer2 to re-initialize log4j using Hive configurations |
| Cheng Lian <lian@databricks.com> |
| 2015-02-09 16:23:12 -0800 |
| Commit: b8080aa, github.com/apache/spark/pull/4484 |
| |
| [SQL] Code cleanup. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-09 16:20:42 -0800 |
| Commit: 5f0b30e, github.com/apache/spark/pull/4482 |
| |
| [SQL] Add some missing DataFrame functions. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-09 16:02:56 -0800 |
| Commit: 68b25cf, github.com/apache/spark/pull/4437 |
| |
| [SPARK-5611] [EC2] Allow spark-ec2 repo and branch to be set on CLI of spark_ec2.py |
| Florian Verhein <florian.verhein@gmail.com> |
| 2015-02-09 23:47:07 +0000 |
| Commit: b884daa, github.com/apache/spark/pull/4385 |
| |
| [SPARK-5675][SQL] XyzType companion object should subclass XyzType |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-09 14:51:46 -0800 |
| Commit: f48199e, github.com/apache/spark/pull/4463 |
| |
| [SPARK-4905][STREAMING] FlumeStreamSuite fix. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-09 14:17:14 -0800 |
| Commit: 0765af9, github.com/apache/spark/pull/4371 |
| |
| [SPARK-5691] Fixing wrong data structure lookup for dupe app registratio... |
| mcheah <mcheah@palantir.com> |
| 2015-02-09 13:20:14 -0800 |
| Commit: 6fe70d8, github.com/apache/spark/pull/4477 |
| |
| [SPARK-5664][BUILD] Restore stty settings when exiting from SBT's spark-shell |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-09 11:45:12 -0800 |
| Commit: dae2161, github.com/apache/spark/pull/4451 |
| |
| [SPARK-5678] Convert DataFrame to pandas.DataFrame and Series |
| Davies Liu <davies@databricks.com> |
| 2015-02-09 11:42:52 -0800 |
| Commit: afb1316, github.com/apache/spark/pull/4476 |
| |
| SPARK-4267 [YARN] Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-09 10:33:57 -0800 |
| Commit: de78060, github.com/apache/spark/pull/4452 |
| |
| SPARK-2149. [MLLIB] Univariate kernel density estimation |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-09 10:12:12 +0000 |
| Commit: 0793ee1, github.com/apache/spark/pull/1093 |
| |
| [SPARK-5473] [EC2] Expose SSH failures after status checks pass |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-09 09:44:53 +0000 |
| Commit: 4dfe180, github.com/apache/spark/pull/4262 |
| |
| [SPARK-5539][MLLIB] LDA guide |
| Xiangrui Meng <meng@databricks.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-08 23:40:36 -0800 |
| Commit: 855d12a, github.com/apache/spark/pull/4465 |
| |
| [SPARK-5472][SQL] Fix Scala code style |
| Hung Lin <hung@zoomdata.com> |
| 2015-02-08 22:36:42 -0800 |
| Commit: 4575c56, github.com/apache/spark/pull/4464 |
| |
| SPARK-4405 [MLLIB] Matrices.* construction methods should check for rows x cols overflow |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-08 21:08:50 -0800 |
| Commit: 4396dfb, github.com/apache/spark/pull/4461 |
| |
| [SPARK-5660][MLLIB] Make Matrix apply public |
| Joseph K. Bradley <joseph@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-08 21:07:36 -0800 |
| Commit: c171611, github.com/apache/spark/pull/4447 |
| |
| [SPARK-5643][SQL] Add a show method to print the content of a DataFrame in tabular format. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-08 18:56:51 -0800 |
| Commit: a052ed4, github.com/apache/spark/pull/4416 |
| |
| SPARK-5665 [DOCS] Update netlib-java documentation |
| Sam Halliday <sam.halliday@Gmail.com>, Sam Halliday <sam.halliday@gmail.com> |
| 2015-02-08 16:34:26 -0800 |
| Commit: 56aff4b, github.com/apache/spark/pull/4448 |
| |
| [SPARK-5598][MLLIB] model save/load for ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-08 16:26:20 -0800 |
| Commit: 5c299c5, github.com/apache/spark/pull/4422 |
| |
| [SQL] Set sessionState in QueryExecution. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-08 14:55:07 -0800 |
| Commit: 804949d, github.com/apache/spark/pull/4445 |
| |
| [SPARK-3039] [BUILD] Spark assembly for new hadoop API (hadoop 2) contai... |
| medale <medale94@yahoo.com> |
| 2015-02-08 10:35:29 +0000 |
| Commit: 75fdccc, github.com/apache/spark/pull/4315 |
| |
| [SPARK-5672][Web UI] Don't return `ERROR 500` when have missing args |
| Kirill A. Korinskiy <catap@catap.ru> |
| 2015-02-08 10:31:46 +0000 |
| Commit: 23a99da, github.com/apache/spark/pull/4239 |
| |
| [SPARK-5656] Fail gracefully for large values of k and/or n that will ex... |
| mbittmann <mbittmann@gmail.com>, bittmannm <mark.bittmann@agilex.com> |
| 2015-02-08 10:13:29 +0000 |
| Commit: 4878313, github.com/apache/spark/pull/4433 |
| |
| [SPARK-5366][EC2] Check the mode of private key |
| liuchang0812 <liuchang0812@gmail.com> |
| 2015-02-08 10:08:51 +0000 |
| Commit: 6fb141e, github.com/apache/spark/pull/4162 |
| |
| [SPARK-5671] Upgrade jets3t to 0.9.2 in hadoop-2.3 and 2.4 profiles |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-07 17:19:08 -0800 |
| Commit: 5de14cc, github.com/apache/spark/pull/4454 |
| |
| [SPARK-5108][BUILD] Jackson dependency management for Hadoop-2.6.0 support |
| Zhan Zhang <zhazhan@gmail.com> |
| 2015-02-07 19:41:30 +0000 |
| Commit: ecbbed2, github.com/apache/spark/pull/3938 |
| |
| SPARK-5408: Use -XX:MaxPermSize specified by user instead of default in ... |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-07 15:58:04 +0000 |
| Commit: dd4cb33, github.com/apache/spark/pull/4203 |
| |
| [BUILD] Add the ability to launch spark-shell from SBT. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-07 00:14:38 -0800 |
| Commit: e9a4fe1, github.com/apache/spark/pull/4438 |
| |
| [SPARK-5388] Provide a stable application submission gateway for standalone cluster mode |
| Andrew Or <andrew@databricks.com> |
| 2015-02-06 15:57:06 -0800 |
| Commit: 1390e56, github.com/apache/spark/pull/4216 |
| |
| SPARK-5403: Ignore UserKnownHostsFile in SSH calls |
| Grzegorz Dubicki <grzegorz.dubicki@gmail.com> |
| 2015-02-06 15:43:58 -0800 |
| Commit: e772b4e, github.com/apache/spark/pull/4196 |
| |
| [SPARK-5601][MLLIB] make streaming linear algorithms Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-06 15:42:59 -0800 |
| Commit: 0e23ca9, github.com/apache/spark/pull/4432 |
| |
| [SQL] [Minor] HiveParquetSuite was disabled by mistake, re-enable them |
| Cheng Lian <lian@databricks.com> |
| 2015-02-06 15:23:42 -0800 |
| Commit: c402140, github.com/apache/spark/pull/4440 |
| |
| [SQL] Use TestSQLContext in Java tests |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-06 15:11:02 -0800 |
| Commit: 76c4bf5, github.com/apache/spark/pull/4441 |
| |
| [SPARK-4994][network]Cleanup removed executors' ShuffleInfo in yarn shuffle service |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 14:47:52 -0800 |
| Commit: 61073f8, github.com/apache/spark/pull/3828 |
| |
| [SPARK-5444][Network]Add a retry to deal with the conflict port in netty server. |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-02-06 14:35:29 -0800 |
| Commit: 2bda1c1, github.com/apache/spark/pull/4240 |
| |
| [SPARK-4874] [CORE] Collect record count metrics |
| Kostas Sakellis <kostas@cloudera.com> |
| 2015-02-06 14:31:20 -0800 |
| Commit: dcd1e42, github.com/apache/spark/pull/4067 |
| |
| [HOTFIX] Fix the maven build after adding sqlContext to spark-shell |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-06 14:27:06 -0800 |
| Commit: 5796156, github.com/apache/spark/pull/4443 |
| |
| [SPARK-5600] [core] Clean up FsHistoryProvider test, fix app sort order. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-06 14:23:09 -0800 |
| Commit: 5687bab, github.com/apache/spark/pull/4370 |
| |
| SPARK-5613: Catch the ApplicationNotFoundException exception to avoid thread from getting killed on yarn restart. |
| Kashish Jain <kashish.jain@guavus.com> |
| 2015-02-06 13:47:23 -0800 |
| Commit: ca66159, github.com/apache/spark/pull/4392 |
| |
| SPARK-5633 pyspark saveAsTextFile support for compression codec |
| Vladimir Vladimirov <vladimir.vladimirov@magnetic.com> |
| 2015-02-06 13:55:02 -0800 |
| Commit: b3872e0, github.com/apache/spark/pull/4403 |
| |
| [HOTFIX][MLLIB] fix a compilation error with java 6 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-06 13:52:35 -0800 |
| Commit: 65181b7, github.com/apache/spark/pull/4442 |
| |
| [SPARK-4983] Insert waiting time before tagging EC2 instances |
| GenTang <gen.tang86@gmail.com>, Gen TANG <gen.tang86@gmail.com> |
| 2015-02-06 13:27:34 -0800 |
| Commit: 0f3a360, github.com/apache/spark/pull/3986 |
| |
| [SPARK-5586][Spark Shell][SQL] Make `sqlContext` available in spark shell |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-06 13:20:10 -0800 |
| Commit: 3d3ecd7, github.com/apache/spark/pull/4387 |
| |
| [SPARK-5278][SQL] Introduce UnresolvedGetField and complete the check of ambiguous reference to fields |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-02-06 13:08:09 -0800 |
| Commit: 4793c84, github.com/apache/spark/pull/4068 |
| |
| [SQL][Minor] Remove cache keyword in SqlParser |
| wangfei <wangfei1@huawei.com> |
| 2015-02-06 12:42:23 -0800 |
| Commit: bc36356, github.com/apache/spark/pull/4393 |
| |
| [SQL][HiveConsole][DOC] HiveConsole `correct hiveconsole imports` |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-06 12:41:28 -0800 |
| Commit: b62c352, github.com/apache/spark/pull/4389 |
| |
| [SPARK-5595][SPARK-5603][SQL] Add a rule to do PreInsert type casting and field renaming and invalidating in memory cache after INSERT |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-06 12:38:07 -0800 |
| Commit: 3eccf29, github.com/apache/spark/pull/4373 |
| |
| [SPARK-5324][SQL] Results of describe can't be queried |
| OopsOutOfMemory <victorshengli@126.com>, Sheng, Li <OopsOutOfMemory@users.noreply.github.com> |
| 2015-02-06 12:33:20 -0800 |
| Commit: 0b7eb3f, github.com/apache/spark/pull/4249 |
| |
| [SPARK-5619][SQL] Support 'show roles' in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-02-06 12:29:26 -0800 |
| Commit: a958d60, github.com/apache/spark/pull/4397 |
| |
| [SPARK-5640] Synchronize ScalaReflection where necessary |
| Tobias Schlatter <tobias@meisch.ch> |
| 2015-02-06 12:15:02 -0800 |
| Commit: 500dc2b, github.com/apache/spark/pull/4431 |
| |
| [SPARK-5650][SQL] Support optional 'FROM' clause |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-06 12:13:44 -0800 |
| Commit: d433816, github.com/apache/spark/pull/4426 |
| |
| [SPARK-5628] Add version option to spark-ec2 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-06 12:08:22 -0800 |
| Commit: 70e5b03, github.com/apache/spark/pull/4414 |
| |
| [SPARK-2945][YARN][Doc]add doc for spark.executor.instances |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-02-06 11:57:02 -0800 |
| Commit: d34f79c, github.com/apache/spark/pull/4350 |
| |
| [SPARK-4361][Doc] Add more docs for Hadoop Configuration |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-06 11:50:20 -0800 |
| Commit: af2a2a2, github.com/apache/spark/pull/3225 |
| |
| [HOTFIX] Fix test build break in ExecutorAllocationManagerSuite. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:47:32 -0800 |
| Commit: fb6c0cb |
| |
| [SPARK-5652][Mllib] Use broadcasted weights in LogisticRegressionModel |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-06 11:22:11 -0800 |
| Commit: 80f3bcb, github.com/apache/spark/pull/4429 |
| |
| [SPARK-5555] Enable UISeleniumSuite tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:14:58 -0800 |
| Commit: 0d74bd7, github.com/apache/spark/pull/4334 |
| |
| SPARK-2450 Adds executor log links to Web UI |
| Kostas Sakellis <kostas@cloudera.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:13:00 -0800 |
| Commit: 32e964c, github.com/apache/spark/pull/3486 |
| |
| [SPARK-5618][Spark Core][Minor] Optimise utility code. |
| Makoto Fukuhara <fukuo33@gmail.com> |
| 2015-02-06 11:11:38 -0800 |
| Commit: 4cdb26c, github.com/apache/spark/pull/4396 |
| |
| [SPARK-5593][Core]Replace BlockManagerListener with ExecutorListener in ExecutorAllocationListener |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 11:09:37 -0800 |
| Commit: 6072fcc, github.com/apache/spark/pull/4369 |
| |
| [SPARK-4877] Allow user first classes to extend classes in the parent. |
| Stephen Haberman <stephen@exigencecorp.com> |
| 2015-02-06 11:03:56 -0800 |
| Commit: 9792bec, github.com/apache/spark/pull/3725 |
| |
| [SPARK-5396] Syntax error in spark scripts on windows. |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-02-06 10:58:26 -0800 |
| Commit: c01b985, github.com/apache/spark/pull/4428 |
| |
| [SPARK-5636] Ramp up faster in dynamic allocation |
| Andrew Or <andrew@databricks.com> |
| 2015-02-06 10:54:23 -0800 |
| Commit: fe3740c, github.com/apache/spark/pull/4409 |
| |
| SPARK-4337. [YARN] Add ability to cancel pending requests |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-06 10:53:16 -0800 |
| Commit: 1a88f20, github.com/apache/spark/pull/4141 |
| |
| [SPARK-5653][YARN] In ApplicationMaster rename isDriver to isClusterMode |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 10:48:31 -0800 |
| Commit: cc6e531, github.com/apache/spark/pull/4430 |
| |
| [SPARK-5013] [MLlib] Added documentation and sample data file for GaussianMixture |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-02-06 10:26:51 -0800 |
| Commit: 9ad56ad, github.com/apache/spark/pull/4401 |
| |
| [SPARK-5416] init Executor.threadPool before ExecutorSource |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-06 12:22:25 +0000 |
| Commit: 37d35ab, github.com/apache/spark/pull/4212 |
| |
| [Build] Set all Debian package permissions to 755 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-06 11:38:39 +0000 |
| Commit: cf6778e, github.com/apache/spark/pull/4277 |
| |
| Update ec2-scripts.md |
| Miguel Peralvo <miguel.peralvo@gmail.com> |
| 2015-02-06 11:04:48 +0000 |
| Commit: f827ef4, github.com/apache/spark/pull/4300 |
| |
| [SPARK-5470][Core]use defaultClassLoader to load classes in KryoSerializer |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 11:00:35 +0000 |
| Commit: ed3aac7, github.com/apache/spark/pull/4258 |
| |
| [SPARK-5582] [history] Ignore empty log directories. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-06 10:07:20 +0000 |
| Commit: 8569289, github.com/apache/spark/pull/4352 |
| |
| [SPARK-5157][YARN] Configure more JVM options properly when we use ConcMarkSweepGC for AM. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-06 09:39:12 +0000 |
| Commit: 24dbc50, github.com/apache/spark/pull/3956 |
| |
| [Minor] Remove permission for execution from spark-shell.cmd |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-06 09:33:36 +0000 |
| Commit: f6ba813, github.com/apache/spark/pull/3983 |
| |
| [SPARK-5380][GraphX] Solve an ArrayIndexOutOfBoundsException when building a graph with a file format error |
| Leolh <leosandylh@gmail.com> |
| 2015-02-06 09:01:53 +0000 |
| Commit: 575d2df, github.com/apache/spark/pull/4176 |
| |
| [SPARK-4789] [SPARK-4942] [SPARK-5031] [mllib] Standardize ML Prediction APIs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-05 23:43:47 -0800 |
| Commit: dc0c449, github.com/apache/spark/pull/3637 |
| |
| [SPARK-5604][MLLIB] remove checkpointDir from trees |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 23:32:09 -0800 |
| Commit: 6b88825, github.com/apache/spark/pull/4407 |
| |
| [SPARK-5639][SQL] Support DataFrame.renameColumn. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 23:02:40 -0800 |
| Commit: 7dc4965, github.com/apache/spark/pull/4410 |
| |
| Revert "SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2." |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-05 18:36:48 -0800 |
| Commit: 6d3b7cb |
| |
| SPARK-5557: Explicitly include servlet API in dependencies. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-05 18:14:54 -0800 |
| Commit: 793dbae, github.com/apache/spark/pull/4411 |
| |
| [HOTFIX] [SQL] Disables Metastore Parquet table conversion for "SQLQuerySuite.CTAS with serde" |
| Cheng Lian <lian@databricks.com> |
| 2015-02-05 18:09:18 -0800 |
| Commit: 7c0a648, github.com/apache/spark/pull/4413 |
| |
| [SPARK-5638][SQL] Add a config flag to disable eager analysis of DataFrames |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 18:07:10 -0800 |
| Commit: e8a5d50, github.com/apache/spark/pull/4408 |
| |
| [SPARK-5620][DOC] group methods in generated unidoc |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 16:26:51 -0800 |
| Commit: 85ccee8, github.com/apache/spark/pull/4404 |
| |
| [SPARK-5182] [SPARK-5528] [SPARK-5509] [SPARK-3575] [SQL] Parquet data source improvements |
| Cheng Lian <lian@databricks.com> |
| 2015-02-05 15:29:56 -0800 |
| Commit: a9ed511, github.com/apache/spark/pull/4308 |
| |
| [SPARK-5604][MLLIB] remove checkpointDir from LDA |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 15:07:33 -0800 |
| Commit: c19152c, github.com/apache/spark/pull/4390 |
| |
| [SPARK-5460][MLlib] Wrapped `Try` around `deleteAllCheckpoints` - RandomForest. |
| x1- <viva008@gmail.com> |
| 2015-02-05 15:02:04 -0800 |
| Commit: 62371ad, github.com/apache/spark/pull/4347 |
| |
| [SPARK-5135][SQL] Add support for describe table to DDL in SQLContext |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-05 13:07:48 -0800 |
| Commit: 4d8d070, github.com/apache/spark/pull/4227 |
| |
| [SPARK-5617][SQL] fix test failure of SQLQuerySuite |
| wangfei <wangfei1@huawei.com> |
| 2015-02-05 12:44:12 -0800 |
| Commit: a83936e, github.com/apache/spark/pull/4395 |
| |
| [Branch-1.3] [DOC] doc fix for date |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-05 12:42:27 -0800 |
| Commit: 6fa4ac1, github.com/apache/spark/pull/4400 |
| |
| SPARK-5548: Fixed a race condition in AkkaUtilsSuite |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-05 12:00:04 -0800 |
| Commit: 081ac69, github.com/apache/spark/pull/4343 |
| |
| [SPARK-5474][Build]curl should support URL redirection in build/mvn |
| GuoQiang Li <witgo@qq.com> |
| 2015-02-05 12:03:13 -0800 |
| Commit: 3414754, github.com/apache/spark/pull/4263 |
| |
| [SPARK-5608] Improve SEO of Spark documentation pages |
| Matei Zaharia <matei@databricks.com> |
| 2015-02-05 11:12:50 -0800 |
| Commit: 4d74f06, github.com/apache/spark/pull/4381 |
| |
| SPARK-4687. Add a recursive option to the addFile API |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-05 10:15:55 -0800 |
| Commit: c4b1108, github.com/apache/spark/pull/3670 |
| |
| [HOTFIX] MLlib build break. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 00:42:50 -0800 |
| Commit: 6580929 |
| |
| [MLlib] Minor: UDF style update. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 23:57:53 -0800 |
| Commit: c3ba4d4, github.com/apache/spark/pull/4388 |
| |
| [SPARK-5612][SQL] Move DataFrame implicit functions into SQLContext.implicits. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 23:44:34 -0800 |
| Commit: 7d789e1, github.com/apache/spark/pull/4386 |
| |
| [SPARK-5606][SQL] Support plus sign in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-02-04 23:16:01 -0800 |
| Commit: 9d3a75e, github.com/apache/spark/pull/4378 |
| |
| [SPARK-5599] Check MLlib public APIs for 1.3 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-04 23:03:47 -0800 |
| Commit: db34690, github.com/apache/spark/pull/4377 |
| |
| [SPARK-5596] [mllib] ML model import/export for GLMs, NaiveBayes |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-04 22:46:48 -0800 |
| Commit: 975bcef, github.com/apache/spark/pull/4233 |
| |
| SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-04 22:39:44 -0800 |
| Commit: c23ac03, github.com/apache/spark/pull/4383 |
| |
| [SPARK-5602][SQL] Better support for creating DataFrame from local data collection |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:53:57 -0800 |
| Commit: 84acd08, github.com/apache/spark/pull/4372 |
| |
| [SPARK-5538][SQL] Fix flaky CachedTableSuite |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:52:41 -0800 |
| Commit: 206f9bc, github.com/apache/spark/pull/4379 |
| |
| [SQL][DataFrame] Minor cleanup. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:51:48 -0800 |
| Commit: 6b4c7f0, github.com/apache/spark/pull/4374 |
| |
| [SPARK-4520] [SQL] This pr fixes the ArrayIndexOutOfBoundsException as r... |
| Sadhan Sood <sadhan@tellapart.com> |
| 2015-02-04 19:18:06 -0800 |
| Commit: dba98bf, github.com/apache/spark/pull/4148 |
| |
| [SPARK-5605][SQL][DF] Allow using String to specify column name in DSL aggregate functions |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 18:35:51 -0800 |
| Commit: 1fbd124, github.com/apache/spark/pull/4376 |
| |
| [SPARK-5411] Allow SparkListeners to be specified in SparkConf and loaded when creating SparkContext |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-04 17:18:03 -0800 |
| Commit: 9a7ce70, github.com/apache/spark/pull/4111 |
| |
| [SPARK-5577] Python udf for DataFrame |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 15:55:09 -0800 |
| Commit: dc101b0, github.com/apache/spark/pull/4351 |
| |
| [SPARK-5118][SQL] Fix: create table test stored as parquet as select .. |
| guowei2 <guowei2@asiainfo.com> |
| 2015-02-04 15:26:10 -0800 |
| Commit: e0490e2, github.com/apache/spark/pull/3921 |
| |
| [SQL] Use HiveContext's sessionState in HiveMetastoreCatalog.hiveDefaultTableFilePath |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-04 15:22:40 -0800 |
| Commit: 548c9c2, github.com/apache/spark/pull/4355 |
| |
| [SQL] Correct the default size of TimestampType and expose NumericType |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-04 15:14:49 -0800 |
| Commit: 0d81645, github.com/apache/spark/pull/4314 |
| |
| [SQL][Hiveconsole] Bring hive console code up to date and update README.md |
| OopsOutOfMemory <victorshengli@126.com>, Sheng, Li <OopsOutOfMemory@users.noreply.github.com> |
| 2015-02-04 15:13:54 -0800 |
| Commit: b73d5ff, github.com/apache/spark/pull/4330 |
| |
| [SPARK-5367][SQL] Support star expression in udfs |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-02-04 15:12:07 -0800 |
| Commit: 417d111, github.com/apache/spark/pull/4353 |
| |
| [SPARK-5426][SQL] Add SparkSQL Java API helper methods. |
| kul <kuldeep.bora@gmail.com> |
| 2015-02-04 15:08:37 -0800 |
| Commit: 424cb69, github.com/apache/spark/pull/4243 |
| |
| [SPARK-5587][SQL] Support change database owner |
| wangfei <wangfei1@huawei.com> |
| 2015-02-04 14:35:12 -0800 |
| Commit: b90dd39, github.com/apache/spark/pull/4357 |
| |
| [SPARK-5591][SQL] Fix NoSuchObjectException for CTAS |
| wangfei <wangfei1@huawei.com> |
| 2015-02-04 14:33:07 -0800 |
| Commit: a9f0db1, github.com/apache/spark/pull/4365 |
| |
| [SPARK-4939] move to next locality when no pending tasks |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 14:22:07 -0800 |
| Commit: 0a89b15, github.com/apache/spark/pull/3779 |
| |
| [SPARK-4707][STREAMING] Reliable Kafka Receiver can lose data if the blo... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-04 14:20:44 -0800 |
| Commit: f0500f9, github.com/apache/spark/pull/3655 |
| |
| [SPARK-4964] [Streaming] Exactly-once semantics for Kafka |
| cody koeninger <cody@koeninger.org> |
| 2015-02-04 12:06:34 -0800 |
| Commit: b0c0021, github.com/apache/spark/pull/3798 |
| |
| [SPARK-5588] [SQL] support select/filter by SQL expression |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 11:34:46 -0800 |
| Commit: ac0b2b7, github.com/apache/spark/pull/4359 |
| |
| [SPARK-5585] Flaky test in MLlib python |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 08:54:20 -0800 |
| Commit: 38a416f, github.com/apache/spark/pull/4358 |
| |
| [SPARK-5574] use given name prefix in dir |
| Imran Rashid <irashid@cloudera.com> |
| 2015-02-04 01:02:20 -0800 |
| Commit: 5aa0f21, github.com/apache/spark/pull/4344 |
| |
| [Minor] Fix incorrect warning log |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-04 00:52:41 -0800 |
| Commit: a74cbbf, github.com/apache/spark/pull/4360 |
| |
| [SPARK-5379][Streaming] Add awaitTerminationOrTimeout |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-04 00:40:28 -0800 |
| Commit: 4cf4cba, github.com/apache/spark/pull/4171 |
| |
| [SPARK-5341] Use maven coordinates as dependencies in spark-shell and spark-submit |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-03 22:39:17 -0800 |
| Commit: 6aed719, github.com/apache/spark/pull/4215 |
| |
| [SPARK-4939] revive offers periodically in LocalBackend |
| Davies Liu <davies@databricks.com> |
| 2015-02-03 22:30:23 -0800 |
| Commit: 83de71c, github.com/apache/spark/pull/4147 |
| |
| [SPARK-4969][STREAMING][PYTHON] Add binaryRecords to streaming |
| freeman <the.freeman.lab@gmail.com> |
| 2015-02-03 22:24:30 -0800 |
| Commit: 242b4f0, github.com/apache/spark/pull/3803 |
| |
| [SPARK-5579][SQL][DataFrame] Support for project/filter using SQL expressions |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 22:15:35 -0800 |
| Commit: 40c4cb2, github.com/apache/spark/pull/4348 |
| |
| [FIX][MLLIB] fix seed handling in Python GMM |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-03 20:39:11 -0800 |
| Commit: eb15631, github.com/apache/spark/pull/4349 |
| |
| [SPARK-4795][Core] Redesign the "primitive type => Writable" implicit APIs to make them be activated automatically |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-03 20:17:12 -0800 |
| Commit: d37978d, github.com/apache/spark/pull/3642 |
| |
| [SPARK-5578][SQL][DataFrame] Provide a convenient way for Scala users to use UDFs |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 20:07:46 -0800 |
| Commit: 1077f2e, github.com/apache/spark/pull/4345 |
| |
| [SPARK-5520][MLlib] Make FP-Growth implementation take generic item types (WIP) |
| Jacky Li <jacky.likun@huawei.com>, Jacky Li <jackylk@users.noreply.github.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-03 17:02:42 -0800 |
| Commit: e380d2d, github.com/apache/spark/pull/4340 |
| |
| [SPARK-5554] [SQL] [PySpark] add more tests for DataFrame Python API |
| Davies Liu <davies@databricks.com> |
| 2015-02-03 16:01:56 -0800 |
| Commit: 068c0e2, github.com/apache/spark/pull/4331 |
| |
| [STREAMING] SPARK-4986 Wait for receivers to deregister and receiver job to terminate |
| Jesper Lundgren <jesper.lundgren@vpon.com> |
| 2015-02-03 14:53:39 -0800 |
| Commit: 1e8b539, github.com/apache/spark/pull/4338 |
| |
| [SPARK-5153][Streaming][Test] Increased timeout to deal with flaky KafkaStreamSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-03 13:46:02 -0800 |
| Commit: 681f9df, github.com/apache/spark/pull/4342 |
| |
| [SPARK-4508] [SQL] build native date type to conform behavior to Hive |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-03 12:21:45 -0800 |
| Commit: db821ed, github.com/apache/spark/pull/4325 |
| |
| [SPARK-5383][SQL] Support alias for udtfs |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-02-03 12:16:31 -0800 |
| Commit: 5adbb39, github.com/apache/spark/pull/4186 |
| |
| [SPARK-5550] [SQL] Support the case insensitive for UDF |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-03 12:12:26 -0800 |
| Commit: ca7a6cd, github.com/apache/spark/pull/4326 |
| |
| [SPARK-4987] [SQL] parquet timestamp type support |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-03 12:06:06 -0800 |
| Commit: 0c20ce6, github.com/apache/spark/pull/3820 |
| |
| |
| Release 1.3.1 |
| |
| [SQL] Use path.makeQualified in newParquet. |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-04 23:26:10 +0800 |
| Commit: eb57d4f, github.com/apache/spark/pull/5353 |
| |
| [SPARK-6700] disable flaky test |
| Davies Liu <davies@databricks.com> |
| 2015-04-03 15:22:21 -0700 |
| Commit: 3366af6, github.com/apache/spark/pull/5356 |
| |
| [SPARK-6688] [core] Always use resolved URIs in EventLoggingListener. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-03 11:54:31 -0700 |
| Commit: f17a2fe, github.com/apache/spark/pull/5340 |
| |
| [SPARK-6575][SQL] Converted Parquet Metastore tables no longer cache metadata |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-03 14:40:36 +0800 |
| Commit: 0c1b78b, github.com/apache/spark/pull/5339 |
| |
| [SPARK-6621][Core] Fix the bug that calling EventLoop.stop in EventLoop.onReceive/onError/onStart doesn't call onStop |
| zsxwing <zsxwing@gmail.com> |
| 2015-04-02 22:54:30 -0700 |
| Commit: ac705aa, github.com/apache/spark/pull/5280 |
| |
| [SPARK-6345][STREAMING][MLLIB] Fix for training with prediction |
| freeman <the.freeman.lab@gmail.com> |
| 2015-04-02 21:37:44 -0700 |
| Commit: d21f779, github.com/apache/spark/pull/5037 |
| |
| [CORE] The description of the jobHistory config should be spark.history.fs.logDirectory |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-04-02 20:24:31 -0700 |
| Commit: 17ab6b0, github.com/apache/spark/pull/5332 |
| |
| [SPARK-6575][SQL] Converted Parquet Metastore tables no longer cache metadata |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-02 20:23:08 -0700 |
| Commit: 0c1c0fb, github.com/apache/spark/pull/5339 |
| |
| [SPARK-6650] [core] Stop ExecutorAllocationManager when context stops. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-02 19:48:55 -0700 |
| Commit: 0ef46b2, github.com/apache/spark/pull/5311 |
| |
| [SPARK-6686][SQL] Use resolved output instead of names for toDF rename |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-02 18:30:55 -0700 |
| Commit: 2927af1, github.com/apache/spark/pull/5337 |
| |
| [SPARK-6672][SQL] convert row to catalyst in createDataFrame(RDD[Row], ...) |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-02 17:57:01 +0800 |
| Commit: c2694bb, github.com/apache/spark/pull/5329 |
| |
| [SPARK-6618][SPARK-6669][SQL] Lock Hive metastore client correctly. |
| Yin Huai <yhuai@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-04-02 16:46:50 -0700 |
| Commit: e6ee95c, github.com/apache/spark/pull/5333 |
| |
| [Minor] [SQL] Follow-up of PR #5210 |
| Cheng Lian <lian@databricks.com> |
| 2015-04-02 16:15:34 -0700 |
| Commit: 4f1fe3f, github.com/apache/spark/pull/5219 |
| |
| [SPARK-6655][SQL] We need to read the schema of a data source table stored in spark.sql.sources.schema property |
| Yin Huai <yhuai@databricks.com> |
| 2015-04-02 16:02:31 -0700 |
| Commit: aecec07, github.com/apache/spark/pull/5313 |
| |
| [SQL] Throw UnsupportedOperationException instead of NotImplementedError |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-02 16:01:03 -0700 |
| Commit: 78ba245, github.com/apache/spark/pull/5315 |
| |
| SPARK-6414: Spark driver failed with NPE on job cancelation |
| Hung Lin <hung.lin@gmail.com> |
| 2015-04-02 14:01:43 -0700 |
| Commit: 58e2b3f, github.com/apache/spark/pull/5124 |
| |
| [SPARK-6079] Use index to speed up StatusTracker.getJobIdsForGroup() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-25 17:40:00 -0700 |
| Commit: a6664dc, github.com/apache/spark/pull/4830 |
| |
| [SPARK-6667] [PySpark] remove setReuseAddress |
| Davies Liu <davies@databricks.com> |
| 2015-04-02 12:18:33 -0700 |
| Commit: ee2bd70, github.com/apache/spark/pull/5324 |
| |
| Revert "[SPARK-6618][SQL] HiveMetastoreCatalog.lookupRelation should use fine-grained lock" |
| Cheng Lian <lian@databricks.com> |
| 2015-04-02 12:59:38 +0800 |
| Commit: 1160cc9 |
| |
| [SQL] SPARK-6658: Update DataFrame documentation to refer to correct types |
| Michael Armbrust <michael@databricks.com> |
| 2015-04-01 18:00:07 -0400 |
| Commit: 223dd3f |
| |
| [SPARK-6578] Small rewrite to make the logic more clear in MessageWithHeader.transferTo. |
| Reynold Xin <rxin@databricks.com> |
| 2015-04-01 18:36:06 -0700 |
| Commit: d697b76, github.com/apache/spark/pull/5319 |
| |
| [SPARK-6660][MLLIB] pythonToJava doesn't recognize object arrays |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 18:17:07 -0700 |
| Commit: 0d1e476, github.com/apache/spark/pull/5318 |
| |
| [SPARK-6553] [pyspark] Support functools.partial as UDF |
| ksonj <kson@siberie.de> |
| 2015-04-01 17:23:57 -0700 |
| Commit: 98f72df, github.com/apache/spark/pull/5206 |
| |
| [SPARK-6642][MLLIB] use 1.2 lambda scaling and remove addImplicit from NormalEquation |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 16:47:18 -0700 |
| Commit: bc04fa2, github.com/apache/spark/pull/5314 |
| |
| [SPARK-6578] [core] Fix thread-safety issue in outbound path of network library. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-04-01 16:06:11 -0700 |
| Commit: 1c31ebd, github.com/apache/spark/pull/5234 |
| |
| [SPARK-6657] [Python] [Docs] fixed python doc build warnings |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-04-01 15:15:47 -0700 |
| Commit: e347a7a, github.com/apache/spark/pull/5317 |
| |
| [SPARK-6651][MLLIB] delegate dense vector arithmetics to the underlying numpy array |
| Xiangrui Meng <meng@databricks.com> |
| 2015-04-01 13:29:04 -0700 |
| Commit: f50d95a, github.com/apache/spark/pull/5312 |
| |
| SPARK-6626 [DOCS]: Corrected Scala:TwitterUtils parameters |
| jayson <jayson@ziprecruiter.com> |
| 2015-04-01 11:12:55 +0100 |
| Commit: 7d029cb, github.com/apache/spark/pull/5295 |
| |
| [Doc] Improve Python DataFrame documentation |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 18:31:36 -0700 |
| Commit: e527b35, github.com/apache/spark/pull/5287 |
| |
| [SPARK-6614] OutputCommitCoordinator should clear authorized committer only after authorized committer fails, not after any failure |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-31 16:18:39 -0700 |
| Commit: c4c982a, github.com/apache/spark/pull/5276 |
| |
| [SPARK-6633][SQL] Should be "Contains" instead of "EndsWith" when constructing sources.StringContains |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-31 13:18:07 -0700 |
| Commit: d851646, github.com/apache/spark/pull/5299 |
| |
| [SPARK-5371][SQL] Propagate types after function conversion, before further resolution |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-31 11:34:29 -0700 |
| Commit: 5a957fe, github.com/apache/spark/pull/5278 |
| |
| [SPARK-6145][SQL] fix ORDER BY on nested fields |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-31 11:23:18 -0700 |
| Commit: 045228f, github.com/apache/spark/pull/5189 |
| |
| [SPARK-6575] [SQL] Adds configuration to disable schema merging while converting metastore Parquet tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 11:21:15 -0700 |
| Commit: 778c876, github.com/apache/spark/pull/5231 |
| |
| [SPARK-6555] [SQL] Overrides equals() and hashCode() for MetastoreRelation |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 11:18:25 -0700 |
| Commit: 9ebefb1, github.com/apache/spark/pull/5289 |
| |
| [SPARK-6618][SQL] HiveMetastoreCatalog.lookupRelation should use fine-grained lock |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-31 16:28:40 +0800 |
| Commit: fd600ce, github.com/apache/spark/pull/5281 |
| |
| [SPARK-6623][SQL] Alias DataFrame.na.drop and DataFrame.na.fill in Python. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 00:25:23 -0700 |
| Commit: cf651a4, github.com/apache/spark/pull/5284 |
| |
| [SPARK-6625][SQL] Add common string filters to data sources. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-31 00:19:51 -0700 |
| Commit: a97d4e6, github.com/apache/spark/pull/5285 |
| |
| [SPARK-6119][SQL] DataFrame support for missing data handling |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-30 20:47:10 -0700 |
| Commit: 67c885e, github.com/apache/spark/pull/5274 |
| |
| [SPARK-6369] [SQL] Uses commit coordinator to help committing Hive and Parquet tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-31 07:48:37 +0800 |
| Commit: fedbfc7, github.com/apache/spark/pull/5139 |
| |
| [SPARK-6603] [PySpark] [SQL] add SQLContext.udf and deprecate inferSchema() and applySchema |
| Davies Liu <davies@databricks.com> |
| 2015-03-30 15:47:00 -0700 |
| Commit: 30e7c63, github.com/apache/spark/pull/5273 |
| |
| [SPARK-6592][SQL] fix filter for scaladoc to generate API doc for Row class under catalyst dir |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-03-30 11:54:44 -0700 |
| Commit: f9d4efa, github.com/apache/spark/pull/5252 |
| |
| [SPARK-6571][MLLIB] use wrapper in MatrixFactorizationModel.load |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-28 15:08:05 -0700 |
| Commit: 93a7166, github.com/apache/spark/pull/5243 |
| |
| [SPARK-6595][SQL] MetastoreRelation should be a MultiInstanceRelation |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-30 22:24:12 +0800 |
| Commit: c411530, github.com/apache/spark/pull/5251 |
| |
| [SPARK-6558] Utils.getCurrentUserName returns the full principal name instead of login name |
| Thomas Graves <tgraves@apache.org> |
| 2015-03-29 12:43:30 +0100 |
| Commit: f8132de, github.com/apache/spark/pull/5229 |
| |
| [SPARK-5750][SPARK-3441][SPARK-5836][CORE] Added documentation explaining shuffle |
| Ilya Ganelin <ilya.ganelin@capitalone.com>, Ilya Ganelin <ilganeli@gmail.com> |
| 2015-03-30 11:52:02 +0100 |
| Commit: 1c59a4b, github.com/apache/spark/pull/5074 |
| |
| [spark-sql] a better exception message than "scala.MatchError" for unsupported types in Schema creation |
| Eran Medan <ehrann.mehdan@gmail.com> |
| 2015-03-30 00:02:52 -0700 |
| Commit: 4859c40, github.com/apache/spark/pull/5235 |
| |
| [HOTFIX] Build break due to NoRelation cherry-pick. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-29 12:07:28 -0700 |
| Commit: 6181366 |
| |
| [DOC] Improvements to Python docs. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-28 23:59:27 -0700 |
| Commit: 3db0844, github.com/apache/spark/pull/5238 |
| |
| [SPARK-6538][SQL] Add missing nullable Metastore fields when merging a Parquet schema |
| Adam Budde <budde@amazon.com> |
| 2015-03-28 09:14:09 +0800 |
| Commit: 5e04f45, github.com/apache/spark/pull/5214 |
| |
| [SPARK-6564][SQL] SQLContext.emptyDataFrame should contain 0 row, not 1 row |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-27 14:56:57 -0700 |
| Commit: 7006858, github.com/apache/spark/pull/5226 |
| |
| [SPARK-6544][build] Increment Avro version from 1.7.6 to 1.7.7 |
| Dean Chen <deanchen5@gmail.com> |
| 2015-03-27 14:32:51 +0000 |
| Commit: fefd49f, github.com/apache/spark/pull/5193 |
| |
| [SPARK-6574] [PySpark] fix sql example |
| Davies Liu <davies@databricks.com> |
| 2015-03-27 11:42:26 -0700 |
| Commit: b902a95, github.com/apache/spark/pull/5230 |
| |
| [SPARK-6550][SQL] Use analyzed plan in DataFrame |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-27 11:40:00 -0700 |
| Commit: bc75189, github.com/apache/spark/pull/5217 |
| |
| [SPARK-6341][mllib] Upgrade breeze from 0.11.1 to 0.11.2 |
| Yu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2015-03-27 00:15:02 -0700 |
| Commit: b318858, github.com/apache/spark/pull/5222 |
| |
| [DOCS][SQL] Fix JDBC example |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-26 14:51:46 -0700 |
| Commit: 54d92b5, github.com/apache/spark/pull/5192 |
| |
| [SPARK-6554] [SQL] Don't push down predicates which reference partition column(s) |
| Cheng Lian <lian@databricks.com> |
| 2015-03-26 13:11:37 -0700 |
| Commit: 3d54578, github.com/apache/spark/pull/5210 |
| |
| [SPARK-6117] [SQL] Improvements to DataFrame.describe() |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-26 12:26:13 -0700 |
| Commit: 28e3a1e, github.com/apache/spark/pull/5201 |
| |
| [SPARK-6117] [SQL] add describe function to DataFrame for summary statis... |
| azagrebin <azagrebin@gmail.com> |
| 2015-03-26 00:25:04 -0700 |
| Commit: 84735c3, github.com/apache/spark/pull/5073 |
| |
| SPARK-6480 [CORE] histogram() bucket function is wrong in some simple edge cases |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-26 15:00:23 +0000 |
| Commit: aa2d157, github.com/apache/spark/pull/5148 |
| |
| [SPARK-6491] Spark will put the current working dir to the CLASSPATH |
| guliangliang <guliangliang@qiyi.com> |
| 2015-03-26 13:28:56 +0000 |
| Commit: 5b5f0e2, github.com/apache/spark/pull/5156 |
| |
| [SQL][SPARK-6471]: Metastore schema should only be a subset of parquet schema to support dropping of columns using replace columns |
| Yash Datta <Yash.Datta@guavus.com> |
| 2015-03-26 21:13:38 +0800 |
| Commit: 836c921, github.com/apache/spark/pull/5141 |
| |
| [SPARK-6465][SQL] Fix serialization of GenericRowWithSchema using kryo |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-26 18:46:57 +0800 |
| Commit: 8254996, github.com/apache/spark/pull/5191 |
| |
| [SPARK-6536] [PySpark] Column.inSet() in Python |
| Davies Liu <davies@databricks.com> |
| 2015-03-26 00:01:24 -0700 |
| Commit: 0ba7599, github.com/apache/spark/pull/5190 |
| |
| [SPARK-6463][SQL] AttributeSet.equal should compare size |
| sisihj <jun.hejun@huawei.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-25 19:21:54 -0700 |
| Commit: 9edb34f, github.com/apache/spark/pull/5194 |
| |
| [SPARK-6450] [SQL] Fixes metastore Parquet table conversion |
| Cheng Lian <lian@databricks.com> |
| 2015-03-25 17:40:19 -0700 |
| Commit: 0cd4748, github.com/apache/spark/pull/5183 |
| |
| [SPARK-6409][SQL] It is not necessary to avoid the old interface of Hive, because this will make some UDAFs not work. |
| DoingDone9 <799203320@qq.com> |
| 2015-03-25 11:11:52 -0700 |
| Commit: 4efa6c5, github.com/apache/spark/pull/5131 |
| |
| SPARK-6063 MLlib doesn't pass mvn scalastyle check due to UTF chars in LDAModel.scala |
| Michael Griffiths <msjgriffiths@gmail.com>, Griffiths, Michael (NYC-RPM) <michael.griffiths@reprisemedia.com> |
| 2015-02-28 14:47:39 +0000 |
| Commit: 6791f42, github.com/apache/spark/pull/4815 |
| |
| [SPARK-6496] [MLLIB] GeneralizedLinearAlgorithm.run(input, initialWeights) should initialize numFeatures |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-25 17:05:56 +0000 |
| Commit: 2be4255, github.com/apache/spark/pull/5167 |
| |
| [DOCUMENTATION]Fixed Missing Type Import in Documentation |
| Bill Chambers <wchambers@ischool.berkeley.edu>, anabranch <wac.chambers@gmail.com> |
| 2015-03-24 22:24:35 -0700 |
| Commit: 8e4e2e3, github.com/apache/spark/pull/5179 |
| |
| [SPARK-6469] Improving documentation on YARN local directories usage |
| Christophe Préaud <christophe.preaud@kelkoo.com> |
| 2015-03-24 17:05:49 -0700 |
| Commit: 6af9408, github.com/apache/spark/pull/5165 |
| |
| [SPARK-3570] Include time to open files in shuffle write time. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-03-24 16:29:40 -0700 |
| Commit: e4db5a3, github.com/apache/spark/pull/4550 |
| |
| [SPARK-6088] Correct how tasks that get remote results are shown in UI. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-03-24 16:26:43 -0700 |
| Commit: de8b2d4, github.com/apache/spark/pull/4839 |
| |
| [SPARK-6428][SQL] Added explicit types for all public methods in catalyst |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-24 16:03:55 -0700 |
| Commit: 586e0d9, github.com/apache/spark/pull/5162 |
| |
| [SPARK-6209] Clean up connections in ExecutorClassLoader after failing to load classes (master branch PR) |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-24 14:38:20 -0700 |
| Commit: dcf56aa, github.com/apache/spark/pull/4944 |
| |
| [SPARK-6458][SQL] Better error messages for invalid data sources |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 14:10:56 -0700 |
| Commit: f48c16d, github.com/apache/spark/pull/5158 |
| |
| [SPARK-6376][SQL] Avoid eliminating subqueries until optimization |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 14:08:20 -0700 |
| Commit: df671bc, github.com/apache/spark/pull/5160 |
| |
| [SPARK-6375][SQL] Fix formatting of error messages. |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 13:22:46 -0700 |
| Commit: 92bf888, github.com/apache/spark/pull/5155 |
| |
| Revert "[SPARK-5680][SQL] Sum function on all null values, should return zero" |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:32:25 -0700 |
| Commit: 930b667 |
| |
| [SPARK-6054][SQL] Fix transformations of TreeNodes that hold StructTypes |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:28:01 -0700 |
| Commit: c699e2b, github.com/apache/spark/pull/5157 |
| |
| [SPARK-6437][SQL] Use completion iterator to close external sorter |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:10:30 -0700 |
| Commit: c0101d3, github.com/apache/spark/pull/5161 |
| |
| [SPARK-6459][SQL] Warn when constructing trivially true equals predicate |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-24 12:09:02 -0700 |
| Commit: f0141ca, github.com/apache/spark/pull/5163 |
| |
| [SPARK-5955][MLLIB] add checkpointInterval to ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-20 15:02:57 -0400 |
| Commit: bc92a2e, github.com/apache/spark/pull/5076 |
| |
| [ML][docs][minor] Define LabeledDocument/Document classes in CV example |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-03-24 16:33:38 +0000 |
| Commit: 4ff5771, github.com/apache/spark/pull/5135 |
| |
| [SPARK-5559] [Streaming] [Test] Remove opportunity we met flakiness when running FlumeStreamSuite |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-03-24 16:13:25 +0000 |
| Commit: 8722369, github.com/apache/spark/pull/4337 |
| |
| Update the command to use IPython notebook |
| Cong Yue <yuecong1104@gmail.com> |
| 2015-03-24 12:56:13 +0000 |
| Commit: e545143, github.com/apache/spark/pull/5111 |
| |
| [SPARK-6452] [SQL] Checks for missing attributes and unresolved operator for all types of operator |
| Cheng Lian <lian@databricks.com> |
| 2015-03-24 01:12:11 -0700 |
| Commit: 6f10142, github.com/apache/spark/pull/5129 |
| |
| [SPARK-6124] Support jdbc connection properties in OPTIONS part of the query |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-23 17:00:27 -0700 |
| Commit: 04b2078, github.com/apache/spark/pull/4859 |
| |
| [SPARK-6397][SQL] Check the missingInput simply |
| Yadong Qi <qiyadong2010@gmail.com> |
| 2015-03-23 18:16:49 +0800 |
| Commit: a29f493, github.com/apache/spark/pull/5132 |
| |
| [SPARK-4985] [SQL] parquet support for date type |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-23 11:46:16 +0800 |
| Commit: 60b9b96, github.com/apache/spark/pull/3822 |
| |
| [SPARK-6337][Documentation, SQL]Spark 1.3 doc fixes |
| vinodkc <vinod.kc.in@gmail.com> |
| 2015-03-22 20:00:08 +0000 |
| Commit: 857e8a6, github.com/apache/spark/pull/5112 |
| |
| SPARK-6454 [DOCS] Fix links to pyspark api |
| Kamil Smuga <smugakamil@gmail.com>, stderr <smugakamil@gmail.com> |
| 2015-03-22 15:56:25 +0000 |
| Commit: 3ba295f, github.com/apache/spark/pull/5120 |
| |
| [SPARK-6408] [SQL] Fix JDBCRDD filtering string literals |
| ypcat <ypcat6@gmail.com>, Pei-Lun Lee <pllee@appier.com> |
| 2015-03-22 15:49:13 +0800 |
| Commit: e60fbf6, github.com/apache/spark/pull/5087 |
| |
| [SPARK-6428][SQL] Added explicit type for all public methods for Hive module |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-21 14:30:04 -0700 |
| Commit: 0021d22, github.com/apache/spark/pull/5108 |
| |
| [SPARK-6428][SQL] Added explicit type for all public methods in sql/core |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-20 15:47:07 -0700 |
| Commit: c964588, github.com/apache/spark/pull/5104 |
| |
| [SPARK-6250][SPARK-6146][SPARK-5911][SQL] Types are now reserved words in DDL parser. |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-21 13:27:53 -0700 |
| Commit: 102daaf, github.com/apache/spark/pull/5078 |
| |
| [SPARK-5680][SQL] Sum function on all null values, should return zero |
| Venkata Ramana G <ramana.gollamudi@huawei.com>, Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-03-21 13:24:24 -0700 |
| Commit: 93975a3, github.com/apache/spark/pull/4466 |
| |
| [SPARK-5320][SQL]Add statistics method at NoRelation (override super). |
| x1- <viva008@gmail.com> |
| 2015-03-21 13:22:34 -0700 |
| Commit: cba6842, github.com/apache/spark/pull/5105 |
| |
| [SPARK-5821] [SQL] JSON CTAS command should throw error message when delete path failure |
| Yanbo Liang <ybliang8@gmail.com>, Yanbo Liang <yanbohappy@gmail.com> |
| 2015-03-21 11:23:28 +0800 |
| Commit: 8de90c7, github.com/apache/spark/pull/4610 |
| |
| [SPARK-6315] [SQL] Also tries the case class string parser while reading Parquet schema |
| Cheng Lian <lian@databricks.com> |
| 2015-03-21 11:18:45 +0800 |
| Commit: b75943f, github.com/apache/spark/pull/5034 |
| |
| [SPARK-5821] [SQL] ParquetRelation2 CTAS should check if delete is successful |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-21 10:53:04 +0800 |
| Commit: df83e21, github.com/apache/spark/pull/5107 |
| |
| [SPARK-6421][MLLIB] _regression_train_wrapper does not test initialWeights correctly |
| lewuathe <lewuathe@me.com> |
| 2015-03-20 17:18:18 -0400 |
| Commit: aff9f8d, github.com/apache/spark/pull/5101 |
| |
| [SPARK-6286][Mesos][minor] Handle missing Mesos case TASK_ERROR |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-20 12:24:34 +0000 |
| Commit: db812d9, github.com/apache/spark/pull/5088 |
| |
| [SPARK-6222][Streaming] Dont delete checkpoint data when doing pre-batch-start checkpoint |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-19 02:15:50 -0400 |
| Commit: 03e263f, github.com/apache/spark/pull/5008 |
| |
| [SPARK-6325] [core,yarn] Do not change target executor count when killing executors. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-18 09:18:28 -0400 |
| Commit: 1723f05, github.com/apache/spark/pull/5018 |
| |
| [SPARK-6286][minor] Handle missing Mesos case TASK_ERROR. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-03-18 09:15:33 -0400 |
| Commit: ff0a7f4, github.com/apache/spark/pull/5000 |
| |
| [SPARK-6247][SQL] Fix resolution of ambiguous joins caused by new aliases |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-17 19:47:51 -0700 |
| Commit: ba8352c, github.com/apache/spark/pull/5062 |
| |
| [SPARK-6383][SQL]Fixed compiler and errors in Dataframe examples |
| Tijo Thomas <tijoparacka@gmail.com> |
| 2015-03-17 18:50:19 -0700 |
| Commit: cee6d08, github.com/apache/spark/pull/5068 |
| |
| [SPARK-6366][SQL] In Python API, the default save mode for save and saveAsTable should be "error" instead of "append". |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-18 09:41:06 +0800 |
| Commit: 3ea38bc, github.com/apache/spark/pull/5053 |
| |
| [SPARK-6330] [SQL] Add a test case for SPARK-6330 |
| Pei-Lun Lee <pllee@appier.com> |
| 2015-03-18 08:34:46 +0800 |
| Commit: 9d88f0c, github.com/apache/spark/pull/5039 |
| |
| [SPARK-6336] LBFGS should document what convergenceTol means |
| lewuathe <lewuathe@me.com> |
| 2015-03-17 12:11:57 -0700 |
| Commit: 476c4e1, github.com/apache/spark/pull/5033 |
| |
| [SPARK-6365] jetty-security also needed for SPARK_PREPEND_CLASSES to work |
| Imran Rashid <irashid@cloudera.com> |
| 2015-03-17 12:03:54 -0500 |
| Commit: ac0e7cc, github.com/apache/spark/pull/5071 |
| |
| [SPARK-6313] Add config option to disable file locks/fetchFile cache to ... |
| nemccarthy <nathan@nemccarthy.me> |
| 2015-03-17 09:33:11 -0700 |
| Commit: febb123, github.com/apache/spark/pull/5036 |
| |
| [SPARK-3266] Use intermediate abstract classes to fix type erasure issues in Java APIs |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-17 09:18:57 -0700 |
| Commit: 29e39e1, github.com/apache/spark/pull/5050 |
| |
| [SPARK-6331] Load new master URL if present when recovering streaming context from checkpoint |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-17 05:31:27 -0700 |
| Commit: 95f8d1c, github.com/apache/spark/pull/5024 |
| |
| [SQL][docs][minor] Fixed sample code in SQLContext scaladoc |
| Lomig Mégard <lomig.megard@gmail.com> |
| 2015-03-16 23:52:42 -0700 |
| Commit: 426816b, github.com/apache/spark/pull/5051 |
| |
| [SPARK-6299][CORE] ClassNotFoundException in standalone mode when running groupByKey with class defined in REPL |
| Kevin (Sangwoo) Kim <sangwookim.me@gmail.com> |
| 2015-03-16 23:49:23 -0700 |
| Commit: 5c16ced, github.com/apache/spark/pull/5046 |
| |
| [SPARK-6077] Remove streaming tab while stopping StreamingContext |
| lisurprise <zhichao.li@intel.com> |
| 2015-03-16 13:10:32 -0700 |
| Commit: 47cce98, github.com/apache/spark/pull/4828 |
| |
| [SPARK-6330] Fix filesystem bug in newParquet relation |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-16 12:13:18 -0700 |
| Commit: 67fa6d1, github.com/apache/spark/pull/5020 |
| |
| SPARK-6245 [SQL] jsonRDD() of empty RDD results in exception |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-11 14:09:09 +0000 |
| Commit: 684ff24, github.com/apache/spark/pull/4971 |
| |
| [SPARK-6300][Spark Core] sc.addFile(path) does not support the relative path. |
| DoingDone9 <799203320@qq.com> |
| 2015-03-16 12:27:15 +0000 |
| Commit: 724aab4, github.com/apache/spark/pull/4993 |
| |
| [SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688 |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-03-15 15:46:55 +0000 |
| Commit: 43fcab0, github.com/apache/spark/pull/4361 |
| |
| [SPARK-6210] [SQL] use prettyString as column name in agg() |
| Davies Liu <davies@databricks.com> |
| 2015-03-14 00:43:33 -0700 |
| Commit: ad47563, github.com/apache/spark/pull/5006 |
| |
| [SPARK-6275][Documentation]Miss toDF() function in docs/sql-programming-guide.md |
| zzcclp <xm_zzc@sina.com> |
| 2015-03-12 15:07:15 +0000 |
| Commit: 3012781, github.com/apache/spark/pull/4977 |
| |
| [SPARK-6133] Make sc.stop() idempotent |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 15:09:57 -0800 |
| Commit: a08588c, github.com/apache/spark/pull/4871 |
| |
| [SPARK-6132][HOTFIX] ContextCleaner InterruptedException should be quiet |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 20:49:45 -0800 |
| Commit: 338bea7, github.com/apache/spark/pull/4882 |
| |
| [SPARK-6132] ContextCleaner race condition across SparkContexts |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 13:44:05 -0800 |
| Commit: 3cdc8a3, github.com/apache/spark/pull/4869 |
| |
| [SPARK-6087][CORE] Provide actionable exception if Kryo buffer is not large enough |
| Lev Khomich <levkhomich@gmail.com> |
| 2015-03-10 10:55:42 +0000 |
| Commit: 9846790, github.com/apache/spark/pull/4947 |
| |
| [SPARK-6036][CORE] avoid race condition between eventlogListener and akka actor system |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-02-26 23:11:43 -0800 |
| Commit: f81611d, github.com/apache/spark/pull/4785 |
| |
| SPARK-4044 [CORE] Thriftserver fails to start when JAVA_HOME points to JRE instead of JDK |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-13 17:59:31 +0000 |
| Commit: 4aa4132, github.com/apache/spark/pull/4981 |
| |
| SPARK-4300 [CORE] Race condition during SparkWorker shutdown |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 14:08:56 -0800 |
| Commit: a3493eb, github.com/apache/spark/pull/4787 |
| |
| [SPARK-6194] [SPARK-677] [PySpark] fix memory leak in collect() |
| Davies Liu <davies@databricks.com> |
| 2015-03-09 16:24:06 -0700 |
| Commit: 170af49, github.com/apache/spark/pull/4923 |
| |
| SPARK-4704 [CORE] SparkSubmitDriverBootstrap doesn't flush output |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 12:56:54 -0800 |
| Commit: dbee7e1, github.com/apache/spark/pull/4788 |
| |
| [SPARK-6278][MLLIB] Mention the change of objective in linear regression |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-13 10:27:28 -0700 |
| Commit: 214f681, github.com/apache/spark/pull/4978 |
| |
| [SPARK-5310] [SQL] [DOC] Parquet section for the SQL programming guide |
| Cheng Lian <lian@databricks.com> |
| 2015-03-13 21:34:50 +0800 |
| Commit: dc287f3, github.com/apache/spark/pull/5001 |
| |
| [mllib] [python] Add LassoModel to __all__ in regression.py |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-03-12 16:46:29 -0700 |
| Commit: 23069bd, github.com/apache/spark/pull/4970 |
| |
| [SPARK-6294] fix hang when call take() in JVM on PythonRDD |
| Davies Liu <davies@databricks.com> |
| 2015-03-12 01:34:38 -0700 |
| Commit: 850e694, github.com/apache/spark/pull/4987 |
| |
| [SPARK-6296] [SQL] Added equals to Column |
| Volodymyr Lyubinets <vlyubin@gmail.com> |
| 2015-03-12 00:55:26 -0700 |
| Commit: d9e141c, github.com/apache/spark/pull/4988 |
| |
| [SPARK-6128][Streaming][Documentation] Updates to Spark Streaming Programming Guide |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-11 18:48:21 -0700 |
| Commit: bdc4682, github.com/apache/spark/pull/4956 |
| |
| [SPARK-6274][Streaming][Examples] Added examples streaming + sql examples. |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-11 11:19:51 -0700 |
| Commit: ac61466, github.com/apache/spark/pull/4975 |
| |
| [SPARK-5183][SQL] Update SQL Docs with JDBC and Migration Guide |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-10 18:13:09 -0700 |
| Commit: edbcb6f, github.com/apache/spark/pull/4958 |
| |
| Minor doc: Remove the extra blank line in data types javadoc. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-10 17:25:04 -0700 |
| Commit: 7295192, github.com/apache/spark/pull/4955 |
| |
| [SPARK-5310][Doc] Update SQL Programming Guide to include DataFrames. |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-09 16:16:16 -0700 |
| Commit: bc53d3d, github.com/apache/spark/pull/4954 |
| |
| [Docs] Replace references to SchemaRDD with DataFrame |
| Reynold Xin <rxin@databricks.com> |
| 2015-03-09 13:29:19 -0700 |
| Commit: 5e58f76, github.com/apache/spark/pull/4952 |
| |
| Preparing development version 1.3.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-05 23:02:08 +0000 |
| Commit: c152f9a |
| |
| |
| Release 1.3.0 |
| |
| [SQL] Make Strategies a public developer API |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-05 14:50:25 -0800 |
| Commit: 556e0de, github.com/apache/spark/pull/4920 |
| |
| [SPARK-6163][SQL] jsonFile should be backed by the data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-05 14:49:44 -0800 |
| Commit: 083fed5, github.com/apache/spark/pull/4896 |
| |
| [SPARK-6145][SQL] fix ORDER BY on nested fields |
| Wenchen Fan <cloud0fan@outlook.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-05 14:49:01 -0800 |
| Commit: e358f55, github.com/apache/spark/pull/4918 |
| |
| [SPARK-6175] Fix standalone executor log links when ephemeral ports or SPARK_PUBLIC_DNS are used |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-03-05 12:04:00 -0800 |
| Commit: 988b498, github.com/apache/spark/pull/4903 |
| |
| SPARK-6182 [BUILD] spark-parent pom needs to be published for both 2.10 and 2.11 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-05 11:31:48 -0800 |
| Commit: ae315d2, github.com/apache/spark/pull/4912 |
| |
| Revert "[SPARK-6153] [SQL] promote guava dep for hive-thriftserver" |
| Cheng Lian <lian@databricks.com> |
| 2015-03-05 17:58:18 +0800 |
| Commit: f8205d3 |
| |
| [SPARK-6153] [SQL] promote guava dep for hive-thriftserver |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-03-05 16:35:17 +0800 |
| Commit: b92d925, github.com/apache/spark/pull/4884 |
| |
| Updating CHANGES file |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-04 21:19:49 -0800 |
| Commit: 87eac3c |
| |
| SPARK-5143 [BUILD] [WIP] spark-network-yarn 2.11 depends on spark-network-shuffle 2.10 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-04 21:00:51 -0800 |
| Commit: f509159, github.com/apache/spark/pull/4876 |
| |
| [SPARK-6149] [SQL] [Build] Excludes Guava 15 referenced by jackson-module-scala_2.10 |
| Cheng Lian <lian@databricks.com> |
| 2015-03-04 20:52:58 -0800 |
| Commit: a0aa24a, github.com/apache/spark/pull/4890 |
| |
| [SPARK-6144] [core] Fix addFile when source files are on "hdfs:" |
| Marcelo Vanzin <vanzin@cloudera.com>, trystanleftwich <trystan@atscale.com> |
| 2015-03-04 12:58:39 -0800 |
| Commit: 3fc74f4, github.com/apache/spark/pull/4894 |
| |
| [SPARK-6134][SQL] Fix wrong datatype for casting FloatType and default LongType value in defaultPrimitive |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-04 20:23:43 +0800 |
| Commit: bfa4e31, github.com/apache/spark/pull/4870 |
| |
| [SPARK-6136] [SQL] Removed JDBC integration tests which depends on docker-client |
| Cheng Lian <lian@databricks.com> |
| 2015-03-04 19:39:02 +0800 |
| Commit: 035243d, github.com/apache/spark/pull/4872 |
| |
| [SPARK-6141][MLlib] Upgrade Breeze from 0.10 to 0.11 to fix convergence bug |
| Xiangrui Meng <meng@databricks.com>, DB Tsai <dbtsai@alpinenow.com>, DB Tsai <dbtsai@dbtsai.com> |
| 2015-03-03 23:52:02 -0800 |
| Commit: 9f24977, github.com/apache/spark/pull/4879 |
| |
| [SPARK-5949] HighlyCompressedMapStatus needs more classes registered w/ kryo |
| Imran Rashid <irashid@cloudera.com> |
| 2015-03-03 15:33:19 -0800 |
| Commit: 9a0b75c, github.com/apache/spark/pull/4877 |
| |
| SPARK-1911 [DOCS] Warn users if their assembly jars are not built with Java 6 |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-03 13:40:11 -0800 |
| Commit: 8446ad0, github.com/apache/spark/pull/4874 |
| |
| Revert "[SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure deleting the temp file" |
| Andrew Or <andrew@databricks.com> |
| 2015-03-03 13:04:15 -0800 |
| Commit: ee4929d |
| |
| Adding CHANGES.txt for Spark 1.3 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-03 02:19:19 -0800 |
| Commit: ce7158c |
| |
| BUILD: Minor tweaks to internal build scripts |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-03 00:38:12 -0800 |
| Commit: ae60eb9 |
| |
| HOTFIX: Bump HBase version in MapR profiles. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-03-03 01:38:07 -0800 |
| Commit: 1aa8461 |
| |
| [SPARK-5537][MLlib][Docs] Add user guide for multinomial logistic regression |
| DB Tsai <dbtsai@alpinenow.com> |
| 2015-03-02 22:37:12 -0800 |
| Commit: 841d2a2, github.com/apache/spark/pull/4866 |
| |
| [SPARK-6120] [mllib] Warnings about memory in tree, ensemble model save |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-03-02 22:33:51 -0800 |
| Commit: 81648a7, github.com/apache/spark/pull/4864 |
| |
| [SPARK-6097][MLLIB] Support tree model save/load in PySpark/MLlib |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-02 22:27:01 -0800 |
| Commit: 62c53be, github.com/apache/spark/pull/4854 |
| |
| [SPARK-5310][SQL] Fixes to Docs and Datasources API |
| Reynold Xin <rxin@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-03-02 22:14:08 -0800 |
| Commit: 4e6e008, github.com/apache/spark/pull/4868 |
| |
| [SPARK-5950][SQL]Insert array into a metastore table saved as parquet should work when using datasource api |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 19:31:55 -0800 |
| Commit: 1b490e9, github.com/apache/spark/pull/4826 |
| |
| [SPARK-6127][Streaming][Docs] Add Kafka to Python api docs |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-03-02 18:40:46 -0800 |
| Commit: ffd0591, github.com/apache/spark/pull/4860 |
| |
| [SPARK-5537] Add user guide for multinomial logistic regression |
| Xiangrui Meng <meng@databricks.com>, DB Tsai <dbtsai@alpinenow.com> |
| 2015-03-02 18:10:50 -0800 |
| Commit: 11389f0, github.com/apache/spark/pull/4801 |
| |
| [SPARK-6121][SQL][MLLIB] simpleString for UDT |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-02 17:14:34 -0800 |
| Commit: 1b8ab57, github.com/apache/spark/pull/4858 |
| |
| [SPARK-6048] SparkConf should not translate deprecated configs on set |
| Andrew Or <andrew@databricks.com> |
| 2015-03-02 16:36:42 -0800 |
| Commit: ea69cf2, github.com/apache/spark/pull/4799 |
| |
| [SPARK-6066] Make event log format easier to parse |
| Andrew Or <andrew@databricks.com> |
| 2015-03-02 16:34:32 -0800 |
| Commit: 8100b79, github.com/apache/spark/pull/4821 |
| |
| [SPARK-6082] [SQL] Provides better error message for malformed rows when caching tables |
| Cheng Lian <lian@databricks.com> |
| 2015-03-02 16:18:00 -0800 |
| Commit: 866f281, github.com/apache/spark/pull/4842 |
| |
| [SPARK-6114][SQL] Avoid metastore conversions before plan is resolved |
| Michael Armbrust <michael@databricks.com> |
| 2015-03-02 16:10:54 -0800 |
| Commit: 3899c7c, github.com/apache/spark/pull/4855 |
| |
| [SPARK-6050] [yarn] Relax matching of vcore count in received containers. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-02 16:41:43 -0600 |
| Commit: 650d1e7, github.com/apache/spark/pull/4818 |
| |
| [SPARK-6040][SQL] Fix the percent bug in tablesample |
| q00251598 <qiyadong@huawei.com> |
| 2015-03-02 13:16:29 -0800 |
| Commit: a83b9bb, github.com/apache/spark/pull/4789 |
| |
| [Minor] Fix doc typo for describing primitiveTerm effectiveness condition |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-03-02 13:11:17 -0800 |
| Commit: f92876a, github.com/apache/spark/pull/4762 |
| |
| SPARK-5390 [DOCS] Encourage users to post on Stack Overflow in Community Docs |
| Sean Owen <sowen@cloudera.com> |
| 2015-03-02 21:10:08 +0000 |
| Commit: 58e7198, github.com/apache/spark/pull/4843 |
| |
| [DOCS] Refactored Dataframe join comment to use correct parameter ordering |
| Paul Power <paul.power@peerside.com> |
| 2015-03-02 13:08:47 -0800 |
| Commit: 54ac243, github.com/apache/spark/pull/4847 |
| |
| [SPARK-6080] [PySpark] correct LogisticRegressionWithLBFGS regType parameter for pyspark |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-03-02 10:17:24 -0800 |
| Commit: 4ffaf85, github.com/apache/spark/pull/4831 |
| |
| [SPARK-5741][SQL] Support the path contains comma in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-03-02 10:13:11 -0800 |
| Commit: f476108, github.com/apache/spark/pull/4532 |
| |
| [SPARK-6111] Fixed usage string in documentation. |
| Kenneth Myers <myerske@us.ibm.com> |
| 2015-03-02 17:25:24 +0000 |
| Commit: b2b7f01, github.com/apache/spark/pull/4852 |
| |
| [SPARK-6052][SQL]In JSON schema inference, we should always set containsNull of an ArrayType to true |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 23:18:07 +0800 |
| Commit: a3fef2c, github.com/apache/spark/pull/4806 |
| |
| [SPARK-6073][SQL] Need to refresh metastore cache after append data in CreateMetastoreDataSourceAsSelect |
| Yin Huai <yhuai@databricks.com> |
| 2015-03-02 22:42:18 +0800 |
| Commit: c59871c, github.com/apache/spark/pull/4824 |
| |
| [Streaming][Minor]Fix some error docs in streaming examples |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-03-02 08:49:19 +0000 |
| Commit: 1fe677a, github.com/apache/spark/pull/4837 |
| |
| [SPARK-6083] [MLLib] [DOC] Make Python API example consistent in NaiveBayes |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-03-01 16:28:15 -0800 |
| Commit: 6a2fc85, github.com/apache/spark/pull/4834 |
| |
| [SPARK-6053][MLLIB] support save/load in PySpark's ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-03-01 16:26:57 -0800 |
| Commit: b570d98, github.com/apache/spark/pull/4811 |
| |
| [SPARK-6074] [sql] Package pyspark sql bindings. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-03-01 11:05:10 +0000 |
| Commit: bb16618, github.com/apache/spark/pull/4822 |
| |
| SPARK-5984: Fix TimSort bug causes ArrayOutOfBoundsException |
| Evan Yu <ehotou@gmail.com> |
| 2015-02-28 18:55:34 -0800 |
| Commit: 317694c, github.com/apache/spark/pull/4804 |
| |
| [SPARK-5775] [SQL] BugFix: GenericRow cannot be cast to SpecificMutableRow when nested data and partitioned table |
| Cheng Lian <lian@databricks.com>, Cheng Lian <liancheng@users.noreply.github.com>, Yin Huai <yhuai@databricks.com> |
| 2015-02-28 21:15:43 +0800 |
| Commit: aa39460, github.com/apache/spark/pull/4792 |
| |
| [SPARK-5979][SPARK-6032] Smaller safer --packages fix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-27 22:59:35 -0800 |
| Commit: 5a55c96, github.com/apache/spark/pull/4802 |
| |
| [SPARK-6070] [yarn] Remove unneeded classes from shuffle service jar. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-27 22:44:11 -0800 |
| Commit: 1747e0a, github.com/apache/spark/pull/4820 |
| |
| [SPARK-6055] [PySpark] fix incorrect __eq__ of DataType |
| Davies Liu <davies@databricks.com> |
| 2015-02-27 20:07:17 -0800 |
| Commit: 49f2187, github.com/apache/spark/pull/4808 |
| |
| [SPARK-5751] [SQL] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites |
| Cheng Lian <lian@databricks.com> |
| 2015-02-28 08:41:49 +0800 |
| Commit: 5d19cf0, github.com/apache/spark/pull/4758 |
| |
| [Streaming][Minor] Remove useless type signature of Java Kafka direct stream API |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-02-27 13:01:42 -0800 |
| Commit: ceebe3c, github.com/apache/spark/pull/4817 |
| |
| [SPARK-4587] [mllib] [docs] Fixed save,load calls in ML guide examples |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-27 13:00:36 -0800 |
| Commit: 117e10c, github.com/apache/spark/pull/4816 |
| |
| [SPARK-6058][Yarn] Log the user class exception in ApplicationMaster |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-27 13:31:46 +0000 |
| Commit: bff8088, github.com/apache/spark/pull/4813 |
| |
| fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode |
| 许鹏 <peng.xu@fraudmetrix.cn> |
| 2015-02-26 23:05:56 -0800 |
| Commit: b8db84c, github.com/apache/spark/pull/4803 |
| |
| SPARK-2168 [Spark core] Use relative URIs for the app links in the History Server. |
| Lukasz Jastrzebski <lukasz.jastrzebski@gmail.com> |
| 2015-02-26 22:38:06 -0800 |
| Commit: 485b919, github.com/apache/spark/pull/4778 |
| |
| [SPARK-6024][SQL] When a data source table has too many columns, it's schema cannot be stored in metastore. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-26 20:46:05 -0800 |
| Commit: 6200f07, github.com/apache/spark/pull/4795 |
| |
| [SPARK-6037][SQL] Avoiding duplicate Parquet schema merging |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-27 11:06:47 +0800 |
| Commit: 25a109e, github.com/apache/spark/pull/4786 |
| |
| SPARK-4579 [WEBUI] Scheduling Delay appears negative |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-26 17:35:09 -0800 |
| Commit: b83a93e, github.com/apache/spark/pull/4796 |
| |
| [SPARK-5951][YARN] Remove unreachable driver memory properties in yarn client mode |
| mohit.goyal <mohit.goyal@guavus.com> |
| 2015-02-26 14:27:47 -0800 |
| Commit: 5b426cb, github.com/apache/spark/pull/4730 |
| |
| Add a note for context termination for History server on Yarn |
| moussa taifi <moutai10@gmail.com> |
| 2015-02-26 14:19:43 -0800 |
| Commit: 297c3ef, github.com/apache/spark/pull/4721 |
| |
| [SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM |
| Cheolsoo Park <cheolsoop@netflix.com> |
| 2015-02-26 13:53:49 -0800 |
| Commit: fe79674, github.com/apache/spark/pull/4773 |
| |
| [SPARK-6027][SPARK-5546] Fixed --jar and --packages not working for KafkaUtils and improved error message |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-26 13:46:07 -0800 |
| Commit: 731a997, github.com/apache/spark/pull/4779 |
| |
| Modify default value description for spark.scheduler.minRegisteredResourcesRatio on docs. |
| Li Zhihui <zhihui.li@intel.com> |
| 2015-02-26 13:07:07 -0800 |
| Commit: 62652dc, github.com/apache/spark/pull/4781 |
| |
| [SPARK-5363] Fix bug in PythonRDD: remove() inside iterator is not safe |
| Davies Liu <davies@databricks.com> |
| 2015-02-26 11:54:17 -0800 |
| Commit: 5d309ad, github.com/apache/spark/pull/4776 |
| |
| [SPARK-6015] fix links to source code in Python API docs |
| Davies Liu <davies@databricks.com> |
| 2015-02-26 10:45:29 -0800 |
| Commit: dafb3d2, github.com/apache/spark/pull/4772 |
| |
| [SPARK-6007][SQL] Add numRows param in DataFrame.show() |
| Jacky Li <jacky.likun@huawei.com> |
| 2015-02-26 10:40:58 -0800 |
| Commit: 7c779d8, github.com/apache/spark/pull/4767 |
| |
| [SPARK-6016][SQL] Cannot read the parquet table after overwriting the existing table when spark.sql.parquet.cacheMetadata=true |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-27 01:01:32 +0800 |
| Commit: b5c5e93, github.com/apache/spark/pull/4775 |
| |
| [SPARK-6023][SQL] ParquetConversions fails to replace the destination MetastoreRelation of an InsertIntoTable node to ParquetRelation2 |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-26 22:39:49 +0800 |
| Commit: e0f5fb0, github.com/apache/spark/pull/4782 |
| |
| [SPARK-5976][MLLIB] Add partitioner to factors returned by ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-25 23:43:29 -0800 |
| Commit: a51d9db, github.com/apache/spark/pull/4748 |
| |
| [SPARK-1182][Docs] Sort the configuration parameters in configuration.md |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-25 16:12:56 -0800 |
| Commit: 56fa38a, github.com/apache/spark/pull/3863 |
| |
| [SPARK-5724] fix the misconfiguration in AkkaUtils |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-02-23 11:29:25 +0000 |
| Commit: b32a653, github.com/apache/spark/pull/4512 |
| |
| [SPARK-5974] [SPARK-5980] [mllib] [python] [docs] Update ML guide with save/load, Python GBT |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-25 16:13:17 -0800 |
| Commit: a1b4856, github.com/apache/spark/pull/4750 |
| |
| [SPARK-5926] [SQL] make DataFrame.explain leverage queryExecution.logical |
| Yanbo Liang <ybliang8@gmail.com> |
| 2015-02-25 15:37:13 -0800 |
| Commit: 5bd4b49, github.com/apache/spark/pull/4707 |
| |
| [SPARK-5999][SQL] Remove duplicate Literal matching block |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-25 15:22:33 -0800 |
| Commit: 6fff9b8, github.com/apache/spark/pull/4760 |
| |
| [SPARK-6010] [SQL] Merging compatible Parquet schemas before computing splits |
| Cheng Lian <lian@databricks.com> |
| 2015-02-25 15:15:22 -0800 |
| Commit: 016f1f8, github.com/apache/spark/pull/4768 |
| |
| [SPARK-5944] [PySpark] fix version in Python API docs |
| Davies Liu <davies@databricks.com> |
| 2015-02-25 15:13:34 -0800 |
| Commit: 9aca3c6, github.com/apache/spark/pull/4731 |
| |
| [SPARK-5982] Remove incorrect Local Read Time Metric |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-25 14:55:24 -0800 |
| Commit: 791df93, github.com/apache/spark/pull/4749 |
| |
| [SPARK-1955][GraphX]: VertexRDD can incorrectly assume index sharing |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-25 14:11:12 -0800 |
| Commit: 8073767, github.com/apache/spark/pull/4705 |
| |
| SPARK-5930 [DOCS] Documented default of spark.shuffle.io.retryWait is confusing |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-25 12:20:44 -0800 |
| Commit: eaffc6e, github.com/apache/spark/pull/4769 |
| |
| [SPARK-5996][SQL] Fix specialized outbound conversions |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-25 10:13:40 -0800 |
| Commit: fada683, github.com/apache/spark/pull/4757 |
| |
| [SPARK-5994] [SQL] Python DataFrame documentation fixes |
| Davies Liu <davies@databricks.com> |
| 2015-02-24 20:51:55 -0800 |
| Commit: 5c421e0, github.com/apache/spark/pull/4756 |
| |
| [SPARK-5286][SQL] SPARK-5286 followup |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-24 19:51:36 -0800 |
| Commit: e7a748e, github.com/apache/spark/pull/4755 |
| |
| [SPARK-5993][Streaming][Build] Fix assembly jar location of kafka-assembly |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-24 19:10:37 -0800 |
| Commit: 1e94894, github.com/apache/spark/pull/4753 |
| |
| [SPARK-5985][SQL] DataFrame sortBy -> orderBy in Python. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-24 18:59:23 -0800 |
| Commit: 5e233b2, github.com/apache/spark/pull/4752 |
| |
| [SPARK-5904][SQL] DataFrame Java API test suites. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-24 18:51:41 -0800 |
| Commit: 78a1781, github.com/apache/spark/pull/4751 |
| |
| [SPARK-5751] [SQL] [WIP] Revamped HiveThriftServer2Suite for robustness |
| Cheng Lian <lian@databricks.com> |
| 2015-02-25 08:34:55 +0800 |
| Commit: 17ee246, github.com/apache/spark/pull/4720 |
| |
| [SPARK-5973] [PySpark] fix zip with two RDDs with AutoBatchedSerializer |
| Davies Liu <davies@databricks.com> |
| 2015-02-24 14:50:00 -0800 |
| Commit: 91bf0f8, github.com/apache/spark/pull/4745 |
| |
| [SPARK-5952][SQL] Lock when using hive metastore client |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 13:39:29 -0800 |
| Commit: 641423d, github.com/apache/spark/pull/4746 |
| |
| [MLLIB] Change x_i to y_i in Variance's user guide |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-24 11:38:59 -0800 |
| Commit: a4ff445, github.com/apache/spark/pull/4740 |
| |
| [SPARK-5965] Standalone Worker UI displays {{USER_JAR}} |
| Andrew Or <andrew@databricks.com> |
| 2015-02-24 11:08:07 -0800 |
| Commit: eaf7bf9, github.com/apache/spark/pull/4739 |
| |
| [Spark-5967] [UI] Correctly clean JobProgressListener.stageIdToActiveJobIds |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-24 11:02:47 -0800 |
| Commit: 28dd53b, github.com/apache/spark/pull/4741 |
| |
| [SPARK-5532][SQL] Repartition should not use external rdd representation |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 10:52:18 -0800 |
| Commit: e46096b, github.com/apache/spark/pull/4738 |
| |
| [SPARK-5910][SQL] Support for as in selectExpr |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-24 10:49:51 -0800 |
| Commit: ba5d60d, github.com/apache/spark/pull/4736 |
| |
| [SPARK-5968] [SQL] Suppresses ParquetOutputCommitter WARN logs |
| Cheng Lian <lian@databricks.com> |
| 2015-02-24 10:45:38 -0800 |
| Commit: 2b562b0, github.com/apache/spark/pull/4744 |
| |
| [SPARK-5958][MLLIB][DOC] update block matrix user guide |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-23 22:08:44 -0800 |
| Commit: dd42558, github.com/apache/spark/pull/4737 |
| |
| [SPARK-5873][SQL] Allow viewing of partially analyzed plans in queryExecution |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-23 17:34:54 -0800 |
| Commit: 2d7786e, github.com/apache/spark/pull/4684 |
| |
| [SPARK-5935][SQL] Accept MapType in the schema provided to a JSON dataset. |
| Yin Huai <yhuai@databricks.com>, Yin Huai <huai@cse.ohio-state.edu> |
| 2015-02-23 17:16:34 -0800 |
| Commit: 33ccad2, github.com/apache/spark/pull/4710 |
| |
| [SPARK-5912] [docs] [mllib] Small fixes to ChiSqSelector docs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-23 16:15:57 -0800 |
| Commit: ae97040, github.com/apache/spark/pull/4732 |
| |
| [MLLIB] SPARK-5912 Programming guide for feature selection |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-02-23 12:09:40 -0800 |
| Commit: 8355773, github.com/apache/spark/pull/4709 |
| |
| [SPARK-5939][MLLib] make FPGrowth example app take parameters |
| Jacky Li <jacky.likun@huawei.com> |
| 2015-02-23 08:47:28 -0800 |
| Commit: 33b9084, github.com/apache/spark/pull/4714 |
| |
| [SPARK-5943][Streaming] Update the test to use new API to reduce the warning |
| Saisai Shao <saisai.shao@intel.com> |
| 2015-02-23 11:27:27 +0000 |
| Commit: 67b7f79, github.com/apache/spark/pull/4722 |
| |
| [EXAMPLES] fix typo. |
| Makoto Fukuhara <fukuo33@gmail.com> |
| 2015-02-23 09:24:33 +0000 |
| Commit: f172387, github.com/apache/spark/pull/4724 |
| |
| Revert "[SPARK-4808] Removing minimum number of elements read before spill check" |
| Andrew Or <andrew@databricks.com> |
| 2015-02-22 09:44:52 -0800 |
| Commit: 4186dd3 |
| |
| SPARK-5669 [BUILD] Reverse exclusion of JBLAS libs for 1.3 |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-22 09:09:06 +0000 |
| Commit: eed7389, github.com/apache/spark/pull/4715 |
| |
| [DataFrame] [Typo] Fix the typo |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-22 08:56:30 +0000 |
| Commit: 04d3b32, github.com/apache/spark/pull/4717 |
| |
| [DOCS] Fix typo in API for custom InputFormats based on the "new" MapReduce API |
| Alexander <abezzubov@nflabs.com> |
| 2015-02-22 08:53:05 +0000 |
| Commit: c5a5c6f, github.com/apache/spark/pull/4718 |
| |
| [SPARK-5937][YARN] Fix ClientSuite to set YARN mode, so that the correct class is used in t... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-21 10:01:01 -0800 |
| Commit: 76e3e65, github.com/apache/spark/pull/4711 |
| |
| SPARK-5841 [CORE] [HOTFIX 2] Memory leak in DiskBlockManager |
| Nishkam Ravi <nravi@cloudera.com>, nishkamravi2 <nishkamravi@gmail.com>, nravi <nravi@c1704.halxg.cloudera.com> |
| 2015-02-21 09:59:28 -0800 |
| Commit: 932338e, github.com/apache/spark/pull/4690 |
| |
| [SPARK-5909][SQL] Add a clearCache command to Spark SQL's cache manager |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-20 16:20:02 +0800 |
| Commit: b9a6c5c, github.com/apache/spark/pull/4694 |
| |
| [SPARK-5898] [SPARK-5896] [SQL] [PySpark] create DataFrame from pandas and tuple/list |
| Davies Liu <davies@databricks.com> |
| 2015-02-20 15:35:05 -0800 |
| Commit: 913562a, github.com/apache/spark/pull/4679 |
| |
| [SPARK-5867] [SPARK-5892] [doc] [ml] [mllib] Doc cleanups for 1.3 release |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-20 02:31:32 -0800 |
| Commit: 8c12f31, github.com/apache/spark/pull/4675 |
| |
| [SPARK-4808] Removing minimum number of elements read before spill check |
| mcheah <mcheah@palantir.com> |
| 2015-02-19 18:09:22 -0800 |
| Commit: 0382dcc, github.com/apache/spark/pull/4420 |
| |
| [SPARK-5900][MLLIB] make PIC and FPGrowth Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-19 18:06:16 -0800 |
| Commit: ba941ce, github.com/apache/spark/pull/4695 |
| |
| SPARK-5570: No docs stating that `new SparkConf().set("spark.driver.memory", ...) will not work |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-02-19 15:50:58 -0800 |
| Commit: c5f3b9e, github.com/apache/spark/pull/4665 |
| |
| SPARK-4682 [CORE] Consolidate various 'Clock' classes |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-19 15:35:23 -0800 |
| Commit: bd49e8b, github.com/apache/spark/pull/4514 |
| |
| [Spark-5889] Remove pid file after stopping service. |
| Zhan Zhang <zhazhan@gmail.com> |
| 2015-02-19 23:13:02 +0000 |
| Commit: ff8976e, github.com/apache/spark/pull/4676 |
| |
| [SPARK-5902] [ml] Made PipelineStage.transformSchema public instead of private to ml |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-19 12:46:27 -0800 |
| Commit: 0c494cf, github.com/apache/spark/pull/4682 |
| |
| [SPARK-5904][SQL] DataFrame API fixes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-19 12:09:44 -0800 |
| Commit: 55d91d9, github.com/apache/spark/pull/4686 |
| |
| [SPARK-5825] [Spark Submit] Remove the double checking instance name when stopping the service |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-19 12:07:51 -0800 |
| Commit: fe00eb6, github.com/apache/spark/pull/4611 |
| |
| [SPARK-5423][Core] Cleanup resources in DiskMapIterator.finalize to ensure deleting the temp file |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-19 18:37:31 +0000 |
| Commit: 25fae8e, github.com/apache/spark/pull/4219 |
| |
| [SPARK-5816] Add huge compatibility warning in DriverWrapper |
| Andrew Or <andrew@databricks.com> |
| 2015-02-19 09:56:25 -0800 |
| Commit: f93d4d9, github.com/apache/spark/pull/4687 |
| |
| SPARK-5548: Fix for AkkaUtilsSuite failure - attempt 2 |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-19 09:53:36 -0800 |
| Commit: fbcb949, github.com/apache/spark/pull/4653 |
| |
| [SPARK-5846] Correctly set job description and pool for SQL jobs |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-19 09:49:34 +0800 |
| Commit: 092b45f, github.com/apache/spark/pull/4630 |
| |
| [SPARK-5879][MLLIB] update PIC user guide and add a Java example |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-18 16:29:32 -0800 |
| Commit: a64f374, github.com/apache/spark/pull/4680 |
| |
| [SPARK-5722] [SQL] [PySpark] infer int as LongType |
| Davies Liu <davies@databricks.com> |
| 2015-02-18 14:17:04 -0800 |
| Commit: 470cba8, github.com/apache/spark/pull/4666 |
| |
| [SPARK-5840][SQL] HiveContext cannot be serialized due to tuple extraction |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-18 14:02:32 -0800 |
| Commit: b86e44c, github.com/apache/spark/pull/4628 |
| |
| [SPARK-5507] Added documentation for BlockMatrix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-18 10:11:08 -0800 |
| Commit: 56f8f29, github.com/apache/spark/pull/4664 |
| |
| [SPARK-5519][MLLIB] add user guide with example code for fp-growth |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-18 10:09:56 -0800 |
| Commit: 661fbd3, github.com/apache/spark/pull/4661 |
| |
| SPARK-5669 [BUILD] [HOTFIX] Spark assembly includes incompatibly licensed libgfortran, libgcc code via JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-18 14:41:44 +0000 |
| Commit: 9f256ce, github.com/apache/spark/pull/4673 |
| |
| SPARK-4610 addendum: [Minor] [MLlib] Minor doc fix in GBT classification example |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-18 10:13:28 +0000 |
| Commit: 3997e74, github.com/apache/spark/pull/4672 |
| |
| [SPARK-5878] fix DataFrame.repartition() in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-18 01:00:54 -0800 |
| Commit: aca7991, github.com/apache/spark/pull/4667 |
| |
| Avoid deprecation warnings in JDBCSuite. |
| Tor Myklebust <tmyklebu@gmail.com> |
| 2015-02-18 01:00:13 -0800 |
| Commit: 9a565b8, github.com/apache/spark/pull/4668 |
| |
| [Minor] [SQL] Cleans up DataFrame variable names and toDF() calls |
| Cheng Lian <lian@databricks.com> |
| 2015-02-17 23:36:20 -0800 |
| Commit: 2bd33ce, github.com/apache/spark/pull/4670 |
| |
| [SPARK-5731][Streaming][Test] Fix incorrect test in DirectKafkaStreamSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-17 22:44:16 -0800 |
| Commit: f8f9a64, github.com/apache/spark/pull/4597 |
| |
| [SPARK-5723][SQL]Change the default file format to Parquet for CTAS statements. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-17 18:14:33 -0800 |
| Commit: 6e82c46, github.com/apache/spark/pull/4639 |
| |
| Preparing development version 1.3.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-18 01:52:06 +0000 |
| Commit: 2ab0ba0 |
| |
| Preparing Spark release v1.3.0-rc1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-18 01:52:06 +0000 |
| Commit: f97b0d4 |
| |
| [SPARK-5875][SQL]logical.Project should not be resolved if it contains aggregates or generators |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-17 17:50:39 -0800 |
| Commit: e8284b2, github.com/apache/spark/pull/4663 |
| |
| Revert "Preparing Spark release v1.3.0-snapshot1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-17 17:48:47 -0800 |
| Commit: 7320605 |
| |
| Revert "Preparing development version 1.3.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-17 17:48:43 -0800 |
| Commit: 932ae4d |
| |
| [SPARK-4454] Revert getOrElse() cleanup in DAGScheduler.getCacheLocs() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 17:45:16 -0800 |
| Commit: 7e5e4d8 |
| |
| [SPARK-4454] Properly synchronize accesses to DAGScheduler cacheLocs map |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 17:39:58 -0800 |
| Commit: 07a401a, github.com/apache/spark/pull/4660 |
| |
| [SPARK-5811] Added documentation for maven coordinates and added Spark Packages support |
| Burak Yavuz <brkyvz@gmail.com>, Davies Liu <davies@databricks.com> |
| 2015-02-17 17:15:43 -0800 |
| Commit: cb90584, github.com/apache/spark/pull/4662 |
| |
| [SPARK-5785] [PySpark] narrow dependency for cogroup/join in PySpark |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 16:54:57 -0800 |
| Commit: 8120235, github.com/apache/spark/pull/4629 |
| |
| [SPARK-5852][SQL]Fail to convert a newly created empty metastore parquet table to a data source parquet table. |
| Yin Huai <yhuai@databricks.com>, Cheng Hao <hao.cheng@intel.com> |
| 2015-02-17 15:47:59 -0800 |
| Commit: 07d8ef9, github.com/apache/spark/pull/4655 |
| |
| [SPARK-5872] [SQL] create a sqlCtx in pyspark shell |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 15:44:37 -0800 |
| Commit: 0dba382, github.com/apache/spark/pull/4659 |
| |
| [SPARK-5871] output explain in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 13:48:38 -0800 |
| Commit: cb06160, github.com/apache/spark/pull/4658 |
| |
| [SPARK-4172] [PySpark] Progress API in Python |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 13:36:43 -0800 |
| Commit: 35e23ff, github.com/apache/spark/pull/3027 |
| |
| [SPARK-5868][SQL] Fix python UDFs in HiveContext and checks in SQLContext |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-17 13:23:45 -0800 |
| Commit: e65dc1f, github.com/apache/spark/pull/4657 |
| |
| [SQL] [Minor] Update the HiveContext Unittest |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-17 12:25:35 -0800 |
| Commit: 0135651, github.com/apache/spark/pull/4584 |
| |
| [Minor][SQL] Use same function to check path parameter in JSONRelation |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-17 12:24:13 -0800 |
| Commit: d74d5e8, github.com/apache/spark/pull/4649 |
| |
| [SPARK-5862][SQL] Only transformUp the given plan once in HiveMetastoreCatalog |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-17 12:23:18 -0800 |
| Commit: 62063b7, github.com/apache/spark/pull/4651 |
| |
| [Minor] fix typo in SQL document |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-02-17 12:16:52 -0800 |
| Commit: 5636c4a, github.com/apache/spark/pull/4656 |
| |
| [SPARK-5864] [PySpark] support .jar as python package |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 12:05:06 -0800 |
| Commit: 71cf6e2, github.com/apache/spark/pull/4652 |
| |
| SPARK-5841 [CORE] [HOTFIX] Memory leak in DiskBlockManager |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-17 19:40:06 +0000 |
| Commit: e64afcd, github.com/apache/spark/pull/4648 |
| |
| [SPARK-5661]function hasShutdownDeleteTachyonDir should use shutdownDeleteTachyonPaths to determine whether contains file |
| xukun 00228947 <xukun.xu@huawei.com>, viper-kun <xukun.xu@huawei.com> |
| 2015-02-17 18:59:41 +0000 |
| Commit: 420bc9b, github.com/apache/spark/pull/4418 |
| |
| [SPARK-5778] throw if nonexistent metrics config file provided |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-17 10:57:16 -0800 |
| Commit: 2bf2b56, github.com/apache/spark/pull/4571 |
| |
| [SPARK-5859] [PySpark] [SQL] fix DataFrame Python API |
| Davies Liu <davies@databricks.com> |
| 2015-02-17 10:22:48 -0800 |
| Commit: 4a581aa, github.com/apache/spark/pull/4645 |
| |
| [SPARK-5166][SPARK-5247][SPARK-5258][SQL] API Cleanup / Documentation |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-17 10:21:17 -0800 |
| Commit: cd3d415, github.com/apache/spark/pull/4642 |
| |
| [SPARK-5858][MLLIB] Remove unnecessary first() call in GLM |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-17 10:17:45 -0800 |
| Commit: 97cb568, github.com/apache/spark/pull/4647 |
| |
| SPARK-5856: In Maven build script, launch Zinc with more memory |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-17 10:10:01 -0800 |
| Commit: 8240629, github.com/apache/spark/pull/4643 |
| |
| Revert "[SPARK-5363] [PySpark] check ending mark in non-block way" |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-17 07:48:27 -0800 |
| Commit: aeb85cd |
| |
| [SPARK-5826][Streaming] Fix Configuration not serializable problem |
| jerryshao <saisai.shao@intel.com> |
| 2015-02-17 10:45:18 +0000 |
| Commit: b8da5c3, github.com/apache/spark/pull/4612 |
| |
| HOTFIX: Style issue causing build break |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 22:10:39 -0800 |
| Commit: e9241fa |
| |
| [SPARK-5802][MLLIB] cache transformed data in glm |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-16 22:09:04 -0800 |
| Commit: dfe0fa0, github.com/apache/spark/pull/4593 |
| |
| [SPARK-5853][SQL] Schema support in Row. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-16 20:42:57 -0800 |
| Commit: d0701d9, github.com/apache/spark/pull/4640 |
| |
| SPARK-5850: Remove experimental label for Scala 2.11 and FlumePollingStream |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 20:33:33 -0800 |
| Commit: c6a7069, github.com/apache/spark/pull/4638 |
| |
| [SPARK-5363] [PySpark] check ending mark in non-block way |
| Davies Liu <davies@databricks.com> |
| 2015-02-16 20:32:03 -0800 |
| Commit: baad6b3, github.com/apache/spark/pull/4601 |
| |
| [SQL] Various DataFrame doc changes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-16 19:00:30 -0800 |
| Commit: e355b54, github.com/apache/spark/pull/4636 |
| |
| [SPARK-5849] Handle more types of invalid JSON requests in SubmitRestProtocolMessage.parseAction |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-16 18:08:02 -0800 |
| Commit: 385a339, github.com/apache/spark/pull/4637 |
| |
| [SPARK-3340] Deprecate ADD_JARS and ADD_FILES |
| azagrebin <azagrebin@gmail.com> |
| 2015-02-16 18:06:19 -0800 |
| Commit: d8c70fb, github.com/apache/spark/pull/4616 |
| |
| [SPARK-5788] [PySpark] capture the exception in python write thread |
| Davies Liu <davies@databricks.com> |
| 2015-02-16 17:57:14 -0800 |
| Commit: c2a9a61, github.com/apache/spark/pull/4577 |
| |
| SPARK-5848: tear down the ConsoleProgressBar timer |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-17 00:59:49 +0000 |
| Commit: 52994d8, github.com/apache/spark/pull/4635 |
| |
| [SPARK-4865][SQL]Include temporary tables in SHOW TABLES |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:59:23 -0800 |
| Commit: 8a94bf7, github.com/apache/spark/pull/4618 |
| |
| [SQL] Optimize arithmetic and predicate operators |
| kai <kaizeng@eecs.berkeley.edu> |
| 2015-02-16 15:58:05 -0800 |
| Commit: 639a3c2, github.com/apache/spark/pull/4472 |
| |
| [SPARK-5839][SQL]HiveMetastoreCatalog does not recognize table names and aliases of data source tables. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:54:01 -0800 |
| Commit: a15a0a0, github.com/apache/spark/pull/4626 |
| |
| [SPARK-5746][SQL] Check invalid cases for the write path of data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-16 15:51:59 -0800 |
| Commit: 4198654, github.com/apache/spark/pull/4617 |
| |
| HOTFIX: Break in Jekyll build from #4589 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-16 15:43:56 -0800 |
| Commit: ad8fd4f |
| |
| [SPARK-2313] Use socket to communicate GatewayServer port back to Python driver |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-16 15:25:11 -0800 |
| Commit: b70b8ba, github.com/apache/spark/pull/3424 |
| |
| SPARK-5357: Update commons-codec version to 1.10 (current) |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-16 23:05:34 +0000 |
| Commit: 8c45619, github.com/apache/spark/pull/4153 |
| |
| SPARK-5841: remove DiskBlockManager shutdown hook on stop |
| Matt Whelan <mwhelan@perka.com> |
| 2015-02-16 22:54:32 +0000 |
| Commit: dd977df, github.com/apache/spark/pull/4627 |
| |
| [SPARK-5833] [SQL] Adds REFRESH TABLE command |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 12:52:05 -0800 |
| Commit: 864d77e, github.com/apache/spark/pull/4624 |
| |
| [SPARK-5296] [SQL] Add more filter types for data sources API |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 12:48:55 -0800 |
| Commit: 363a9a7, github.com/apache/spark/pull/4623 |
| |
| [SQL] Add fetched row count in SparkSQLCLIDriver |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-16 12:34:09 -0800 |
| Commit: 0368494, github.com/apache/spark/pull/4604 |
| |
| [SQL] Initial support for reporting location of error in sql string |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-16 12:32:56 -0800 |
| Commit: 63fa123, github.com/apache/spark/pull/4587 |
| |
| [SPARK-5824] [SQL] add null format in ctas and set default col comment to null |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-16 12:31:36 -0800 |
| Commit: c2eaaea, github.com/apache/spark/pull/4609 |
| |
| [SQL] [Minor] Update the SpecificMutableRow.copy |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-16 12:21:08 -0800 |
| Commit: 1a88955, github.com/apache/spark/pull/4619 |
| |
| SPARK-5795 [STREAMING] api.java.JavaPairDStream.saveAsNewAPIHadoopFiles may not friendly to java |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-16 19:32:31 +0000 |
| Commit: fef2267, github.com/apache/spark/pull/4608 |
| |
| [SPARK-5799][SQL] Compute aggregation function on specified numeric columns |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-16 10:06:11 -0800 |
| Commit: 0165e9d, github.com/apache/spark/pull/4592 |
| |
| [SPARK-4553] [SPARK-5767] [SQL] Wires Parquet data source with the newly introduced write support for data source API |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 01:38:31 -0800 |
| Commit: 78f7edb, github.com/apache/spark/pull/4563 |
| |
| [Minor] [SQL] Renames stringRddToDataFrame to stringRddToDataFrameHolder for consistency |
| Cheng Lian <lian@databricks.com> |
| 2015-02-16 01:33:37 -0800 |
| Commit: 066301c, github.com/apache/spark/pull/4613 |
| |
| [Ml] SPARK-5804 Explicitly manage cache in Crossvalidator k-fold loop |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-02-16 00:07:23 -0800 |
| Commit: 0d93205, github.com/apache/spark/pull/4595 |
| |
| [Ml] SPARK-5796 Don't transform data on a last estimator in Pipeline |
| Peter Rudenko <petro.rudenko@gmail.com> |
| 2015-02-15 20:51:32 -0800 |
| Commit: 9cf7d70, github.com/apache/spark/pull/4590 |
| |
| SPARK-5815 [MLLIB] Deprecate SVDPlusPlus APIs that expose DoubleMatrix from JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-15 20:41:27 -0800 |
| Commit: db3c539, github.com/apache/spark/pull/4614 |
| |
| [SPARK-5769] Set params in constructors and in setParams in Python ML pipelines |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-15 20:29:26 -0800 |
| Commit: d710991, github.com/apache/spark/pull/4564 |
| |
| SPARK-5669 [BUILD] Spark assembly includes incompatibly licensed libgfortran, libgcc code via JBLAS |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-15 09:15:48 -0800 |
| Commit: 4e099d7, github.com/apache/spark/pull/4453 |
| |
| [MLLIB][SPARK-5502] User guide for isotonic regression |
| martinzapletal <zapletal-martin@email.cz> |
| 2015-02-15 09:10:03 -0800 |
| Commit: d96e188, github.com/apache/spark/pull/4536 |
| |
| [HOTFIX] Ignore DirectKafkaStreamSuite. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-13 12:43:53 -0800 |
| Commit: 70ebad4 |
| |
| [SPARK-5827][SQL] Add missing import in the example of SqlContext |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2015-02-15 14:42:20 +0000 |
| Commit: 9c1c70d, github.com/apache/spark/pull/4615 |
| |
| SPARK-5822 [BUILD] cannot import src/main/scala & src/test/scala into eclipse as source folder |
| gli <gli@redhat.com> |
| 2015-02-14 20:43:27 +0000 |
| Commit: f87f3b7, github.com/apache/spark/pull/4531 |
| |
| Revise formatting of previous commit f80e2629bb74bc62960c61ff313f7e7802d61319 |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-14 20:12:29 +0000 |
| Commit: 1945fcf |
| |
| [SPARK-5800] Streaming Docs. Change linked files according the selected language |
| gasparms <gmunoz@stratio.com> |
| 2015-02-14 20:10:29 +0000 |
| Commit: e99e170, github.com/apache/spark/pull/4589 |
| |
| [SPARK-5752][SQL] Don't implicitly convert RDDs directly to DataFrames |
| Reynold Xin <rxin@databricks.com>, Davies Liu <davies@databricks.com> |
| 2015-02-13 23:03:22 -0800 |
| Commit: ba91bf5, github.com/apache/spark/pull/4556 |
| |
| SPARK-3290 [GRAPHX] No unpersist callls in SVDPlusPlus |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-13 20:12:52 -0800 |
| Commit: db57479, github.com/apache/spark/pull/4234 |
| |
| [SPARK-5227] [SPARK-5679] Disable FileSystem cache in WholeTextFileRecordReaderSuite |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-13 17:45:31 -0800 |
| Commit: 152147f, github.com/apache/spark/pull/4599 |
| |
| [SPARK-5730][ML] add doc groups to spark.ml components |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 16:45:59 -0800 |
| Commit: fccd38d, github.com/apache/spark/pull/4600 |
| |
| [SPARK-5803][MLLIB] use ArrayBuilder to build primitive arrays |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 16:43:49 -0800 |
| Commit: 356b798, github.com/apache/spark/pull/4594 |
| |
| [SPARK-5806] re-organize sections in mllib-clustering.md |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-13 15:09:27 -0800 |
| Commit: 9658763, github.com/apache/spark/pull/4598 |
| |
| [SPARK-5789][SQL]Throw a better error message if JsonRDD.parseJson encounters unrecoverable parsing errors. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-13 13:51:06 -0800 |
| Commit: d9d0250, github.com/apache/spark/pull/4582 |
| |
| [SPARK-5642] [SQL] Apply column pruning on unused aggregation fields |
| Daoyuan Wang <daoyuan.wang@intel.com>, Michael Armbrust <michael@databricks.com> |
| 2015-02-13 13:46:50 -0800 |
| Commit: efffc2e, github.com/apache/spark/pull/4415 |
| |
| [HOTFIX] Fix build break in MesosSchedulerBackendSuite |
| Andrew Or <andrew@databricks.com> |
| 2015-02-13 13:10:29 -0800 |
| Commit: 4160371 |
| |
| SPARK-5805 Fixed the type error in documentation. |
| Emre Sevinç <emre.sevinc@gmail.com> |
| 2015-02-13 12:31:27 -0800 |
| Commit: ad73189, github.com/apache/spark/pull/4596 |
| |
| [SPARK-5735] Replace uses of EasyMock with Mockito |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-13 09:53:57 -0800 |
| Commit: cc9eec1, github.com/apache/spark/pull/4578 |
| |
| [SPARK-5783] Better eventlog-parsing error messages |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-13 09:47:26 -0800 |
| Commit: e5690a5, github.com/apache/spark/pull/4573 |
| |
| [SPARK-5503][MLLIB] Example code for Power Iteration Clustering |
| sboeschhuawei <stephen.boesch@huawei.com> |
| 2015-02-13 09:45:57 -0800 |
| Commit: 5e63942, github.com/apache/spark/pull/4495 |
| |
| [SPARK-5732][CORE]:Add an option to print the spark version in spark script. |
| uncleGen <hustyugm@gmail.com>, genmao.ygm <genmao.ygm@alibaba-inc.com> |
| 2015-02-13 09:43:10 -0800 |
| Commit: 5c883df, github.com/apache/spark/pull/4522 |
| |
| [SPARK-4832][Deploy]some other processes might take the daemon pid |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-02-13 10:27:23 +0000 |
| Commit: 1255e83, github.com/apache/spark/pull/3683 |
| |
| [SQL] Fix docs of SQLContext.tables |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 20:37:55 -0800 |
| Commit: a8f560c, github.com/apache/spark/pull/4579 |
| |
| [SPARK-3365][SQL]Wrong schema generated for List type |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-02-12 22:18:39 -0800 |
| Commit: b9f332a, github.com/apache/spark/pull/4581 |
| |
| [SPARK-3299][SQL]Public API in SQLContext to list tables |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 18:08:01 -0800 |
| Commit: edbac17, github.com/apache/spark/pull/4547 |
| |
| [SQL] Move SaveMode to SQL package. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 15:32:17 -0800 |
| Commit: 925fd84, github.com/apache/spark/pull/4542 |
| |
| [SPARK-5335] Fix deletion of security groups within a VPC |
| Vladimir Grigor <vladimir@kiosked.com>, Vladimir Grigor <vladimir@voukka.com> |
| 2015-02-12 23:26:24 +0000 |
| Commit: 5c9db4e, github.com/apache/spark/pull/4122 |
| |
| [SPARK-5755] [SQL] remove unnecessary Add |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-12 15:22:07 -0800 |
| Commit: f7103b3, github.com/apache/spark/pull/4551 |
| |
| [SPARK-5573][SQL] Add explode to dataframes |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-12 15:19:19 -0800 |
| Commit: c7eb9ee, github.com/apache/spark/pull/4546 |
| |
| [SPARK-5758][SQL] Use LongType as the default type for integers in JSON schema inference. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-12 15:17:25 -0800 |
| Commit: b0c79da, github.com/apache/spark/pull/4544 |
| |
| [SPARK-5780] [PySpark] Mute the logging during unit tests |
| Davies Liu <davies@databricks.com> |
| 2015-02-12 14:54:38 -0800 |
| Commit: bf0d15c, github.com/apache/spark/pull/4572 |
| |
| SPARK-5747: Fix wordsplitting bugs in make-distribution.sh |
| David Y. Ross <dyross@gmail.com> |
| 2015-02-12 14:52:38 -0800 |
| Commit: 11a0d5b, github.com/apache/spark/pull/4540 |
| |
| [SPARK-5759][Yarn]ExecutorRunnable should catch YarnException while NMClient start contain... |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-12 14:50:16 -0800 |
| Commit: 02d5b32, github.com/apache/spark/pull/4554 |
| |
| [SPARK-5760][SPARK-5761] Fix standalone rest protocol corner cases + revamp tests |
| Andrew Or <andrew@databricks.com> |
| 2015-02-12 14:47:52 -0800 |
| Commit: 11d1080, github.com/apache/spark/pull/4557 |
| |
| [SPARK-5762] Fix shuffle write time for sort-based shuffle |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-12 14:46:37 -0800 |
| Commit: 0040fc5, github.com/apache/spark/pull/4559 |
| |
| [SPARK-5765][Examples]Fixed word split problem in run-example and compute-classpath |
| Venkata Ramana G <ramana.gollamudi@huawei.com>, Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-02-12 14:44:21 -0800 |
| Commit: 9a1de4b, github.com/apache/spark/pull/4561 |
| |
| [SPARK-5645] Added local read bytes/time to task metrics |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-12 14:35:44 -0800 |
| Commit: 74f34bb, github.com/apache/spark/pull/4510 |
| |
| [SQL] Improve error messages |
| Michael Armbrust <michael@databricks.com>, wangfei <wangfei1@huawei.com> |
| 2015-02-12 13:11:28 -0800 |
| Commit: e3a975d, github.com/apache/spark/pull/4558 |
| |
| [SQL][DOCS] Update sql documentation |
| Antonio Navarro Perez <ajnavarro@users.noreply.github.com> |
| 2015-02-12 12:46:17 -0800 |
| Commit: cbd659e, github.com/apache/spark/pull/4560 |
| |
| [SPARK-5757][MLLIB] replace SQL JSON usage in model import/export by json4s |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-12 10:48:13 -0800 |
| Commit: e26c149, github.com/apache/spark/pull/4555 |
| |
| [SPARK-5655] Don't chmod700 application files if running in YARN |
| Andrew Rowson <github@growse.com> |
| 2015-02-12 18:41:39 +0000 |
| Commit: e23c8f5, github.com/apache/spark/pull/4509 |
| |
| [SQL] Make dataframe more tolerant of being serialized |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-11 19:05:49 -0800 |
| Commit: 3c1b9bf, github.com/apache/spark/pull/4545 |
| |
| [SQL] Two DataFrame fixes. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-11 18:32:48 -0800 |
| Commit: bcb1382, github.com/apache/spark/pull/4543 |
| |
| [SPARK-3688][SQL] More inline comments for LogicalPlan. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-11 15:26:31 -0800 |
| Commit: 08ab3d2, github.com/apache/spark/pull/4539 |
| |
| [SPARK-3688][SQL]LogicalPlan can't resolve column correctlly |
| tianyi <tianyi.asiainfo@gmail.com> |
| 2015-02-11 12:50:17 -0800 |
| Commit: e136f47, github.com/apache/spark/pull/4524 |
| |
| [SPARK-5454] More robust handling of self joins |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-11 12:31:56 -0800 |
| Commit: 1bb3631, github.com/apache/spark/pull/4520 |
| |
| Remove outdated remark about take(n). |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2015-02-11 20:24:17 +0000 |
| Commit: 72adfc5, github.com/apache/spark/pull/4533 |
| |
| [SPARK-5677] [SPARK-5734] [SQL] [PySpark] Python DataFrame API remaining tasks |
| Davies Liu <davies@databricks.com> |
| 2015-02-11 12:13:16 -0800 |
| Commit: d66aae2, github.com/apache/spark/pull/4528 |
| |
| [SPARK-5733] Error Link in Pagination of HistroyPage when showing Incomplete Applications |
| guliangliang <guliangliang@qiyi.com> |
| 2015-02-11 15:55:49 +0000 |
| Commit: 864dccd, github.com/apache/spark/pull/4523 |
| |
| SPARK-5727 [BUILD] Deprecate Debian packaging |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-11 08:30:16 +0000 |
| Commit: 057ec4f, github.com/apache/spark/pull/4516 |
| |
| SPARK-5728 [STREAMING] MQTTStreamSuite leaves behind ActiveMQ database files |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-11 08:13:51 +0000 |
| Commit: 476b6d7, github.com/apache/spark/pull/4517 |
| |
| [SPARK-4964] [Streaming] refactor createRDD to take leaders via map instead of array |
| cody koeninger <cody@koeninger.org> |
| 2015-02-11 00:13:27 -0800 |
| Commit: 811d179, github.com/apache/spark/pull/4511 |
| |
| Preparing development version 1.3.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 07:47:03 +0000 |
| Commit: e57c81b |
| |
| Preparing Spark release v1.3.0-snapshot1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 07:47:02 +0000 |
| Commit: d97bfc6 |
| |
| Revert "Preparing Spark release v1.3.0-snapshot1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 23:46:04 -0800 |
| Commit: 6a91d59 |
| |
| Revert "Preparing development version 1.3.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 23:46:02 -0800 |
| Commit: 3a50383 |
| |
| HOTFIX: Adding Junit to Hive tests for Maven build |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 23:39:21 -0800 |
| Commit: 0386fc4 |
| |
| Preparing development version 1.3.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 06:45:03 +0000 |
| Commit: ba12b79 |
| |
| Preparing Spark release v1.3.0-snapshot1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 06:45:03 +0000 |
| Commit: 53068f5 |
| |
| HOTFIX: Java 6 compilation error in Spark SQL |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 22:43:32 -0800 |
| Commit: 15180bc |
| |
| Revert "Preparing Spark release v1.3.0-snapshot1" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 22:44:10 -0800 |
| Commit: 536dae9 |
| |
| Revert "Preparing development version 1.3.1-SNAPSHOT" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 22:44:07 -0800 |
| Commit: 01b562e |
| |
| Preparing development version 1.3.1-SNAPSHOT |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 06:15:29 +0000 |
| Commit: db80d0f |
| |
| Preparing Spark release v1.3.0-snapshot1 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-11 06:15:29 +0000 |
| Commit: c2e4001 |
| |
| Updating versions for Spark 1.3 |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-10 21:54:55 -0800 |
| Commit: 2f52489 |
| |
| [SPARK-5714][Mllib] Refactor initial step of LDA to remove redundant operations |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-10 21:51:15 -0800 |
| Commit: ba3aa8f, github.com/apache/spark/pull/4501 |
| |
| [SPARK-5702][SQL] Allow short names for built-in data sources. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-10 20:40:21 -0800 |
| Commit: 63af90c, github.com/apache/spark/pull/4489 |
| |
| [SPARK-5729] Potential NPE in standalone REST API |
| Andrew Or <andrew@databricks.com> |
| 2015-02-10 20:19:14 -0800 |
| Commit: 1bc75b0, github.com/apache/spark/pull/4518 |
| |
| [SPARK-4879] Use driver to coordinate Hadoop output committing for speculative tasks |
| mcheah <mcheah@palantir.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-02-10 20:12:18 -0800 |
| Commit: 79cd59c, github.com/apache/spark/pull/4155 |
| |
| [SQL][DataFrame] Fix column computability bug. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-10 19:50:44 -0800 |
| Commit: e477e91, github.com/apache/spark/pull/4519 |
| |
| [SPARK-5709] [SQL] Add EXPLAIN support in DataFrame API for debugging purpose |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-10 19:40:51 -0800 |
| Commit: 7fa0d5f, github.com/apache/spark/pull/4496 |
| |
| [SPARK-5704] [SQL] [PySpark] createDataFrame from RDD with columns |
| Davies Liu <davies@databricks.com> |
| 2015-02-10 19:40:12 -0800 |
| Commit: 1056c5b, github.com/apache/spark/pull/4498 |
| |
| [SPARK-5683] [SQL] Avoid multiple json generator created |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-10 18:19:56 -0800 |
| Commit: fc0446f, github.com/apache/spark/pull/4468 |
| |
| [SQL] Add an exception for analysis errors. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-10 17:32:42 -0800 |
| Commit: 748cdc1, github.com/apache/spark/pull/4439 |
| |
| [SPARK-5658][SQL] Finalize DDL and write support APIs |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-10 17:29:52 -0800 |
| Commit: a21090e, github.com/apache/spark/pull/4446 |
| |
| [SPARK-5493] [core] Add option to impersonate user. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-10 17:19:10 -0800 |
| Commit: 8e75b0e, github.com/apache/spark/pull/4405 |
| |
| [SQL] Make Options in the data source API CREATE TABLE statements optional. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-10 17:06:12 -0800 |
| Commit: 445dbc7, github.com/apache/spark/pull/4515 |
| |
| [SPARK-5725] [SQL] Fixes ParquetRelation2.equals |
| Cheng Lian <lian@databricks.com> |
| 2015-02-10 17:02:44 -0800 |
| Commit: f43bc3d, github.com/apache/spark/pull/4513 |
| |
| [SPARK-5343][GraphX]: ShortestPaths traverses backwards |
| Brennon York <brennon.york@capitalone.com> |
| 2015-02-10 14:57:00 -0800 |
| Commit: 5be8902, github.com/apache/spark/pull/4478 |
| |
| [SPARK-5021] [MLlib] Gaussian Mixture now supports Sparse Input |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-02-10 14:05:55 -0800 |
| Commit: bba0953, github.com/apache/spark/pull/4459 |
| |
| [HOTFIX][SPARK-4136] Fix compilation and tests |
| Andrew Or <andrew@databricks.com> |
| 2015-02-10 11:18:01 -0800 |
| Commit: 4e3aa68 |
| |
| [SPARK-5686][SQL] Add show current roles command in HiveQl |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-10 13:20:15 -0800 |
| Commit: 8b7587a, github.com/apache/spark/pull/4471 |
| |
| [SQL] Add toString to DataFrame/Column |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-10 13:14:01 -0800 |
| Commit: ef739d9, github.com/apache/spark/pull/4436 |
| |
| SPARK-5613: Catch the ApplicationNotFoundException exception to avoid the thread from getting killed on YARN restart. |
| Kashish Jain <kashish.jain@guavus.com> |
| 2015-02-06 13:47:23 -0800 |
| Commit: c294216, github.com/apache/spark/pull/4392 |
| |
| [SPARK-5592][SQL] java.net.URISyntaxException when insert data to a partitioned table |
| wangfei <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-02-10 11:54:30 -0800 |
| Commit: dbfce30, github.com/apache/spark/pull/4368 |
| |
| SPARK-4136. Under dynamic allocation, cancel outstanding executor requests when no longer needed |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-10 11:07:25 -0800 |
| Commit: e53da21, github.com/apache/spark/pull/4168 |
| |
| [SPARK-5716] [SQL] Support TOK_CHARSETLITERAL in HiveQl |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-10 11:08:21 -0800 |
| Commit: e508237, github.com/apache/spark/pull/4502 |
| |
| [SPARK-5717] [MLlib] add stop and reorganize import |
| JqueryFan <firing@126.com>, Yuhao Yang <hhbyyh@gmail.com> |
| 2015-02-10 17:37:32 +0000 |
| Commit: b32f553, github.com/apache/spark/pull/4503 |
| |
| [SPARK-5700] [SQL] [Build] Bumps jets3t to 0.9.3 for hadoop-2.3 and hadoop-2.4 profiles |
| Cheng Lian <lian@databricks.com> |
| 2015-02-10 02:28:47 -0800 |
| Commit: d6f31e0, github.com/apache/spark/pull/4499 |
| |
| SPARK-5239 [CORE] JdbcRDD throws "java.lang.AbstractMethodError: oracle.jdbc.driver.xxxxxx.isClosed()Z" |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-10 09:19:01 +0000 |
| Commit: 4cfc025, github.com/apache/spark/pull/4470 |
| |
| [SPARK-4964][Streaming][Kafka] More updates to Exactly-once Kafka stream |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-09 22:45:48 -0800 |
| Commit: 281614d, github.com/apache/spark/pull/4384 |
| |
| [SPARK-5597][MLLIB] save/load for decision trees and ensembles |
| Joseph K. Bradley <joseph@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-09 22:09:07 -0800 |
| Commit: 01905c4, github.com/apache/spark/pull/4444 |
| |
| [SQL] Remove the duplicated code |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-09 21:33:34 -0800 |
| Commit: 663d34e, github.com/apache/spark/pull/4494 |
| |
| [SPARK-5701] Only set ShuffleReadMetrics when task has shuffle deps |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-02-09 21:22:09 -0800 |
| Commit: 6ddbca4, github.com/apache/spark/pull/4488 |
| |
| [SPARK-5703] AllJobsPage throws empty.max exception |
| Andrew Or <andrew@databricks.com> |
| 2015-02-09 21:18:48 -0800 |
| Commit: 8326255, github.com/apache/spark/pull/4490 |
| |
| [SPARK-2996] Implement userClassPathFirst for driver, yarn. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-09 21:17:06 -0800 |
| Commit: 6a1e0f9, github.com/apache/spark/pull/3233 |
| |
| SPARK-4900 [MLLIB] MLlib SingularValueDecomposition ARPACK IllegalStateException |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-09 21:13:58 -0800 |
| Commit: ebf1df0, github.com/apache/spark/pull/4485 |
| |
| Add a config option to print DAG. |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2015-02-09 20:58:58 -0800 |
| Commit: dad05e0, github.com/apache/spark/pull/4257 |
| |
| [SPARK-5469] restructure pyspark.sql into multiple files |
| Davies Liu <davies@databricks.com> |
| 2015-02-09 20:49:22 -0800 |
| Commit: f0562b4, github.com/apache/spark/pull/4479 |
| |
| [SPARK-5698] Do not let user request negative # of executors |
| Andrew Or <andrew@databricks.com> |
| 2015-02-09 17:33:29 -0800 |
| Commit: 62b1e1f, github.com/apache/spark/pull/4483 |
| |
| [SPARK-5699] [SQL] [Tests] Runs hive-thriftserver tests whenever SQL code is modified |
| Cheng Lian <lian@databricks.com> |
| 2015-02-09 16:52:05 -0800 |
| Commit: 71f0f51, github.com/apache/spark/pull/4486 |
| |
| [SPARK-5648][SQL] support "alter ... unset tblproperties("key")" |
| DoingDone9 <799203320@qq.com> |
| 2015-02-09 16:40:26 -0800 |
| Commit: e2bf59a, github.com/apache/spark/pull/4424 |
| |
| [SPARK-2096][SQL] support dot notation on array of struct |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-02-09 16:39:34 -0800 |
| Commit: 15f557f, github.com/apache/spark/pull/2405 |
| |
| [SPARK-5614][SQL] Predicate pushdown through Generate. |
| Lu Yan <luyan02@baidu.com> |
| 2015-02-09 16:25:38 -0800 |
| Commit: ce2c89c, github.com/apache/spark/pull/4394 |
| |
| [SPARK-5696] [SQL] [HOTFIX] Asks HiveThriftServer2 to re-initialize log4j using Hive configurations |
| Cheng Lian <lian@databricks.com> |
| 2015-02-09 16:23:12 -0800 |
| Commit: 379233c, github.com/apache/spark/pull/4484 |
| |
| [SQL] Code cleanup. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-09 16:20:42 -0800 |
| Commit: e241601, github.com/apache/spark/pull/4482 |
| |
| [SQL] Add some missing DataFrame functions. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-09 16:02:56 -0800 |
| Commit: a70dca0, github.com/apache/spark/pull/4437 |
| |
| [SPARK-5675][SQL] XyzType companion object should subclass XyzType |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-09 14:51:46 -0800 |
| Commit: 1e2fab2, github.com/apache/spark/pull/4463 |
| |
| [SPARK-4905][STREAMING] FlumeStreamSuite fix. |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-09 14:17:14 -0800 |
| Commit: 18c5a99, github.com/apache/spark/pull/4371 |
| |
| [SPARK-5691] Fixing wrong data structure lookup for dupe app registratio... |
| mcheah <mcheah@palantir.com> |
| 2015-02-09 13:20:14 -0800 |
| Commit: 6a0144c, github.com/apache/spark/pull/4477 |
| |
| [SPARK-5678] Convert DataFrame to pandas.DataFrame and Series |
| Davies Liu <davies@databricks.com> |
| 2015-02-09 11:42:52 -0800 |
| Commit: 43972b5, github.com/apache/spark/pull/4476 |
| |
| [SPARK-5664][BUILD] Restore stty settings when exiting from SBT's spark-shell |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-09 11:45:12 -0800 |
| Commit: fa67877, github.com/apache/spark/pull/4451 |
| |
| SPARK-4267 [YARN] Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-09 10:33:57 -0800 |
| Commit: c88d4ab, github.com/apache/spark/pull/4452 |
| |
| [SPARK-5473] [EC2] Expose SSH failures after status checks pass |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-09 09:44:53 +0000 |
| Commit: f2aa7b7, github.com/apache/spark/pull/4262 |
| |
| [SPARK-5539][MLLIB] LDA guide |
| Xiangrui Meng <meng@databricks.com>, Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-08 23:40:36 -0800 |
| Commit: 5782ee2, github.com/apache/spark/pull/4465 |
| |
| [SPARK-5472][SQL] Fix Scala code style |
| Hung Lin <hung@zoomdata.com> |
| 2015-02-08 22:36:42 -0800 |
| Commit: 955f286, github.com/apache/spark/pull/4464 |
| |
| SPARK-4405 [MLLIB] Matrices.* construction methods should check for rows x cols overflow |
| Sean Owen <sowen@cloudera.com> |
| 2015-02-08 21:08:50 -0800 |
| Commit: fa8ea48, github.com/apache/spark/pull/4461 |
| |
| [SPARK-5660][MLLIB] Make Matrix apply public |
| Joseph K. Bradley <joseph@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-08 21:07:36 -0800 |
| Commit: df9b105, github.com/apache/spark/pull/4447 |
| |
| [SPARK-5643][SQL] Add a show method to print the content of a DataFrame in tabular format. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-08 18:56:51 -0800 |
| Commit: e1996aa, github.com/apache/spark/pull/4416 |
| |
| SPARK-5665 [DOCS] Update netlib-java documentation |
| Sam Halliday <sam.halliday@Gmail.com>, Sam Halliday <sam.halliday@gmail.com> |
| 2015-02-08 16:34:26 -0800 |
| Commit: c515634, github.com/apache/spark/pull/4448 |
| |
| [SPARK-5598][MLLIB] model save/load for ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-08 16:26:20 -0800 |
| Commit: 9e4d58f, github.com/apache/spark/pull/4422 |
| |
| [SQL] Set sessionState in QueryExecution. |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-08 14:55:07 -0800 |
| Commit: 42c56b6, github.com/apache/spark/pull/4445 |
| |
| [SPARK-3039] [BUILD] Spark assembly for new hadoop API (hadoop 2) contai... |
| medale <medale94@yahoo.com> |
| 2015-02-08 10:35:29 +0000 |
| Commit: bc55e20, github.com/apache/spark/pull/4315 |
| |
| [SPARK-5672][Web UI] Don't return `ERROR 500` when args are missing |
| Kirill A. Korinskiy <catap@catap.ru> |
| 2015-02-08 10:31:46 +0000 |
| Commit: 96010fa, github.com/apache/spark/pull/4239 |
| |
| [SPARK-5671] Upgrade jets3t to 0.9.2 in hadoop-2.3 and 2.4 profiles |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-07 17:19:08 -0800 |
| Commit: 0f9d765, github.com/apache/spark/pull/4454 |
| |
| [SPARK-5108][BUILD] Jackson dependency management for Hadoop-2.6.0 support |
| Zhan Zhang <zhazhan@gmail.com> |
| 2015-02-07 19:41:30 +0000 |
| Commit: 51fbca4, github.com/apache/spark/pull/3938 |
| |
| [BUILD] Add the ability to launch spark-shell from SBT. |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-07 00:14:38 -0800 |
| Commit: 6bda169, github.com/apache/spark/pull/4438 |
| |
| [SPARK-5388] Provide a stable application submission gateway for standalone cluster mode |
| Andrew Or <andrew@databricks.com> |
| 2015-02-06 15:57:06 -0800 |
| Commit: 6ec0cdc, github.com/apache/spark/pull/4216 |
| |
| SPARK-5403: Ignore UserKnownHostsFile in SSH calls |
| Grzegorz Dubicki <grzegorz.dubicki@gmail.com> |
| 2015-02-06 15:43:58 -0800 |
| Commit: 3d99741, github.com/apache/spark/pull/4196 |
| |
| [SPARK-5601][MLLIB] make streaming linear algorithms Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-06 15:42:59 -0800 |
| Commit: 11b28b9, github.com/apache/spark/pull/4432 |
| |
| [SQL] [Minor] HiveParquetSuite was disabled by mistake, re-enable them |
| Cheng Lian <lian@databricks.com> |
| 2015-02-06 15:23:42 -0800 |
| Commit: 4005802, github.com/apache/spark/pull/4440 |
| |
| [SQL] Use TestSQLContext in Java tests |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-06 15:11:02 -0800 |
| Commit: c950058, github.com/apache/spark/pull/4441 |
| |
| [SPARK-4994][network]Cleanup removed executors' ShuffleInfo in yarn shuffle service |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 14:47:52 -0800 |
| Commit: af6ddf8, github.com/apache/spark/pull/3828 |
| |
| [SPARK-5444][Network]Add a retry to deal with the conflict port in netty server. |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-02-06 14:35:29 -0800 |
| Commit: caca15a, github.com/apache/spark/pull/4240 |
| |
| [SPARK-4874] [CORE] Collect record count metrics |
| Kostas Sakellis <kostas@cloudera.com> |
| 2015-02-06 14:31:20 -0800 |
| Commit: 9fa29a6, github.com/apache/spark/pull/4067 |
| |
| [HOTFIX] Fix the maven build after adding sqlContext to spark-shell |
| Michael Armbrust <michael@databricks.com> |
| 2015-02-06 14:27:06 -0800 |
| Commit: 11dbf71, github.com/apache/spark/pull/4443 |
| |
| [SPARK-5600] [core] Clean up FsHistoryProvider test, fix app sort order. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-06 14:23:09 -0800 |
| Commit: 09feecc, github.com/apache/spark/pull/4370 |
| |
| SPARK-5633 pyspark saveAsTextFile support for compression codec |
| Vladimir Vladimirov <vladimir.vladimirov@magnetic.com> |
| 2015-02-06 13:55:02 -0800 |
| Commit: 1d32341, github.com/apache/spark/pull/4403 |
| |
| [HOTFIX][MLLIB] fix a compilation error with java 6 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-06 13:52:35 -0800 |
| Commit: 87e0f0d, github.com/apache/spark/pull/4442 |
| |
| [SPARK-4983] Insert waiting time before tagging EC2 instances |
| GenTang <gen.tang86@gmail.com>, Gen TANG <gen.tang86@gmail.com> |
| 2015-02-06 13:27:34 -0800 |
| Commit: 2872d83, github.com/apache/spark/pull/3986 |
| |
| [SPARK-5586][Spark Shell][SQL] Make `sqlContext` available in spark shell |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-06 13:20:10 -0800 |
| Commit: 2ef9853, github.com/apache/spark/pull/4387 |
| |
| [SPARK-5278][SQL] Introduce UnresolvedGetField and complete the check of ambiguous reference to fields |
| Wenchen Fan <cloud0fan@outlook.com> |
| 2015-02-06 13:08:09 -0800 |
| Commit: 1b148ad, github.com/apache/spark/pull/4068 |
| |
| [SQL][Minor] Remove cache keyword in SqlParser |
| wangfei <wangfei1@huawei.com> |
| 2015-02-06 12:42:23 -0800 |
| Commit: d822606, github.com/apache/spark/pull/4393 |
| |
| [SQL][HiveConsole][DOC] HiveConsole `correct hiveconsole imports` |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-06 12:41:28 -0800 |
| Commit: 2abaa6e, github.com/apache/spark/pull/4389 |
| |
| [SPARK-5595][SPARK-5603][SQL] Add a rule to do PreInsert type casting and field renaming and invalidating in memory cache after INSERT |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-06 12:38:07 -0800 |
| Commit: 3c34d62, github.com/apache/spark/pull/4373 |
| |
| [SPARK-5324][SQL] Results of describe can't be queried |
| OopsOutOfMemory <victorshengli@126.com>, Sheng, Li <OopsOutOfMemory@users.noreply.github.com> |
| 2015-02-06 12:33:20 -0800 |
| Commit: 0fc35da, github.com/apache/spark/pull/4249 |
| |
| [SPARK-5619][SQL] Support 'show roles' in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-02-06 12:29:26 -0800 |
| Commit: cc66a3c, github.com/apache/spark/pull/4397 |
| |
| [SPARK-5640] Synchronize ScalaReflection where necessary |
| Tobias Schlatter <tobias@meisch.ch> |
| 2015-02-06 12:15:02 -0800 |
| Commit: 779e28b, github.com/apache/spark/pull/4431 |
| |
| [SPARK-5650][SQL] Support optional 'FROM' clause |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-06 12:13:44 -0800 |
| Commit: 921121d, github.com/apache/spark/pull/4426 |
| |
| [SPARK-5628] Add version option to spark-ec2 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-06 12:08:22 -0800 |
| Commit: ab0ffde, github.com/apache/spark/pull/4414 |
| |
| [SPARK-2945][YARN][Doc]add doc for spark.executor.instances |
| WangTaoTheTonic <wangtao111@huawei.com> |
| 2015-02-06 11:57:02 -0800 |
| Commit: 540f474, github.com/apache/spark/pull/4350 |
| |
| [SPARK-4361][Doc] Add more docs for Hadoop Configuration |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-06 11:50:20 -0800 |
| Commit: 528dd34, github.com/apache/spark/pull/3225 |
| |
| [HOTFIX] Fix test build break in ExecutorAllocationManagerSuite. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:47:32 -0800 |
| Commit: 9e828f4 |
| |
| [SPARK-5652][Mllib] Use broadcasted weights in LogisticRegressionModel |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-06 11:22:11 -0800 |
| Commit: 6fda4c1, github.com/apache/spark/pull/4429 |
| |
| [SPARK-5555] Enable UISeleniumSuite tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:14:58 -0800 |
| Commit: 93fee7b, github.com/apache/spark/pull/4334 |
| |
| SPARK-2450 Adds executor log links to Web UI |
| Kostas Sakellis <kostas@cloudera.com>, Josh Rosen <joshrosen@databricks.com> |
| 2015-02-06 11:13:00 -0800 |
| Commit: e74dd04, github.com/apache/spark/pull/3486 |
| |
| [SPARK-5618][Spark Core][Minor] Optimise utility code. |
| Makoto Fukuhara <fukuo33@gmail.com> |
| 2015-02-06 11:11:38 -0800 |
| Commit: 3feb798, github.com/apache/spark/pull/4396 |
| |
| [SPARK-5593][Core]Replace BlockManagerListener with ExecutorListener in ExecutorAllocationListener |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 11:09:37 -0800 |
| Commit: 9387dc1, github.com/apache/spark/pull/4369 |
| |
| [SPARK-4877] Allow user first classes to extend classes in the parent. |
| Stephen Haberman <stephen@exigencecorp.com> |
| 2015-02-06 11:03:56 -0800 |
| Commit: 52386cf, github.com/apache/spark/pull/3725 |
| |
| [SPARK-5396] Syntax error in spark scripts on windows. |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-02-06 10:58:26 -0800 |
| Commit: 2dc94cd, github.com/apache/spark/pull/4428 |
| |
| [SPARK-5636] Ramp up faster in dynamic allocation |
| Andrew Or <andrew@databricks.com> |
| 2015-02-06 10:54:23 -0800 |
| Commit: 0a90305, github.com/apache/spark/pull/4409 |
| |
| SPARK-4337. [YARN] Add ability to cancel pending requests |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-06 10:53:16 -0800 |
| Commit: 1568391, github.com/apache/spark/pull/4141 |
| |
| [SPARK-5416] init Executor.threadPool before ExecutorSource |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-02-06 12:22:25 +0000 |
| Commit: f9bc4ef, github.com/apache/spark/pull/4212 |
| |
| [Build] Set all Debian package permissions to 755 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-06 11:38:39 +0000 |
| Commit: 3638216, github.com/apache/spark/pull/4277 |
| |
| Update ec2-scripts.md |
| Miguel Peralvo <miguel.peralvo@gmail.com> |
| 2015-02-06 11:04:48 +0000 |
| Commit: f6613fc, github.com/apache/spark/pull/4300 |
| |
| [SPARK-5470][Core]use defaultClassLoader to load classes in KryoSerializer |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 11:00:35 +0000 |
| Commit: 8007a4f, github.com/apache/spark/pull/4258 |
| |
| [SPARK-5653][YARN] In ApplicationMaster rename isDriver to isClusterMode |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-02-06 10:48:31 -0800 |
| Commit: 4ff8855, github.com/apache/spark/pull/4430 |
| |
| [SPARK-5582] [history] Ignore empty log directories. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-02-06 10:07:20 +0000 |
| Commit: faccdcb, github.com/apache/spark/pull/4352 |
| |
| [SPARK-5157][YARN] Configure more JVM options properly when we use ConcMarkSweepGC for AM. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-06 09:39:12 +0000 |
| Commit: 25d8044, github.com/apache/spark/pull/3956 |
| |
| [Minor] Remove permission for execution from spark-shell.cmd |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-06 09:33:36 +0000 |
| Commit: 7c54681, github.com/apache/spark/pull/3983 |
| |
| [SPARK-5380][GraphX] Solve an ArrayIndexOutOfBoundsException when build graph with a file format error |
| Leolh <leosandylh@gmail.com> |
| 2015-02-06 09:01:53 +0000 |
| Commit: ffdb2e9, github.com/apache/spark/pull/4176 |
| |
| [SPARK-5013] [MLlib] Added documentation and sample data file for GaussianMixture |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-02-06 10:26:51 -0800 |
| Commit: f408db6, github.com/apache/spark/pull/4401 |
| |
| [SPARK-4789] [SPARK-4942] [SPARK-5031] [mllib] Standardize ML Prediction APIs |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-05 23:43:47 -0800 |
| Commit: 45b95e7, github.com/apache/spark/pull/3637 |
| |
| [SPARK-5604][MLLIB] remove checkpointDir from trees |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 23:32:09 -0800 |
| Commit: c35a11e, github.com/apache/spark/pull/4407 |
| |
| [SPARK-5639][SQL] Support DataFrame.renameColumn. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 23:02:40 -0800 |
| Commit: 0639d3e, github.com/apache/spark/pull/4410 |
| |
| Revert "SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2." |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-05 18:37:55 -0800 |
| Commit: 6d31531 |
| |
| SPARK-5557: Explicitly include servlet API in dependencies. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-05 18:14:54 -0800 |
| Commit: 34131fd, github.com/apache/spark/pull/4411 |
| |
| [HOTFIX] [SQL] Disables Metastore Parquet table conversion for "SQLQuerySuite.CTAS with serde" |
| Cheng Lian <lian@databricks.com> |
| 2015-02-05 18:09:18 -0800 |
| Commit: ce6d8bb, github.com/apache/spark/pull/4413 |
| |
| [SPARK-5638][SQL] Add a config flag to disable eager analysis of DataFrames |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 18:07:10 -0800 |
| Commit: 4fd67e4, github.com/apache/spark/pull/4408 |
| |
| [SPARK-5620][DOC] group methods in generated unidoc |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 16:26:51 -0800 |
| Commit: e2be79d, github.com/apache/spark/pull/4404 |
| |
| [SPARK-5182] [SPARK-5528] [SPARK-5509] [SPARK-3575] [SQL] Parquet data source improvements |
| Cheng Lian <lian@databricks.com> |
| 2015-02-05 15:29:56 -0800 |
| Commit: 50c48eb, github.com/apache/spark/pull/4308 |
| |
| [SPARK-5604][MLLIB] remove checkpointDir from LDA |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-05 15:07:33 -0800 |
| Commit: 59798cb, github.com/apache/spark/pull/4390 |
| |
| [SPARK-5460][MLlib] Wrapped `Try` around `deleteAllCheckpoints` - RandomForest. |
| x1- <viva008@gmail.com> |
| 2015-02-05 15:02:04 -0800 |
| Commit: 44768f5, github.com/apache/spark/pull/4347 |
| |
| [SPARK-5135][SQL] Add support for describe table to DDL in SQLContext |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-05 13:07:48 -0800 |
| Commit: 55cebcf, github.com/apache/spark/pull/4227 |
| |
| [SPARK-5617][SQL] fix test failure of SQLQuerySuite |
| wangfei <wangfei1@huawei.com> |
| 2015-02-05 12:44:12 -0800 |
| Commit: 785a2e3, github.com/apache/spark/pull/4395 |
| |
| [Branch-1.3] [DOC] doc fix for date |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-05 12:42:27 -0800 |
| Commit: 17ef7f9, github.com/apache/spark/pull/4400 |
| |
| [SPARK-5474][Build]curl should support URL redirection in build/mvn |
| GuoQiang Li <witgo@qq.com> |
| 2015-02-05 12:03:13 -0800 |
| Commit: d1066e9, github.com/apache/spark/pull/4263 |
| |
| [HOTFIX] MLlib build break. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-05 00:42:50 -0800 |
| Commit: c83d118 |
| |
| SPARK-5548: Fixed a race condition in AkkaUtilsSuite |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-05 12:00:04 -0800 |
| Commit: fba2dc6, github.com/apache/spark/pull/4343 |
| |
| [SPARK-5608] Improve SEO of Spark documentation pages |
| Matei Zaharia <matei@databricks.com> |
| 2015-02-05 11:12:50 -0800 |
| Commit: de112a2, github.com/apache/spark/pull/4381 |
| |
| SPARK-4687. Add a recursive option to the addFile API |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-05 10:15:55 -0800 |
| Commit: c22ccc0, github.com/apache/spark/pull/3670 |
| |
| [MLlib] Minor: UDF style update. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 23:57:53 -0800 |
| Commit: 4074674, github.com/apache/spark/pull/4388 |
| |
| [SPARK-5612][SQL] Move DataFrame implicit functions into SQLContext.implicits. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 23:44:34 -0800 |
| Commit: 0040b61, github.com/apache/spark/pull/4386 |
| |
| [SPARK-5606][SQL] Support plus sign in HiveContext |
| q00251598 <qiyadong@huawei.com> |
| 2015-02-04 23:16:01 -0800 |
| Commit: bf43781, github.com/apache/spark/pull/4378 |
| |
| [SPARK-5599] Check MLlib public APIs for 1.3 |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-04 23:03:47 -0800 |
| Commit: abc184e, github.com/apache/spark/pull/4377 |
| |
| [SPARK-5596] [mllib] ML model import/export for GLMs, NaiveBayes |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-04 22:46:48 -0800 |
| Commit: 885bcbb, github.com/apache/spark/pull/4233 |
| |
| SPARK-5607: Update to Kryo 2.24.0 to avoid including objenesis 1.2. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-04 22:39:44 -0800 |
| Commit: 59fb5c7, github.com/apache/spark/pull/4383 |
| |
| [SPARK-5602][SQL] Better support for creating DataFrame from local data collection |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:53:57 -0800 |
| Commit: b8f9c00, github.com/apache/spark/pull/4372 |
| |
| [SPARK-5538][SQL] Fix flaky CachedTableSuite |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:52:41 -0800 |
| Commit: 1901b19, github.com/apache/spark/pull/4379 |
| |
| [SQL][DataFrame] Minor cleanup. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 19:51:48 -0800 |
| Commit: f05bfa6, github.com/apache/spark/pull/4374 |
| |
| [SPARK-4520] [SQL] This pr fixes the ArrayIndexOutOfBoundsException as r... |
| Sadhan Sood <sadhan@tellapart.com> |
| 2015-02-04 19:18:06 -0800 |
| Commit: aa6f4ca, github.com/apache/spark/pull/4148 |
| |
| [SPARK-5605][SQL][DF] Allow using String to specify column name in DSL aggregate functions |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-04 18:35:51 -0800 |
| Commit: 478ee3f, github.com/apache/spark/pull/4376 |
| |
| [SPARK-5411] Allow SparkListeners to be specified in SparkConf and loaded when creating SparkContext |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-04 17:18:03 -0800 |
| Commit: 47e4d57, github.com/apache/spark/pull/4111 |
| |
| [SPARK-5577] Python udf for DataFrame |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 15:55:09 -0800 |
| Commit: dc9ead9, github.com/apache/spark/pull/4351 |
| |
| [SPARK-5118][SQL] Fix: create table test stored as parquet as select .. |
| guowei2 <guowei2@asiainfo.com> |
| 2015-02-04 15:26:10 -0800 |
| Commit: 06da868, github.com/apache/spark/pull/3921 |
| |
| [SQL] Use HiveContext's sessionState in HiveMetastoreCatalog.hiveDefaultTableFilePath |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-04 15:22:40 -0800 |
| Commit: cb4c3e5, github.com/apache/spark/pull/4355 |
| |
| [SQL] Correct the default size of TimestampType and expose NumericType |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-04 15:14:49 -0800 |
| Commit: 513bb2c, github.com/apache/spark/pull/4314 |
| |
| [SQL][Hiveconsole] Bring hive console code up to date and update README.md |
| OopsOutOfMemory <victorshengli@126.com>, Sheng, Li <OopsOutOfMemory@users.noreply.github.com> |
| 2015-02-04 15:13:54 -0800 |
| Commit: 2cdcfe3, github.com/apache/spark/pull/4330 |
| |
| [SPARK-5367][SQL] Support star expression in udfs |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-02-04 15:12:07 -0800 |
| Commit: 8b803f6, github.com/apache/spark/pull/4353 |
| |
| [SPARK-5426][SQL] Add SparkSQL Java API helper methods. |
| kul <kuldeep.bora@gmail.com> |
| 2015-02-04 15:08:37 -0800 |
| Commit: 38ab92e, github.com/apache/spark/pull/4243 |
| |
| [SPARK-5587][SQL] Support change database owner |
| wangfei <wangfei1@huawei.com> |
| 2015-02-04 14:35:12 -0800 |
| Commit: 7920791, github.com/apache/spark/pull/4357 |
| |
| [SPARK-5591][SQL] Fix NoSuchObjectException for CTAS |
| wangfei <wangfei1@huawei.com> |
| 2015-02-04 14:33:07 -0800 |
| Commit: c79dd1e, github.com/apache/spark/pull/4365 |
| |
| [SPARK-4939] move to next locality when no pending tasks |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 14:22:07 -0800 |
| Commit: f9bb3cb, github.com/apache/spark/pull/3779 |
| |
| [SPARK-4707][STREAMING] Reliable Kafka Receiver can lose data if the blo... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2015-02-04 14:20:44 -0800 |
| Commit: 14c9f32, github.com/apache/spark/pull/3655 |
| |
| [SPARK-4964] [Streaming] Exactly-once semantics for Kafka |
| cody koeninger <cody@koeninger.org> |
| 2015-02-04 12:06:34 -0800 |
| Commit: a119cae, github.com/apache/spark/pull/3798 |
| |
| [SPARK-5588] [SQL] support select/filter by SQL expression |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 11:34:46 -0800 |
| Commit: 950a0d3, github.com/apache/spark/pull/4359 |
| |
| [SPARK-5585] Flaky test in MLlib python |
| Davies Liu <davies@databricks.com> |
| 2015-02-04 08:54:20 -0800 |
| Commit: 84c6273, github.com/apache/spark/pull/4358 |
| |
| [SPARK-5574] use given name prefix in dir |
| Imran Rashid <irashid@cloudera.com> |
| 2015-02-04 01:02:20 -0800 |
| Commit: 5d9278a, github.com/apache/spark/pull/4344 |
| |
| [Minor] Fix incorrect warning log |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-04 00:52:41 -0800 |
| Commit: 316a4bb, github.com/apache/spark/pull/4360 |
| |
| [SPARK-5379][Streaming] Add awaitTerminationOrTimeout |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-04 00:40:28 -0800 |
| Commit: 4d3dbfd, github.com/apache/spark/pull/4171 |
| |
| [SPARK-5341] Use maven coordinates as dependencies in spark-shell and spark-submit |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-02-03 22:39:17 -0800 |
| Commit: 3b7acd2, github.com/apache/spark/pull/4215 |
| |
| [SPARK-4939] revive offers periodically in LocalBackend |
| Davies Liu <davies@databricks.com> |
| 2015-02-03 22:30:23 -0800 |
| Commit: e196da8, github.com/apache/spark/pull/4147 |
| |
| [SPARK-4969][STREAMING][PYTHON] Add binaryRecords to streaming |
| freeman <the.freeman.lab@gmail.com> |
| 2015-02-03 22:24:30 -0800 |
| Commit: 9a33f89, github.com/apache/spark/pull/3803 |
| |
| [SPARK-5579][SQL][DataFrame] Support for project/filter using SQL expressions |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 22:15:35 -0800 |
| Commit: cb7f783, github.com/apache/spark/pull/4348 |
| |
| [FIX][MLLIB] fix seed handling in Python GMM |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-03 20:39:11 -0800 |
| Commit: 679228b, github.com/apache/spark/pull/4349 |
| |
| [SPARK-4795][Core] Redesign the "primitive type => Writable" implicit APIs to make them be activated automatically |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-03 20:17:12 -0800 |
| Commit: 5c63e05, github.com/apache/spark/pull/3642 |
| |
| [SPARK-5578][SQL][DataFrame] Provide a convenient way for Scala users to use UDFs |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 20:07:46 -0800 |
| Commit: b22d5b5, github.com/apache/spark/pull/4345 |
| |
| [SPARK-5520][MLlib] Make FP-Growth implementation take generic item types (WIP) |
| Jacky Li <jacky.likun@huawei.com>, Jacky Li <jackylk@users.noreply.github.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-03 17:02:42 -0800 |
| Commit: 298ef5b, github.com/apache/spark/pull/4340 |
| |
| [SPARK-5554] [SQL] [PySpark] add more tests for DataFrame Python API |
| Davies Liu <davies@databricks.com> |
| 2015-02-03 16:01:56 -0800 |
| Commit: 4640623, github.com/apache/spark/pull/4331 |
| |
| [STREAMING] SPARK-4986 Wait for receivers to deregister and receiver job to terminate |
| Jesper Lundgren <jesper.lundgren@vpon.com> |
| 2015-02-03 14:53:39 -0800 |
| Commit: 092d4ba, github.com/apache/spark/pull/4338 |
| |
| [SPARK-5153][Streaming][Test] Increased timeout to deal with flaky KafkaStreamSuite |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2015-02-03 13:46:02 -0800 |
| Commit: d644bd9, github.com/apache/spark/pull/4342 |
| |
| [SPARK-4508] [SQL] build native date type to conform behavior to Hive |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-03 12:21:45 -0800 |
| Commit: 6e244cf, github.com/apache/spark/pull/4325 |
| |
| [SPARK-5383][SQL] Support alias for udtfs |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2015-02-03 12:16:31 -0800 |
| Commit: 5dbeb21, github.com/apache/spark/pull/4186 |
| |
| [SPARK-5550] [SQL] Support the case insensitive for UDF |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-02-03 12:12:26 -0800 |
| Commit: 654c992, github.com/apache/spark/pull/4326 |
| |
| [SPARK-4987] [SQL] parquet timestamp type support |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-03 12:06:06 -0800 |
| Commit: 67d5220, github.com/apache/spark/pull/3820 |
| |
| [SQL] DataFrame API update |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 10:34:56 -0800 |
| Commit: 4204a12, github.com/apache/spark/pull/4332 |
| |
| Minor: Fix TaskContext deprecated annotations. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 10:34:16 -0800 |
| Commit: f7948f3, github.com/apache/spark/pull/4333 |
| |
| [SPARK-5549] Define TaskContext interface in Scala. |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 00:46:04 -0800 |
| Commit: bebf4c4, github.com/apache/spark/pull/4324 |
| |
| [SPARK-5551][SQL] Create type alias for SchemaRDD for source backward compatibility |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 00:29:23 -0800 |
| Commit: 523a935, github.com/apache/spark/pull/4327 |
| |
| [SQL][DataFrame] Remove DataFrameApi, ExpressionApi, and GroupedDataFrameApi |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-03 00:29:04 -0800 |
| Commit: 37df330, github.com/apache/spark/pull/4328 |
| |
| [minor] update streaming linear algorithms |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-03 00:14:43 -0800 |
| Commit: 659329f, github.com/apache/spark/pull/4329 |
| |
| [SPARK-1405] [mllib] Latent Dirichlet Allocation (LDA) using EM |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-02 23:57:35 -0800 |
| Commit: 980764f, github.com/apache/spark/pull/2388 |
| |
| [SPARK-5536] replace old ALS implementation by the new one |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-02 23:49:09 -0800 |
| Commit: 0cc7b88, github.com/apache/spark/pull/4321 |
| |
| [SPARK-5414] Add SparkFirehoseListener class for consuming all SparkListener events |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-02-02 23:35:07 -0800 |
| Commit: b8ebebe, github.com/apache/spark/pull/4210 |
| |
| [SPARK-5501][SPARK-5420][SQL] Write support for the data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-02-02 23:30:44 -0800 |
| Commit: 13531dd, github.com/apache/spark/pull/4294 |
| |
| [SPARK-5012][MLLib][PySpark] Python API for Gaussian Mixture Model |
| FlytxtRnD <meethu.mathew@flytxt.com> |
| 2015-02-02 23:04:55 -0800 |
| Commit: 50a1a87, github.com/apache/spark/pull/4059 |
| |
| [SPARK-3778] newAPIHadoopRDD doesn't properly pass credentials for secure hdfs |
| Thomas Graves <tgraves@apache.org> |
| 2015-02-02 22:45:55 -0800 |
| Commit: c31c36c, github.com/apache/spark/pull/4292 |
| |
| [SPARK-4979][MLLIB] Streaming logistic regression |
| freeman <the.freeman.lab@gmail.com> |
| 2015-02-02 22:42:15 -0800 |
| Commit: eb0da6c, github.com/apache/spark/pull/4306 |
| |
| [SPARK-5219][Core] Add locks to avoid scheduling race conditions |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-02 21:42:18 -0800 |
| Commit: c306555, github.com/apache/spark/pull/4019 |
| |
| [Doc] Minor: Fixes several formatting issues |
| Cheng Lian <lian@databricks.com> |
| 2015-02-02 21:14:21 -0800 |
| Commit: 60f67e7, github.com/apache/spark/pull/4316 |
| |
| SPARK-3996: Add jetty servlet and continuations. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-02 21:01:36 -0800 |
| Commit: 7930d2b, github.com/apache/spark/pull/4323 |
| |
| SPARK-5542: Decouple publishing, packaging, and tagging in release script |
| Patrick Wendell <patrick@databricks.com>, Patrick Wendell <pwendell@gmail.com> |
| 2015-02-02 21:00:30 -0800 |
| Commit: 0ef38f5, github.com/apache/spark/pull/4319 |
| |
| [SPARK-5543][WebUI] Remove unused import JsonUtil from JsonProtocol |
| nemccarthy <nathan@nemccarthy.me> |
| 2015-02-02 20:03:13 -0800 |
| Commit: cb39f12, github.com/apache/spark/pull/4320 |
| |
| [SPARK-5472][SQL] A JDBC data source for Spark SQL. |
| Tor Myklebust <tmyklebu@gmail.com> |
| 2015-02-02 19:50:14 -0800 |
| Commit: 8f471a6, github.com/apache/spark/pull/4261 |
| |
| [SPARK-5512][Mllib] Run the PIC algorithm with the initial vector suggested by the PIC paper |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-02 19:34:25 -0800 |
| Commit: 1bcd465, github.com/apache/spark/pull/4301 |
| |
| [SPARK-5154] [PySpark] [Streaming] Kafka streaming support in Python |
| Davies Liu <davies@databricks.com>, Tathagata Das <tdas@databricks.com> |
| 2015-02-02 19:16:27 -0800 |
| Commit: 0561c45, github.com/apache/spark/pull/3715 |
| |
| [SQL] Improve DataFrame API error reporting |
| Reynold Xin <rxin@databricks.com>, Davies Liu <davies@databricks.com> |
| 2015-02-02 19:01:47 -0800 |
| Commit: 554403f, github.com/apache/spark/pull/4296 |
| |
| Revert "[SPARK-4508] [SQL] build native date type to conform behavior to Hive" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-02 17:52:17 -0800 |
| Commit: eccb9fb |
| |
| Spark 3883: SSL support for HttpServer and Akka |
| Jacek Lewandowski <lewandowski.jacek@gmail.com>, Jacek Lewandowski <jacek.lewandowski@datastax.com> |
| 2015-02-02 17:18:54 -0800 |
| Commit: cfea300, github.com/apache/spark/pull/3571 |
| |
| [SPARK-5540] hide ALS.solveLeastSquares |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-02 17:10:01 -0800 |
| Commit: ef65cf0, github.com/apache/spark/pull/4318 |
| |
| [SPARK-5534] [graphx] Graph getStorageLevel fix |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-02 17:02:29 -0800 |
| Commit: f133dec, github.com/apache/spark/pull/4317 |
| |
| [SPARK-5514] DataFrame.collect should call executeCollect |
| Reynold Xin <rxin@databricks.com> |
| 2015-02-02 16:55:36 -0800 |
| Commit: 8aa3cff, github.com/apache/spark/pull/4313 |
| |
| [SPARK-5195][SQL] Update HiveMetastoreCatalog.scala (override MetastoreRelation's sameResult method to compare only database name and table name) |
| seayi <405078363@qq.com>, Michael Armbrust <michael@databricks.com> |
| 2015-02-02 16:06:52 -0800 |
| Commit: dca6faa, github.com/apache/spark/pull/3898 |
| |
| [SPARK-2309][MLlib] Multinomial Logistic Regression |
| DB Tsai <dbtsai@alpinenow.com> |
| 2015-02-02 15:59:15 -0800 |
| Commit: b1aa8fe, github.com/apache/spark/pull/3833 |
| |
| [SPARK-5513][MLLIB] Add nonnegative option to ml's ALS |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-02 15:55:44 -0800 |
| Commit: 46d50f1, github.com/apache/spark/pull/4302 |
| |
| [SPARK-4508] [SQL] build native date type to conform behavior to Hive |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-02 15:49:22 -0800 |
| Commit: 1646f89, github.com/apache/spark/pull/3732 |
| |
| SPARK-5500. Document that feeding hadoopFile into a shuffle operation wi... |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-02 14:52:46 -0800 |
| Commit: 8309349, github.com/apache/spark/pull/4293 |
| |
| [SPARK-5461] [graphx] Add isCheckpointed, getCheckpointedFiles methods to Graph |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-02-02 14:34:48 -0800 |
| Commit: 842d000, github.com/apache/spark/pull/4253 |
| |
| SPARK-5425: Use synchronised methods in system properties to create SparkConf |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-02-02 14:07:19 -0800 |
| Commit: 5a55261, github.com/apache/spark/pull/4222 |
| |
| Disabling Utils.chmod700 for Windows |
| Martin Weindel <martin.weindel@gmail.com>, mweindel <m.weindel@usu-software.de> |
| 2015-02-02 13:46:18 -0800 |
| Commit: bff65b5, github.com/apache/spark/pull/4299 |
| |
| Make sure only owner can read / write to directories created for the job. |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-21 14:38:14 -0800 |
| Commit: 52f5754 |
| |
| [HOTFIX] Add jetty references to build for YARN module. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-02 14:00:14 -0800 |
| Commit: 2321dd1 |
| |
| [SPARK-4631][streaming][FIX] Wait for a receiver to start before publishing test data. |
| Iulian Dragos <jaguarul@gmail.com> |
| 2015-02-02 14:00:33 -0800 |
| Commit: e908322, github.com/apache/spark/pull/4270 |
| |
| [SPARK-5212][SQL] Add support of schema-less, custom field delimiter and SerDe for HiveQL transform |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-02 13:53:55 -0800 |
| Commit: 683e938, github.com/apache/spark/pull/4014 |
| |
| [SPARK-5530] Add executor container to executorIdToContainer |
| Xutingjun <1039320815@qq.com> |
| 2015-02-02 12:37:51 -0800 |
| Commit: 62a93a1, github.com/apache/spark/pull/4309 |
| |
| [Docs] Fix Building Spark link text |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-02-02 12:33:49 -0800 |
| Commit: 3f941b6, github.com/apache/spark/pull/4312 |
| |
| [SPARK-5173] Support python application running on yarn cluster mode |
| lianhuiwang <lianhuiwang09@gmail.com>, Wang Lianhui <lianhuiwang09@gmail.com> |
| 2015-02-02 12:32:28 -0800 |
| Commit: f5e6375, github.com/apache/spark/pull/3976 |
| |
| SPARK-4585. Spark dynamic executor allocation should use minExecutors as... |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-02 12:27:08 -0800 |
| Commit: b2047b5, github.com/apache/spark/pull/4051 |
| |
| [MLLIB] SPARK-5491 (ex SPARK-1473): Chi-square feature selection |
| Alexander Ulanov <nashb@yandex.ru> |
| 2015-02-02 12:13:05 -0800 |
| Commit: c081b21, github.com/apache/spark/pull/1484 |
| |
| SPARK-5492. Thread statistics can break with older Hadoop versions |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-02-02 00:54:06 -0800 |
| Commit: 6f34131, github.com/apache/spark/pull/4305 |
| |
| [SPARK-5478][UI][Minor] Add missing right parentheses |
| jerryshao <saisai.shao@intel.com> |
| 2015-02-01 23:56:13 -0800 |
| Commit: 63dfe21, github.com/apache/spark/pull/4267 |
| |
| [SPARK-5353] Log failures in REPL class loading |
| Tobias Schlatter <tobias@meisch.ch> |
| 2015-02-01 21:43:49 -0800 |
| Commit: 9f0a6e1, github.com/apache/spark/pull/4130 |
| |
| [SPARK-3996]: Shade Jetty in Spark deliverables |
| Patrick Wendell <patrick@databricks.com> |
| 2015-02-01 21:13:57 -0800 |
| Commit: a15f6e3, github.com/apache/spark/pull/4285 |
| |
| [SPARK-4001][MLlib] adding parallel FP-Growth algorithm for frequent pattern mining in MLlib |
| Jacky Li <jacky.likun@huawei.com>, Jacky Li <jackylk@users.noreply.github.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-02-01 20:07:25 -0800 |
| Commit: 859f724, github.com/apache/spark/pull/2847 |
| |
| [SPARK-5406][MLlib] LocalLAPACK mode in RowMatrix.computeSVD should have much smaller upper bound |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-02-01 19:40:26 -0800 |
| Commit: d85cd4e, github.com/apache/spark/pull/4200 |
| |
| [SPARK-5465] [SQL] Fixes filter push-down for Parquet data source |
| Cheng Lian <lian@databricks.com> |
| 2015-02-01 18:52:39 -0800 |
| Commit: ec10032, github.com/apache/spark/pull/4255 |
| |
| [SPARK-5262] [SPARK-5244] [SQL] add coalesce in SQLParser and widen types for parameters of coalesce |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-02-01 18:51:38 -0800 |
| Commit: 8cf4a1f, github.com/apache/spark/pull/4057 |
| |
| [SPARK-5196][SQL] Support `comment` in Create Table Field DDL |
| OopsOutOfMemory <victorshengli@126.com> |
| 2015-02-01 18:41:49 -0800 |
| Commit: 1b56f1d, github.com/apache/spark/pull/3999 |
| |
| [SPARK-1825] Make Windows Spark client work fine with Linux YARN cluster |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-02-01 18:26:28 -0800 |
| Commit: 7712ed5, github.com/apache/spark/pull/3943 |
| |
| [SPARK-5176] The thrift server does not support cluster mode |
| Tom Panning <tom.panning@nextcentury.com> |
| 2015-02-01 17:57:31 -0800 |
| Commit: 1ca0a10, github.com/apache/spark/pull/4137 |
| |
| [SPARK-5155] Build fails with spark-ganglia-lgpl profile |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-02-01 17:53:56 -0800 |
| Commit: c80194b, github.com/apache/spark/pull/4303 |
| |
| [Minor][SQL] Little refactor DataFrame related codes |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-02-01 17:52:18 -0800 |
| Commit: ef89b82, github.com/apache/spark/pull/4298 |
| |
| [SPARK-4859][Core][Streaming] Refactor LiveListenerBus and StreamingListenerBus |
| zsxwing <zsxwing@gmail.com> |
| 2015-02-01 17:47:51 -0800 |
| Commit: 883bc88, github.com/apache/spark/pull/4006 |
| |
| [SPARK-5424][MLLIB] make the new ALS impl take generic ID types |
| Xiangrui Meng <meng@databricks.com> |
| 2015-02-01 14:13:31 -0800 |
| Commit: 4a17122, github.com/apache/spark/pull/4281 |
| |
| [SPARK-5207] [MLLIB] StandardScalerModel mean and variance re-use |
| Octavian Geagla <ogeagla@gmail.com> |
| 2015-02-01 09:21:14 -0800 |
| Commit: bdb0680, github.com/apache/spark/pull/4140 |
| |
| [SPARK-5422] Add support for sending Graphite metrics via UDP |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-01-31 23:41:05 -0800 |
| Commit: 80bd715, github.com/apache/spark/pull/4218 |
| |
| SPARK-3359 [CORE] [DOCS] `sbt/sbt unidoc` doesn't work with Java 8 |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-31 10:40:42 -0800 |
| Commit: c84d5a1, github.com/apache/spark/pull/4193 |
| |
| [SPARK-3975] Added support for BlockMatrix addition and multiplication |
| Burak Yavuz <brkyvz@gmail.com>, Burak Yavuz <brkyvz@dn51t42l.sunet>, Burak Yavuz <brkyvz@dn51t4rd.sunet>, Burak Yavuz <brkyvz@dn0a221430.sunet>, Burak Yavuz <brkyvz@dn0a22b17d.sunet> |
| 2015-01-31 00:47:30 -0800 |
| Commit: ef8974b, github.com/apache/spark/pull/4274 |
| |
| [MLLIB][SPARK-3278] Monotone (Isotonic) regression using parallel pool adjacent violators algorithm |
| martinzapletal <zapletal-martin@email.cz>, Xiangrui Meng <meng@databricks.com>, Martin Zapletal <zapletal-martin@email.cz> |
| 2015-01-31 00:46:02 -0800 |
| Commit: 34250a6, github.com/apache/spark/pull/3519 |
| |
| [SPARK-5307] Add a config option for SerializationDebugger. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-31 00:06:36 -0800 |
| Commit: 6364083, github.com/apache/spark/pull/4297 |
| |
| [SQL] remove redundant field "childOutput" from execution.Aggregate, use child.output instead |
| kai <kaizeng@eecs.berkeley.edu> |
| 2015-01-30 23:19:10 -0800 |
| Commit: f54c9f6, github.com/apache/spark/pull/4291 |
| |
| [SPARK-5307] SerializationDebugger |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-30 22:34:10 -0800 |
| Commit: 740a568, github.com/apache/spark/pull/4098 |
| |
| [SPARK-5504] [sql] convertToCatalyst should support nested arrays |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-01-30 15:40:14 -0800 |
| Commit: e643de4, github.com/apache/spark/pull/4295 |
| |
| SPARK-5400 [MLlib] Changed name of GaussianMixtureEM to GaussianMixture |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-01-30 15:32:25 -0800 |
| Commit: 9869773, github.com/apache/spark/pull/4290 |
| |
| [SPARK-4259][MLlib]: Add Power Iteration Clustering Algorithm with Gaussian Similarity Function |
| sboeschhuawei <stephen.boesch@huawei.com>, Fan Jiang <fanjiang.sc@huawei.com>, Jiang Fan <fjiang6@gmail.com>, Stephen Boesch <stephen.boesch@huawei.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-01-30 14:09:49 -0800 |
| Commit: f377431, github.com/apache/spark/pull/4254 |
| |
| [SPARK-5486] Added validate method to BlockMatrix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-01-30 13:59:10 -0800 |
| Commit: 6ee8338, github.com/apache/spark/pull/4279 |
| |
| [SPARK-5496][MLLIB] Allow both classification and Classification in Algo for trees. |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-30 10:08:07 -0800 |
| Commit: 0a95085, github.com/apache/spark/pull/4287 |
| |
| [MLLIB] SPARK-4846: throw a RuntimeException and give users hints to increase the minCount |
| Joseph J.C. Tang <jinntrance@gmail.com> |
| 2015-01-30 10:07:26 -0800 |
| Commit: 54d9575, github.com/apache/spark/pull/4247 |
| |
| SPARK-5393. Flood of util.RackResolver log messages after SPARK-1714 |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-30 11:31:54 -0600 |
| Commit: 254eaa4, github.com/apache/spark/pull/4192 |
| |
| [SPARK-5457][SQL] Add missing DSL for ApproxCountDistinct. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2015-01-30 01:21:35 -0800 |
| Commit: 6f21dce, github.com/apache/spark/pull/4250 |
| |
| [SPARK-5094][MLlib] Add Python API for Gradient Boosted Trees |
| Kazuki Taniguchi <kazuki.t.1018@gmail.com> |
| 2015-01-30 00:39:44 -0800 |
| Commit: bc1fc9b, github.com/apache/spark/pull/3951 |
| |
| [SPARK-5322] Added transpose functionality to BlockMatrix |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-01-29 21:26:29 -0800 |
| Commit: dd4d84c, github.com/apache/spark/pull/4275 |
| |
| [SQL] Support df("*") to select all columns in a data frame. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-29 19:09:08 -0800 |
| Commit: 80def9d, github.com/apache/spark/pull/4283 |
| |
| [SPARK-5462] [SQL] Use analyzed query plan in DataFrame.apply() |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-29 18:23:05 -0800 |
| Commit: 22271f9, github.com/apache/spark/pull/4282 |
| |
| [SPARK-5395] [PySpark] fix python process leak while coalesce() |
| Davies Liu <davies@databricks.com> |
| 2015-01-29 17:28:37 -0800 |
| Commit: 5c746ee, github.com/apache/spark/pull/4238 |
| |
| [SQL] DataFrame API improvements |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-29 17:24:00 -0800 |
| Commit: ce9c43b, github.com/apache/spark/pull/4280 |
| |
| Revert "[WIP] [SPARK-3996]: Shade Jetty in Spark deliverables" |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-29 17:14:27 -0800 |
| Commit: d2071e8 |
| |
| remove 'return' |
| Yoshihiro Shimizu <shimizu@amoad.com> |
| 2015-01-29 16:55:00 -0800 |
| Commit: 5338772, github.com/apache/spark/pull/4268 |
| |
| [WIP] [SPARK-3996]: Shade Jetty in Spark deliverables |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-29 16:31:19 -0800 |
| Commit: f240fe3, github.com/apache/spark/pull/4252 |
| |
| [SPARK-5464] Fix help() for Python DataFrame instances |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-29 16:23:20 -0800 |
| Commit: 0bb15f2, github.com/apache/spark/pull/4278 |
| |
| [SPARK-4296][SQL] Trims aliases when resolving and checking aggregate expressions |
| Yin Huai <yhuai@databricks.com>, Cheng Lian <lian@databricks.com> |
| 2015-01-29 15:49:34 -0800 |
| Commit: c00d517, github.com/apache/spark/pull/4010 |
| |
| [SPARK-5373][SQL] Literal in agg grouping expressions leads to incorrect result |
| wangfei <wangfei1@huawei.com> |
| 2015-01-29 15:47:13 -0800 |
| Commit: c1b3eeb, github.com/apache/spark/pull/4169 |
| |
| [SPARK-5367][SQL] Support star expression in udf |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2015-01-29 15:44:53 -0800 |
| Commit: fbaf9e0, github.com/apache/spark/pull/4163 |
| |
| [SPARK-4786][SQL]: Parquet filter pushdown for castable types |
| Yash Datta <Yash.Datta@guavus.com> |
| 2015-01-29 15:42:23 -0800 |
| Commit: de221ea, github.com/apache/spark/pull/4156 |
| |
| [SPARK-5309][SQL] Add support for dictionaries in PrimitiveConverter for Strin... |
| Michael Davies <Michael.BellDavies@gmail.com> |
| 2015-01-29 15:40:59 -0800 |
| Commit: 940f375, github.com/apache/spark/pull/4187 |
| |
| [SPARK-5429][SQL] Use javaXML plan serialization for Hive golden answers on Hive 0.13.1 |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-29 15:28:22 -0800 |
| Commit: bce0ba1, github.com/apache/spark/pull/4223 |
| |
| [SPARK-5445][SQL] Consolidate Java and Scala DSL static methods. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-29 15:13:09 -0800 |
| Commit: 7156322, github.com/apache/spark/pull/4276 |
| |
| [SPARK-5466] Add explicit guava dependencies where needed. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-01-29 13:00:45 -0800 |
| Commit: f9e5694, github.com/apache/spark/pull/4272 |
| |
| [SPARK-5477] refactor stat.py |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-29 10:11:44 -0800 |
| Commit: a3dc618, github.com/apache/spark/pull/4266 |
| |
| [SQL] Various DataFrame DSL update. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-29 00:01:10 -0800 |
| Commit: 5ad78f6, github.com/apache/spark/pull/4260 |
| |
| [SPARK-3977] Conversion methods for BlockMatrix to other Distributed Matrices |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-01-28 23:42:07 -0800 |
| Commit: a63be1a, github.com/apache/spark/pull/4256 |
| |
| [SPARK-5445][SQL] Made DataFrame dsl usable in Java |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-28 19:10:32 -0800 |
| Commit: 5b9760d, github.com/apache/spark/pull/4241 |
| |
| [SPARK-5430] move treeReduce and treeAggregate from mllib to core |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-28 17:26:03 -0800 |
| Commit: 4ee79c7, github.com/apache/spark/pull/4228 |
| |
| [SPARK-4586][MLLIB] Python API for ML pipeline and parameters |
| Xiangrui Meng <meng@databricks.com>, Davies Liu <davies@databricks.com> |
| 2015-01-28 17:14:23 -0800 |
| Commit: e80dc1c, github.com/apache/spark/pull/4151 |
| |
| [SPARK-5441][pyspark] Make SerDeUtil PairRDD to Python conversions more robust |
| Michael Nazario <mnazario@palantir.com> |
| 2015-01-28 13:55:01 -0800 |
| Commit: e023112, github.com/apache/spark/pull/4236 |
| |
| [SPARK-4387][PySpark] Refactoring python profiling code to make it extensible |
| Yandu Oppacher <yandu.oppacher@jadedpixel.com>, Davies Liu <davies@databricks.com> |
| 2015-01-28 13:48:06 -0800 |
| Commit: 3bead67, github.com/apache/spark/pull/3255 |
| |
| [SPARK-5417] Remove redundant executor-id set() call |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-01-28 13:04:52 -0800 |
| Commit: a731314, github.com/apache/spark/pull/4213 |
| |
| [SPARK-5434] [EC2] Preserve spaces in EC2 path |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-01-28 12:56:03 -0800 |
| Commit: d44ee43, github.com/apache/spark/pull/4224 |
| |
| [SPARK-5437] Fix DriverSuite and SparkSubmitSuite timeout issues |
| Andrew Or <andrew@databricks.com> |
| 2015-01-28 12:52:31 -0800 |
| Commit: 84b6ecd, github.com/apache/spark/pull/4230 |
| |
| [SPARK-4955] With executor dynamic scaling enabled, executors should be added or killed in yarn-cluster mode. |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-01-28 12:50:57 -0800 |
| Commit: 81f8f34, github.com/apache/spark/pull/3962 |
| |
| [SPARK-5440][pyspark] Add toLocalIterator to pyspark rdd |
| Michael Nazario <mnazario@palantir.com> |
| 2015-01-28 12:47:12 -0800 |
| Commit: 456c11f, github.com/apache/spark/pull/4237 |
| |
| SPARK-1934 [CORE] "this" reference escape to "selectorThread" during construction in ConnectionManager |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-28 12:44:35 -0800 |
| Commit: 9b18009, github.com/apache/spark/pull/4225 |
| |
| [SPARK-5188][BUILD] make-distribution.sh should support curl, not only wget to get Tachyon |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-28 12:43:22 -0800 |
| Commit: e902dc4, github.com/apache/spark/pull/3988 |
| |
| SPARK-5458. Refer to aggregateByKey instead of combineByKey in docs |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-28 12:41:23 -0800 |
| Commit: 406f6d3, github.com/apache/spark/pull/4251 |
| |
| [SPARK-5447][SQL] Replaced reference to SchemaRDD with DataFrame. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-28 12:10:01 -0800 |
| Commit: c8e934e, github.com/apache/spark/pull/4242 |
| |
| [SPARK-5361] Multiple Java RDD <-> Python RDD conversions not working correctly |
| Winston Chen <wchen@quid.com> |
| 2015-01-28 11:08:44 -0800 |
| Commit: 453d799, github.com/apache/spark/pull/4146 |
| |
| [SPARK-5291][CORE] Add timestamp and reason why an executor is removed to SparkListenerExecutorAdded and SparkListenerExecutorRemoved |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-28 11:02:51 -0800 |
| Commit: 0b35fcd, github.com/apache/spark/pull/4082 |
| |
| [SPARK-3974][MLlib] Distributed Block Matrix Abstractions |
| Burak Yavuz <brkyvz@gmail.com>, Xiangrui Meng <meng@databricks.com>, Burak Yavuz <brkyvz@dn51t42l.sunet>, Burak Yavuz <brkyvz@dn51t4rd.sunet>, Burak Yavuz <brkyvz@dn0a221430.sunet> |
| 2015-01-28 10:06:37 -0800 |
| Commit: eeb53bf, github.com/apache/spark/pull/3200 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-28 02:15:14 -0800 |
| Commit: 622ff09, github.com/apache/spark/pull/1480 |
| |
| [SPARK-5415] bump sbt to version to 0.13.7 |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-01-28 02:13:06 -0800 |
| Commit: 661d3f9, github.com/apache/spark/pull/4211 |
| |
| [SPARK-4809] Rework Guava library shading. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-01-28 00:29:29 -0800 |
| Commit: 37a5e27, github.com/apache/spark/pull/3658 |
| |
| [SPARK-5097][SQL] Test cases for DataFrame expressions. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-27 18:10:49 -0800 |
| Commit: d743732, github.com/apache/spark/pull/4235 |
| |
| [SPARK-5097][SQL] DataFrame |
| Reynold Xin <rxin@databricks.com>, Davies Liu <davies@databricks.com> |
| 2015-01-27 16:08:24 -0800 |
| Commit: 119f45d, github.com/apache/spark/pull/4173 |
| |
| SPARK-5199. FS read metrics should support CombineFileSplits and track bytes from all FSs |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-27 15:42:55 -0800 |
| Commit: b1b35ca, github.com/apache/spark/pull/4050 |
| |
| [MLlib] fix python example of ALS in guide |
| Davies Liu <davies@databricks.com> |
| 2015-01-27 15:33:01 -0800 |
| Commit: fdaad4e, github.com/apache/spark/pull/4226 |
| |
| SPARK-5308 [BUILD] MD5 / SHA1 hash format doesn't match standard Maven output |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-27 10:22:50 -0800 |
| Commit: ff356e2, github.com/apache/spark/pull/4161 |
| |
| [SPARK-5321] Support for transposing local matrices |
| Burak Yavuz <brkyvz@gmail.com> |
| 2015-01-27 01:46:17 -0800 |
| Commit: 9142674, github.com/apache/spark/pull/4109 |
| |
| [SPARK-5419][Mllib] Fix the logic in Vectors.sqdist |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-27 01:29:14 -0800 |
| Commit: 7b0ed79, github.com/apache/spark/pull/4217 |
| |
| [SPARK-3726] [MLlib] Allow sampling_rate not equal to 1.0 in RandomForests |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-01-26 19:46:17 -0800 |
| Commit: d6894b1, github.com/apache/spark/pull/4073 |
| |
| [SPARK-5119] java.lang.ArrayIndexOutOfBoundsException on trying to train... |
| lewuathe <lewuathe@me.com> |
| 2015-01-26 18:03:21 -0800 |
| Commit: f2ba5c6, github.com/apache/spark/pull/3975 |
| |
| [SPARK-5052] Add common/base classes to fix guava methods signatures. |
| Elmer Garduno <elmerg@google.com> |
| 2015-01-26 17:40:48 -0800 |
| Commit: 661e0fc, github.com/apache/spark/pull/3874 |
| |
| SPARK-960 [CORE] [TEST] JobCancellationSuite "two jobs sharing the same stage" is broken |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-26 14:32:27 -0800 |
| Commit: 0497ea5, github.com/apache/spark/pull/4180 |
| |
| Fix command spaces issue in make-distribution.sh |
| David Y. Ross <dyross@gmail.com> |
| 2015-01-26 14:26:10 -0800 |
| Commit: b38034e, github.com/apache/spark/pull/4126 |
| |
| SPARK-4147 [CORE] Reduce log4j dependency |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-26 14:23:42 -0800 |
| Commit: 54e7b45, github.com/apache/spark/pull/4190 |
| |
| [SPARK-5339][BUILD] build/mvn doesn't work because of invalid URL for maven's tgz. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-26 13:07:49 -0800 |
| Commit: c094c73, github.com/apache/spark/pull/4124 |
| |
| [SPARK-5355] use j.u.c.ConcurrentHashMap instead of TrieMap |
| Davies Liu <davies@databricks.com> |
| 2015-01-26 12:51:32 -0800 |
| Commit: 1420931, github.com/apache/spark/pull/4208 |
| |
| [SPARK-5384][mllib] Vectors.sqdist returns inconsistent results for sparse/dense vectors when the vectors have different lengths |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-01-25 22:18:09 -0800 |
| Commit: 8125168, github.com/apache/spark/pull/4183 |
| |
| [SPARK-5268] don't stop CoarseGrainedExecutorBackend for irrelevant DisassociatedEvent |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-01-25 19:28:53 -0800 |
| Commit: 8df9435, github.com/apache/spark/pull/4063 |
| |
| SPARK-4430 [STREAMING] [TEST] Apache RAT Checks fail spuriously on test files |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-25 19:16:44 -0800 |
| Commit: 0528b85, github.com/apache/spark/pull/4189 |
| |
| [SPARK-5326] Show fetch wait time as optional metric in the UI |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-01-25 16:48:26 -0800 |
| Commit: fc2168f, github.com/apache/spark/pull/4110 |
| |
| [SPARK-5344][WebUI] HistoryServer cannot recognize that inprogress file was renamed to completed file |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-25 15:34:20 -0800 |
| Commit: 8f5c827, github.com/apache/spark/pull/4132 |
| |
| SPARK-4506 [DOCS] Addendum: Update more docs to reflect that standalone works in cluster mode |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-25 15:25:05 -0800 |
| Commit: 9f64357, github.com/apache/spark/pull/4160 |
| |
| SPARK-5382: Use SPARK_CONF_DIR in spark-class if it is defined |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-01-25 15:15:09 -0800 |
| Commit: 1c30afd, github.com/apache/spark/pull/4179 |
| |
| SPARK-3782 [CORE] Direct use of log4j in AkkaUtils interferes with certain logging configurations |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-25 15:11:57 -0800 |
| Commit: 383425a, github.com/apache/spark/pull/4184 |
| |
| SPARK-3852 [DOCS] Document spark.driver.extra* configs |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-25 15:08:05 -0800 |
| Commit: c586b45, github.com/apache/spark/pull/4185 |
| |
| [SPARK-5402] log executor ID at executor-construction time |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-01-25 14:20:02 -0800 |
| Commit: aea2548, github.com/apache/spark/pull/4195 |
| |
| [SPARK-5401] set executor ID before creating MetricsSystem |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2015-01-25 14:17:59 -0800 |
| Commit: 2d9887b, github.com/apache/spark/pull/4194 |
| |
| Add comment about defaultMinPartitions |
| Idan Zalzberg <idanzalz@gmail.com> |
| 2015-01-25 11:28:05 -0800 |
| Commit: 412a58e, github.com/apache/spark/pull/4102 |
| |
| Closes #4157 |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-25 00:24:59 -0800 |
| Commit: d22ca1e |
| |
| [SPARK-5214][Test] Add a test to demonstrate EventLoop can be stopped in the event thread |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-24 11:00:35 -0800 |
| Commit: 0d1e67e, github.com/apache/spark/pull/4174 |
| |
| [SPARK-5058] Part 2. Typos and broken URL |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-01-23 23:34:11 -0800 |
| Commit: 09e09c5, github.com/apache/spark/pull/4172 |
| |
| [SPARK-5351][GraphX] Do not use Partitioner.defaultPartitioner as a partitioner of EdgeRDDImp... |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2015-01-23 19:25:15 -0800 |
| Commit: e224dbb, github.com/apache/spark/pull/4136 |
| |
| [SPARK-5063] More helpful error messages for several invalid operations |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-23 17:53:15 -0800 |
| Commit: cef1f09, github.com/apache/spark/pull/3884 |
| |
| [SPARK-3541][MLLIB] New ALS implementation with improved storage |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-22 22:09:13 -0800 |
| Commit: ea74365, github.com/apache/spark/pull/3720 |
| |
| [SPARK-5315][Streaming] Fix reduceByWindow Java API not work bug |
| jerryshao <saisai.shao@intel.com> |
| 2015-01-22 22:04:21 -0800 |
| Commit: e0f7fb7, github.com/apache/spark/pull/4104 |
| |
| [SPARK-5233][Streaming] Fix error replaying of WAL introduced bug |
| jerryshao <saisai.shao@intel.com> |
| 2015-01-22 21:58:53 -0800 |
| Commit: 3c3fa63, github.com/apache/spark/pull/4032 |
| |
| SPARK-5370. [YARN] Remove some unnecessary synchronization in YarnAlloca... |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-22 13:49:35 -0600 |
| Commit: 820ce03, github.com/apache/spark/pull/4164 |
| |
| [SPARK-5365][MLlib] Refactor KMeans to reduce redundant data |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-22 08:16:35 -0800 |
| Commit: 246111d, github.com/apache/spark/pull/4159 |
| |
| [SPARK-5147][Streaming] Delete the received data WAL log periodically |
| Tathagata Das <tathagata.das1565@gmail.com>, jerryshao <saisai.shao@intel.com> |
| 2015-01-21 23:41:44 -0800 |
| Commit: 3027f06, github.com/apache/spark/pull/4149 |
| |
| [SPARK-5317]Set BoostingStrategy.defaultParams With Enumeration Algo.Classification or Algo.Regression |
| Basin <jpsachilles@gmail.com> |
| 2015-01-21 23:06:34 -0800 |
| Commit: fcb3e18, github.com/apache/spark/pull/4103 |
| |
| [SPARK-3424][MLLIB] cache point distances during k-means|| init |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-21 21:20:31 -0800 |
| Commit: ca7910d, github.com/apache/spark/pull/4144 |
| |
| [SPARK-5202] [SQL] Add hql variable substitution support |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-01-21 17:34:18 -0800 |
| Commit: 27bccc5, github.com/apache/spark/pull/4003 |
| |
| [SPARK-5355] make SparkConf thread-safe |
| Davies Liu <davies@databricks.com> |
| 2015-01-21 16:51:42 -0800 |
| Commit: 9bad062, github.com/apache/spark/pull/4143 |
| |
| [SPARK-4984][CORE][WEBUI] Adding a pop-up containing the full job description when it is very long |
| wangfei <wangfei1@huawei.com> |
| 2015-01-21 15:27:42 -0800 |
| Commit: 3be2a88, github.com/apache/spark/pull/3819 |
| |
| [SQL] [Minor] Remove deprecated parquet tests |
| Cheng Lian <lian@databricks.com> |
| 2015-01-21 14:38:10 -0800 |
| Commit: ba19689, github.com/apache/spark/pull/4116 |
| |
| Revert "[SPARK-5244] [SQL] add coalesce() in sql parser" |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-21 14:27:43 -0800 |
| Commit: b328ac6 |
| |
| [SPARK-5009] [SQL] Long keyword support in SQL Parsers |
| Cheng Hao <hao.cheng@intel.com> |
| 2015-01-21 13:05:56 -0800 |
| Commit: 8361078, github.com/apache/spark/pull/3926 |
| |
| [SPARK-5244] [SQL] add coalesce() in sql parser |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-01-21 12:59:41 -0800 |
| Commit: 812d367, github.com/apache/spark/pull/4040 |
| |
| [SPARK-5064][GraphX] Add numEdges upperbound validation for R-MAT graph generator to prevent infinite loop |
| Kenji Kikushima <kikushima.kenji@lab.ntt.co.jp> |
| 2015-01-21 12:34:00 -0800 |
| Commit: 3ee3ab5, github.com/apache/spark/pull/3950 |
| |
| [SPARK-4749] [mllib]: Allow initializing KMeans clusters using a seed |
| nate.crosswhite <nate.crosswhite@stresearch.com>, nxwhite-str <nxwhite-str@users.noreply.github.com>, Xiangrui Meng <meng@databricks.com> |
| 2015-01-21 10:32:10 -0800 |
| Commit: 7450a99, github.com/apache/spark/pull/3610 |
| |
| [MLlib] [SPARK-5301] Missing conversions and operations on IndexedRowMatrix and CoordinateMatrix |
| Reza Zadeh <reza@databricks.com> |
| 2015-01-21 09:48:38 -0800 |
| Commit: aa1e22b, github.com/apache/spark/pull/4089 |
| |
| SPARK-1714. Take advantage of AMRMClient APIs to simplify logic in YarnA... |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-21 10:31:54 -0600 |
| Commit: 2eeada3, github.com/apache/spark/pull/3765 |
| |
| [SPARK-5336][YARN]spark.executor.cores must not be less than spark.task.cpus |
| WangTao <barneystinson@aliyun.com>, WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-21 09:42:30 -0600 |
| Commit: 8c06a5f, github.com/apache/spark/pull/4123 |
| |
| [SPARK-5297][Streaming] Fix Java file stream type erasure problem |
| jerryshao <saisai.shao@intel.com> |
| 2015-01-20 23:37:47 -0800 |
| Commit: 424d8c6, github.com/apache/spark/pull/4101 |
| |
| [HOTFIX] Update pom.xml to pull MapR's Hadoop version 2.4.1. |
| Kannan Rajah <rkannan82@gmail.com> |
| 2015-01-20 23:34:04 -0800 |
| Commit: ec5b0f2, github.com/apache/spark/pull/4108 |
| |
| [SPARK-5275] [Streaming] include python source code |
| Davies Liu <davies@databricks.com> |
| 2015-01-20 22:44:58 -0800 |
| Commit: bad6c57, github.com/apache/spark/pull/4128 |
| |
| [SPARK-5294][WebUI] Hide tables in AllStagePages for "Active Stages, Completed Stages and Failed Stages" when they are empty |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-20 16:40:46 -0800 |
| Commit: 9a151ce, github.com/apache/spark/pull/4083 |
| |
| [SPARK-5186] [MLLIB] Vector.equals and Vector.hashCode are very inefficient |
| Yuhao Yang <hhbyyh@gmail.com>, Yuhao Yang <yuhao@yuhaodevbox.sh.intel.com> |
| 2015-01-20 15:20:20 -0800 |
| Commit: 2f82c84, github.com/apache/spark/pull/3997 |
| |
| [SPARK-5323][SQL] Remove Row's Seq inheritance. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-20 15:16:14 -0800 |
| Commit: d181c2a, github.com/apache/spark/pull/4115 |
| |
| [SPARK-5287][SQL] Add defaultSizeOf to every data type. |
| Yin Huai <yhuai@databricks.com> |
| 2015-01-20 13:26:36 -0800 |
| Commit: bc20a52, github.com/apache/spark/pull/4081 |
| |
| SPARK-5019 [MLlib] - GaussianMixtureModel exposes instances of MultivariateGauss... |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-01-20 12:58:11 -0800 |
| Commit: 23e2554, github.com/apache/spark/pull/4088 |
| |
| [SPARK-5329][WebUI] UIWorkloadGenerator should stop SparkContext. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-20 12:40:55 -0800 |
| Commit: 769aced, github.com/apache/spark/pull/4112 |
| |
| SPARK-4660: Use correct class loader in JavaSerializer (copy of PR #3840... |
| Jacek Lewandowski <lewandowski.jacek@gmail.com> |
| 2015-01-20 12:38:01 -0800 |
| Commit: c93a57f, github.com/apache/spark/pull/4113 |
| |
| [SQL][Minor] Refactors deeply nested FP style code in BooleanSimplification |
| Cheng Lian <lian@databricks.com> |
| 2015-01-20 11:20:14 -0800 |
| Commit: 8140802, github.com/apache/spark/pull/4091 |
| |
| [SPARK-5333][Mesos] MesosTaskLaunchData occurs BufferUnderflowException |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-01-20 10:17:29 -0800 |
| Commit: 9d9294a, github.com/apache/spark/pull/4119 |
| |
| [SPARK-4803] [streaming] Remove duplicate RegisterReceiver message |
| Ilayaperumal Gopinathan <igopinathan@pivotal.io> |
| 2015-01-20 01:41:10 -0800 |
| Commit: 4afad9c, github.com/apache/spark/pull/3648 |
| |
| [SQL][minor] Add a log4j file for catalyst test. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-20 00:55:25 -0800 |
| Commit: debc031, github.com/apache/spark/pull/4117 |
| |
| SPARK-5270 [CORE] Provide isEmpty() function in RDD API |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-19 22:50:44 -0800 |
| Commit: 306ff18, github.com/apache/spark/pull/4074 |
| |
| [SPARK-5214][Core] Add EventLoop and change DAGScheduler to an EventLoop |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-19 18:15:51 -0800 |
| Commit: e69fb8c, github.com/apache/spark/pull/4016 |
| |
| [SPARK-4504][Examples] fix run-example failure if multiple assembly jars exist |
| Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2015-01-19 11:58:16 -0800 |
| Commit: 74de94e, github.com/apache/spark/pull/3377 |
| |
| [SPARK-5286][SQL] Fail to drop an invalid table when using the data source API |
| Yin Huai <yhuai@databricks.com> |
| 2015-01-19 10:45:29 -0800 |
| Commit: 2604bc3, github.com/apache/spark/pull/4076 |
| |
| [SPARK-5284][SQL] Insert into Hive throws NPE when a inner complex type field has a null value |
| Yin Huai <yhuai@databricks.com> |
| 2015-01-19 10:44:12 -0800 |
| Commit: cd5da42, github.com/apache/spark/pull/4077 |
| |
| [SPARK-5282][mllib]: RowMatrix easily gets int overflow in the memory size warning |
| Yuhao Yang <hhbyyh@gmail.com> |
| 2015-01-19 10:10:15 -0800 |
| Commit: 4432568, github.com/apache/spark/pull/4069 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-19 02:05:24 -0800 |
| Commit: 1ac1c1d, github.com/apache/spark/pull/3584 |
| |
| [SPARK-5088] Use spark-class for running executors directly |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-01-19 02:01:56 -0800 |
| Commit: 4a4f9cc, github.com/apache/spark/pull/3897 |
| |
| [SPARK-3288] All fields in TaskMetrics should be private and use getters/setters |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-01-19 01:32:22 -0800 |
| Commit: 3453d57, github.com/apache/spark/pull/4020 |
| |
| SPARK-5217 Spark UI should report pending stages during job execution on AllStagesPage. |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2015-01-19 01:28:42 -0800 |
| Commit: 851b6a9, github.com/apache/spark/pull/4043 |
| |
| [SQL] fix typo in class description |
| Jacky Li <jacky.likun@gmail.com> |
| 2015-01-18 23:59:08 -0800 |
| Commit: 7dbf1fd, github.com/apache/spark/pull/4100 |
| |
| [SQL][minor] Put DataTypes.java in java dir. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-18 16:35:40 -0800 |
| Commit: 1955645, github.com/apache/spark/pull/4097 |
| |
| [SQL][Minor] Update sql doc according to data type APIs changes |
| scwf <wangfei1@huawei.com> |
| 2015-01-18 11:03:13 -0800 |
| Commit: 1a200a3, github.com/apache/spark/pull/4095 |
| |
| [SPARK-5279][SQL] Use java.math.BigDecimal as the exposed Decimal type. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-18 11:01:42 -0800 |
| Commit: 1727e08, github.com/apache/spark/pull/4092 |
| |
| [HOTFIX]: Minor clean up regarding skipped artifacts in build files. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-17 23:15:12 -0800 |
| Commit: ad16da1, github.com/apache/spark/pull/4080 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <patrick@databricks.com> |
| 2015-01-17 20:39:54 -0800 |
| Commit: e12b5b6, github.com/apache/spark/pull/681 |
| |
| [SQL][Minor] Added comments and examples to explain BooleanSimplification |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-17 17:35:53 -0800 |
| Commit: e7884bc, github.com/apache/spark/pull/4090 |
| |
| [SPARK-5096] Use sbt tasks instead of vals to get hadoop version |
| Michael Armbrust <michael@databricks.com> |
| 2015-01-17 17:03:07 -0800 |
| Commit: 6999910, github.com/apache/spark/pull/3905 |
| |
| [SPARK-4937][SQL] Comment for the newly optimization rules in `BooleanSimplification` |
| scwf <wangfei1@huawei.com> |
| 2015-01-17 15:51:24 -0800 |
| Commit: c1f3c27, github.com/apache/spark/pull/4086 |
| |
| [SQL][minor] Improved Row documentation. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-17 00:11:08 -0800 |
| Commit: f3bfc76, github.com/apache/spark/pull/4085 |
| |
| [SPARK-5193][SQL] Remove Spark SQL Java-specific API. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-16 21:09:06 -0800 |
| Commit: 61b427d, github.com/apache/spark/pull/4065 |
| |
| [SPARK-4937][SQL] Adding optimization to simplify the And, Or condition in spark sql |
| scwf <wangfei1@huawei.com>, wangfei <wangfei1@huawei.com> |
| 2015-01-16 14:01:22 -0800 |
| Commit: ee1c1f3, github.com/apache/spark/pull/3778 |
| |
| [SPARK-733] Add documentation on use of accumulators in lazy transformation |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2015-01-16 13:25:17 -0800 |
| Commit: fd3a8a1, github.com/apache/spark/pull/4022 |
| |
| [SPARK-4923][REPL] Add Developer API to REPL to allow re-publishing the REPL jar |
| Chip Senkbeil <rcsenkbe@us.ibm.com>, Chip Senkbeil <chip.senkbeil@gmail.com> |
| 2015-01-16 12:56:40 -0800 |
| Commit: d05c9ee, github.com/apache/spark/pull/4034 |
| |
| [WebUI] Fix collapse of WebUI layout |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-16 12:19:08 -0800 |
| Commit: ecf943d, github.com/apache/spark/pull/3995 |
| |
| [SPARK-5231][WebUI] History Server shows wrong job submission time. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-16 10:05:11 -0800 |
| Commit: e8422c5, github.com/apache/spark/pull/4029 |
| |
| [DOCS] Fix typo in return type of cogroup |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-16 09:28:44 -0800 |
| Commit: f6b852a, github.com/apache/spark/pull/4072 |
| |
| [SPARK-5201][CORE] deal with int overflow in the ParallelCollectionRDD.slice method |
| Ye Xianjin <advancedxy@gmail.com> |
| 2015-01-16 09:20:53 -0800 |
| Commit: e200ac8, github.com/apache/spark/pull/4002 |
| |
| [SPARK-1507][YARN]specify # cores for ApplicationMaster |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTao <barneystinson@aliyun.com> |
| 2015-01-16 09:16:56 -0800 |
| Commit: 2be82b1, github.com/apache/spark/pull/4018 |
| |
| [SPARK-4092] [CORE] Fix InputMetrics for coalesce'd Rdds |
| Kostas Sakellis <kostas@cloudera.com> |
| 2015-01-15 18:48:39 -0800 |
| Commit: a79a9f9, github.com/apache/spark/pull/3120 |
| |
| [SPARK-4857] [CORE] Adds Executor membership events to SparkListener |
| Kostas Sakellis <kostas@cloudera.com> |
| 2015-01-15 17:53:42 -0800 |
| Commit: 96c2c71, github.com/apache/spark/pull/3711 |
| |
| [Minor] Fix tiny typo in BlockManager |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-15 17:07:44 -0800 |
| Commit: 65858ba, github.com/apache/spark/pull/4046 |
| |
| [SPARK-5274][SQL] Reconcile Java and Scala UDFRegistration. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-15 16:15:12 -0800 |
| Commit: 1881431, github.com/apache/spark/pull/4056 |
| |
| [SPARK-5224] [PySpark] improve performance of parallelize list/ndarray |
| Davies Liu <davies@databricks.com> |
| 2015-01-15 11:40:41 -0800 |
| Commit: 3c8650c, github.com/apache/spark/pull/4024 |
| |
| [SPARK-5193][SQL] Tighten up HiveContext API |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-14 20:31:02 -0800 |
| Commit: 4b325c7, github.com/apache/spark/pull/4054 |
| |
| [SPARK-5254][MLLIB] remove developers section from spark.ml guide |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-14 18:54:17 -0800 |
| Commit: 6abc45e, github.com/apache/spark/pull/4053 |
| |
| [SPARK-5193][SQL] Tighten up SQLContext API |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-14 18:36:15 -0800 |
| Commit: cfa397c, github.com/apache/spark/pull/4049 |
| |
| [SPARK-5254][MLLIB] Update the user guide to position spark.ml better |
| Xiangrui Meng <meng@databricks.com> |
| 2015-01-14 17:50:33 -0800 |
| Commit: 13d2406, github.com/apache/spark/pull/4052 |
| |
| [SPARK-5234][ml]examples for ml don't have sparkContext.stop |
| Yuhao Yang <yuhao@yuhaodevbox.sh.intel.com> |
| 2015-01-14 11:53:43 -0800 |
| Commit: 76389c5, github.com/apache/spark/pull/4044 |
| |
| [SPARK-5235] Make SQLConf Serializable |
| Alex Baretta <alexbaretta@gmail.com> |
| 2015-01-14 11:51:55 -0800 |
| Commit: 2fd7f72, github.com/apache/spark/pull/4031 |
| |
| [SPARK-4014] Add TaskContext.attemptNumber and deprecate TaskContext.attemptId |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-14 11:45:40 -0800 |
| Commit: 259936b, github.com/apache/spark/pull/3849 |
| |
| [SPARK-5228][WebUI] Hide tables for "Active Jobs/Completed Jobs/Failed Jobs" when they are empty |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-14 11:10:29 -0800 |
| Commit: 9d4449c, github.com/apache/spark/pull/4028 |
| |
| [SPARK-2909] [MLlib] [PySpark] SparseVector in pyspark now supports indexing |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-01-14 11:03:11 -0800 |
| Commit: 5840f54, github.com/apache/spark/pull/4025 |
| |
| [SQL] some comments fix for GROUPING SETS |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-01-14 09:50:01 -0800 |
| Commit: 38bdc99, github.com/apache/spark/pull/4000 |
| |
| [SPARK-5211][SQL]Restore HiveMetastoreTypes.toDataType |
| Yin Huai <yhuai@databricks.com> |
| 2015-01-14 09:47:30 -0800 |
| Commit: 81f72a0, github.com/apache/spark/pull/4026 |
| |
| [SPARK-5248] [SQL] move sql.types.decimal.Decimal to sql.types.Decimal |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2015-01-14 09:36:59 -0800 |
| Commit: a3f7421, github.com/apache/spark/pull/4041 |
| |
| [SPARK-5167][SQL] Move Row into sql package and make it usable for Java. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-14 00:38:55 -0800 |
| Commit: d5eeb35, github.com/apache/spark/pull/4030 |
| |
| [SPARK-5123][SQL] Reconcile Java/Scala API for data types. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-13 17:16:41 -0800 |
| Commit: f996909, github.com/apache/spark/pull/3958 |
| |
| [SPARK-5168] Make SQLConf a field rather than mixin in SQLContext |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-13 13:30:35 -0800 |
| Commit: 14e3f11, github.com/apache/spark/pull/3965 |
| |
| [SPARK-4912][SQL] Persistent tables for the Spark SQL data sources api |
| Yin Huai <yhuai@databricks.com>, Michael Armbrust <michael@databricks.com> |
| 2015-01-13 13:01:27 -0800 |
| Commit: 6463e0b, github.com/apache/spark/pull/3960 |
| |
| [SPARK-5223] [MLlib] [PySpark] fix MapConverter and ListConverter in MLlib |
| Davies Liu <davies@databricks.com> |
| 2015-01-13 12:50:31 -0800 |
| Commit: 8ead999, github.com/apache/spark/pull/4023 |
| |
| [SPARK-5131][Streaming][DOC]: There is a discrepancy in WAL implementation and configuration doc. |
| uncleGen <hustyugm@gmail.com> |
| 2015-01-13 10:07:19 -0800 |
| Commit: 39e333e, github.com/apache/spark/pull/3930 |
| |
| [SPARK-4697][YARN]System properties should override environment variables |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTao <barneystinson@aliyun.com> |
| 2015-01-13 09:43:48 -0800 |
| Commit: 9dea64e, github.com/apache/spark/pull/3557 |
| |
| [SPARK-5006][Deploy]spark.port.maxRetries doesn't work |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTao <barneystinson@aliyun.com> |
| 2015-01-13 09:28:21 -0800 |
| Commit: f7741a9, github.com/apache/spark/pull/3841 |
| |
| [SPARK-5138][SQL] Ensure schema can be inferred from a namedtuple |
| Gabe Mulley <gabe@edx.org> |
| 2015-01-12 21:44:51 -0800 |
| Commit: 1e42e96, github.com/apache/spark/pull/3978 |
| |
| [SPARK-5049][SQL] Fix ordering of partition columns in ParquetTableScan |
| Michael Armbrust <michael@databricks.com> |
| 2015-01-12 15:19:09 -0800 |
| Commit: 5d9fa55, github.com/apache/spark/pull/3990 |
| |
| [SPARK-4999][Streaming] Change storeInBlockManager to false by default |
| jerryshao <saisai.shao@intel.com> |
| 2015-01-12 13:14:44 -0800 |
| Commit: 3aed305, github.com/apache/spark/pull/3906 |
| |
| SPARK-5172 [BUILD] spark-examples-***.jar shades a wrong Hadoop distribution |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-12 12:15:34 -0800 |
| Commit: aff49a3, github.com/apache/spark/pull/3992 |
| |
| [SPARK-5078] Optionally read from SPARK_LOCAL_HOSTNAME |
| Michael Armbrust <michael@databricks.com> |
| 2015-01-12 11:57:59 -0800 |
| Commit: a3978f3, github.com/apache/spark/pull/3893 |
| |
| SPARK-4159 [BUILD] Addendum: improve running of single test after enabling Java tests |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-12 11:00:56 -0800 |
| Commit: 13e610b, github.com/apache/spark/pull/3993 |
| |
| [SPARK-5102][Core]subclass of MapStatus needs to be registered with Kryo |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2015-01-12 10:57:12 -0800 |
| Commit: ef9224e, github.com/apache/spark/pull/4007 |
| |
| [SPARK-5200] Disable web UI in Hive ThriftServer tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-12 10:47:12 -0800 |
| Commit: 82fd38d, github.com/apache/spark/pull/3998 |
| |
| SPARK-5018 [MLlib] [WIP] Make MultivariateGaussian public |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-01-11 21:31:16 -0800 |
| Commit: 2130de9, github.com/apache/spark/pull/3923 |
| |
| [SPARK-4033][Examples]Input of the SparkPi too big causes the emption exception |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-01-11 16:32:47 -0800 |
| Commit: f38ef65, github.com/apache/spark/pull/2874 |
| |
| [SPARK-4951][Core] Fix the issue that a busy executor may be killed |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-11 16:23:28 -0800 |
| Commit: 6942b97, github.com/apache/spark/pull/3783 |
| |
| [SPARK-5073] spark.storage.memoryMapThreshold have two default value |
| lewuathe <lewuathe@me.com> |
| 2015-01-11 13:50:42 -0800 |
| Commit: 1656aae, github.com/apache/spark/pull/3900 |
| |
| [SPARK-5032] [graphx] Remove GraphX MIMA exclude for 1.3 |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-01-10 17:25:39 -0800 |
| Commit: 3313260, github.com/apache/spark/pull/3856 |
| |
| [SPARK-5029][SQL] Enable from follow multiple brackets |
| scwf <wangfei1@huawei.com> |
| 2015-01-10 17:07:34 -0800 |
| Commit: d22a31f, github.com/apache/spark/pull/3853 |
| |
| [SPARK-4871][SQL] Show sql statement in spark ui when run sql with spark-sql |
| wangfei <wangfei1@huawei.com> |
| 2015-01-10 17:04:56 -0800 |
| Commit: 92d9a70, github.com/apache/spark/pull/3718 |
| |
| [Minor]Resolve sbt warnings during build (MQTTStreamSuite.scala). |
| GuoQiang Li <witgo@qq.com> |
| 2015-01-10 15:38:43 -0800 |
| Commit: 8a29dc7, github.com/apache/spark/pull/3989 |
| |
| [SPARK-5181] do not print writing WAL log when WAL is disabled |
| CodingCat <zhunansjtu@gmail.com> |
| 2015-01-10 15:35:41 -0800 |
| Commit: f0d558b, github.com/apache/spark/pull/3985 |
| |
| [SPARK-4692] [SQL] Support ! boolean logic operator like NOT |
| YanTangZhai <hakeemzhai@tencent.com>, Michael Armbrust <michael@databricks.com> |
| 2015-01-10 15:05:23 -0800 |
| Commit: 0ca51cc, github.com/apache/spark/pull/3555 |
| |
| [SPARK-5187][SQL] Fix caching of tables with HiveUDFs in the WHERE clause |
| Michael Armbrust <michael@databricks.com> |
| 2015-01-10 14:25:45 -0800 |
| Commit: 3684fd2, github.com/apache/spark/pull/3987 |
| |
| SPARK-4963 [SQL] Add copy to SQL's Sample operator |
| Yanbo Liang <yanbohappy@gmail.com> |
| 2015-01-10 14:16:37 -0800 |
| Commit: 77106df, github.com/apache/spark/pull/3827 |
| |
| [SPARK-4861][SQL] Refactory command in spark sql |
| scwf <wangfei1@huawei.com> |
| 2015-01-10 14:08:04 -0800 |
| Commit: b3e86dc, github.com/apache/spark/pull/3948 |
| |
| [SPARK-4574][SQL] Adding support for defining schema in foreign DDL commands. |
| scwf <wangfei1@huawei.com>, Yin Huai <yhuai@databricks.com>, Fei Wang <wangfei1@huawei.com>, wangfei <wangfei1@huawei.com> |
| 2015-01-10 13:53:21 -0800 |
| Commit: 693a323, github.com/apache/spark/pull/3431 |
| |
| [SPARK-4943][SQL] Allow table name having dot for db/catalog |
| Alex Liu <alex_liu68@yahoo.com> |
| 2015-01-10 13:23:09 -0800 |
| Commit: 4b39fd1, github.com/apache/spark/pull/3941 |
| |
| [SPARK-4925][SQL] Publish Spark SQL hive-thriftserver maven artifact |
| Alex Liu <alex_liu68@yahoo.com> |
| 2015-01-10 13:19:12 -0800 |
| Commit: 1e56eba, github.com/apache/spark/pull/3766 |
| |
| [SPARK-5141][SQL]CaseInsensitiveMap throws java.io.NotSerializableException |
| luogankun <luogankun@gmail.com> |
| 2015-01-09 20:38:41 -0800 |
| Commit: 545dfcb, github.com/apache/spark/pull/3944 |
| |
| [SPARK-4406] [MLib] FIX: Validate k in SVD |
| MechCoder <manojkumarsivaraj334@gmail.com> |
| 2015-01-09 17:45:18 -0800 |
| Commit: 4554529, github.com/apache/spark/pull/3945 |
| |
| [SPARK-4990][Deploy]to find default properties file, search SPARK_CONF_DIR first |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTao <barneystinson@aliyun.com> |
| 2015-01-09 17:10:02 -0800 |
| Commit: 8782eb9, github.com/apache/spark/pull/3823 |
| |
| [Minor] Fix import order and other coding style |
| bilna <bilnap@am.amrita.edu>, Bilna P <bilna.p@gmail.com> |
| 2015-01-09 14:45:28 -0800 |
| Commit: 4e1f12d, github.com/apache/spark/pull/3966 |
| |
| [DOC] Fixed Mesos version in doc from 0.18.1 to 0.21.0 |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-09 14:40:45 -0800 |
| Commit: ae62872, github.com/apache/spark/pull/3982 |
| |
| [SPARK-4737] Task set manager properly handles serialization errors |
| mcheah <mcheah@palantir.com> |
| 2015-01-09 14:16:20 -0800 |
| Commit: e0f28e0, github.com/apache/spark/pull/3638 |
| |
| [SPARK-1953][YARN]yarn client mode Application Master memory size is same as driver memory... |
| WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-09 13:20:32 -0800 |
| Commit: e966452, github.com/apache/spark/pull/3607 |
| |
| [SPARK-5015] [mllib] Random seed for GMM + make test suite deterministic |
| Joseph K. Bradley <joseph@databricks.com> |
| 2015-01-09 13:00:15 -0800 |
| Commit: 7e8e62a, github.com/apache/spark/pull/3981 |
| |
| [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-01-09 10:47:08 -0800 |
| Commit: 454fe12, github.com/apache/spark/pull/3934 |
| |
| [SPARK-5145][Mllib] Add BLAS.dsyr and use it in GaussianMixtureEM |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-09 10:27:33 -0800 |
| Commit: e9ca16e, github.com/apache/spark/pull/3949 |
| |
| [SPARK-1143] Separate pool tests into their own suite. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2015-01-09 09:47:06 -0800 |
| Commit: b6aa557, github.com/apache/spark/pull/3967 |
| |
| HOTFIX: Minor improvements to make-distribution.sh |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-01-09 09:40:18 -0800 |
| Commit: 1790b38, github.com/apache/spark/pull/3973 |
| |
| SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-09 09:35:46 -0800 |
| Commit: 547df97, github.com/apache/spark/pull/3952 |
| |
| [Minor] Fix test RetryingBlockFetcherSuite after changed config name |
| Aaron Davidson <aaron@databricks.com> |
| 2015-01-09 09:20:16 -0800 |
| Commit: b4034c3, github.com/apache/spark/pull/3972 |
| |
| [SPARK-5169][YARN]fetch the correct max attempts |
| WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-09 08:10:09 -0600 |
| Commit: f3da4bd, github.com/apache/spark/pull/3942 |
| |
| [SPARK-5122] Remove Shark from spark-ec2 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2015-01-08 17:42:08 -0800 |
| Commit: 167a5ab, github.com/apache/spark/pull/3939 |
| |
| [SPARK-4048] Enhance and extend hadoop-provided profile. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2015-01-08 17:15:13 -0800 |
| Commit: 48cecf6, github.com/apache/spark/pull/2982 |
| |
| [SPARK-4891][PySpark][MLlib] Add gamma/log normal/exp dist sampling to P... |
| RJ Nowling <rnowling@gmail.com> |
| 2015-01-08 15:03:43 -0800 |
| Commit: c9c8b21, github.com/apache/spark/pull/3955 |
| |
| [SPARK-4973][CORE] Local directory in the driver of client-mode continues remaining even if application finished when external shuffle is enabled |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-08 13:43:09 -0800 |
| Commit: a00af6b, github.com/apache/spark/pull/3811 |
| |
| SPARK-5148 [MLlib] Make usersOut/productsOut storagelevel in ALS configurable |
| Fernando Otero (ZeoS) <fotero@gmail.com> |
| 2015-01-08 12:42:54 -0800 |
| Commit: 72df5a3, github.com/apache/spark/pull/3953 |
| |
| Document that groupByKey will OOM for large keys |
| Eric Moyer <eric_moyer@yahoo.com> |
| 2015-01-08 11:55:23 -0800 |
| Commit: 538f221, github.com/apache/spark/pull/3936 |
| |
| [SPARK-5130][Deploy]Take yarn-cluster as cluster mode in spark-submit |
| WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-08 11:45:42 -0800 |
| Commit: 0760787, github.com/apache/spark/pull/3929 |
| |
| [Minor] Fix the value represented by spark.executor.id for consistency. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2015-01-08 11:35:56 -0800 |
| Commit: 0a59727, github.com/apache/spark/pull/3812 |
| |
| [SPARK-4989][CORE] avoid wrong eventlog conf cause cluster down in standalone mode |
| Zhang, Liye <liye.zhang@intel.com> |
| 2015-01-08 10:40:26 -0800 |
| Commit: 06dc4b5, github.com/apache/spark/pull/3824 |
| |
| [SPARK-4917] Add a function to convert into a graph with canonical edges in GraphOps |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2015-01-08 09:55:12 -0800 |
| Commit: f825e19, github.com/apache/spark/pull/3760 |
| |
| SPARK-5087. [YARN] Merge yarn.Client and yarn.ClientBase |
| Sandy Ryza <sandy@cloudera.com> |
| 2015-01-08 09:25:43 -0800 |
| Commit: 8d45834, github.com/apache/spark/pull/3896 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2015-01-07 23:25:56 -0800 |
| Commit: c082385, github.com/apache/spark/pull/3880 |
| |
| [SPARK-5116][MLlib] Add extractor for SparseVector and DenseVector |
| Shuo Xiang <shuoxiangpub@gmail.com> |
| 2015-01-07 23:22:37 -0800 |
| Commit: c66a976, github.com/apache/spark/pull/3919 |
| |
| [SPARK-5126][Core] Verify Spark urls before creating Actors so that invalid urls can crash the process. |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-07 23:01:30 -0800 |
| Commit: 2b729d2, github.com/apache/spark/pull/3927 |
| |
| [SPARK-5132][Core]Correct stage Attempt Id key in stageInfofromJson |
| hushan[胡珊] <hushan@xiaomi.com> |
| 2015-01-07 12:09:12 -0800 |
| Commit: d345ebe, github.com/apache/spark/pull/3932 |
| |
| [SPARK-5128][MLLib] Add common used log1pExp API in MLUtils |
| DB Tsai <dbtsai@alpinenow.com> |
| 2015-01-07 10:13:41 -0800 |
| Commit: 60e2d9e, github.com/apache/spark/pull/3915 |
| |
| [SPARK-2458] Make failed application log visible on History Server |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2015-01-07 07:32:16 -0800 |
| Commit: 6e74ede, github.com/apache/spark/pull/3467 |
| |
| [SPARK-2165][YARN]add support for setting maxAppAttempts in the ApplicationSubmissionContext |
| WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-07 08:14:39 -0600 |
| Commit: 8fdd489, github.com/apache/spark/pull/3878 |
| |
| [YARN][SPARK-4929] Bug fix: fix the yarn-client code to support HA |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2015-01-07 08:10:42 -0600 |
| Commit: 5fde661, github.com/apache/spark/pull/3771 |
| |
| [SPARK-5099][Mllib] Simplify logistic loss function |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-06 21:23:31 -0800 |
| Commit: e21acc1, github.com/apache/spark/pull/3899 |
| |
| [SPARK-5050][Mllib] Add unit test for sqdist |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2015-01-06 14:00:45 -0800 |
| Commit: bb38ebb, github.com/apache/spark/pull/3869 |
| |
| SPARK-5017 [MLlib] - Use SVD to compute determinant and inverse of covariance matrix |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2015-01-06 13:57:42 -0800 |
| Commit: 4108e5f, github.com/apache/spark/pull/3871 |
| |
| SPARK-4159 [CORE] Maven build doesn't run JUnit test suites |
| Sean Owen <sowen@cloudera.com> |
| 2015-01-06 12:02:08 -0800 |
| Commit: 4cba6eb, github.com/apache/spark/pull/3651 |
| |
| [Minor] Fix comments for GraphX 2D partitioning strategy |
| kj-ki <kikushima.kenji@lab.ntt.co.jp> |
| 2015-01-06 09:49:37 -0800 |
| Commit: 5e3ec11, github.com/apache/spark/pull/3904 |
| |
| [SPARK-1600] Refactor FileInputStream tests to remove Thread.sleep() calls and SystemClock usage |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-06 00:31:19 -0800 |
| Commit: a6394bc, github.com/apache/spark/pull/3801 |
| |
| SPARK-4843 [YARN] Squash ExecutorRunnableUtil and ExecutorRunnable |
| Kostas Sakellis <kostas@cloudera.com> |
| 2015-01-05 23:26:33 -0800 |
| Commit: 451546a, github.com/apache/spark/pull/3696 |
| |
| [SPARK-5040][SQL] Support expressing unresolved attributes using $"attribute name" notation in SQL DSL. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-05 15:34:22 -0800 |
| Commit: 04d55d8, github.com/apache/spark/pull/3862 |
| |
| [SPARK-5093] Set spark.network.timeout to 120s consistently. |
| Reynold Xin <rxin@databricks.com> |
| 2015-01-05 15:19:53 -0800 |
| Commit: bbcba3a, github.com/apache/spark/pull/3903 |
| |
| [SPARK-5089][PYSPARK][MLLIB] Fix vector convert |
| freeman <the.freeman.lab@gmail.com> |
| 2015-01-05 13:10:59 -0800 |
| Commit: 6c6f325, github.com/apache/spark/pull/3902 |
| |
| [SPARK-4465] runAsSparkUser doesn't affect TaskRunner in Mesos environme... |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2015-01-05 12:05:09 -0800 |
| Commit: 1c0e7ce, github.com/apache/spark/pull/3741 |
| |
| [SPARK-5057] Log message in failed askWithReply attempts |
| WangTao <barneystinson@aliyun.com>, WangTaoTheTonic <barneystinson@aliyun.com> |
| 2015-01-05 11:59:38 -0800 |
| Commit: ce39b34, github.com/apache/spark/pull/3875 |
| |
| [SPARK-4688] Have a single shared network timeout in Spark |
| Varun Saxena <vsaxena.varun@gmail.com>, varunsaxena <vsaxena.varun@gmail.com> |
| 2015-01-05 10:32:37 -0800 |
| Commit: d3f07fd, github.com/apache/spark/pull/3562 |
| |
| [SPARK-5074][Core] Fix a non-deterministic test failure |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-04 21:18:33 -0800 |
| Commit: 5c506ce, github.com/apache/spark/pull/3889 |
| |
| [SPARK-5083][Core] Fix a flaky test in TaskResultGetterSuite |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-04 21:09:21 -0800 |
| Commit: 27e7f5a, github.com/apache/spark/pull/3894 |
| |
| [SPARK-5069][Core] Fix the race condition of TaskSchedulerImpl.dagScheduler |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-04 21:06:04 -0800 |
| Commit: 6c726a3, github.com/apache/spark/pull/3887 |
| |
| [SPARK-5067][Core] Use '===' to compare well-defined case class |
| zsxwing <zsxwing@gmail.com> |
| 2015-01-04 21:03:17 -0800 |
| Commit: 7239652, github.com/apache/spark/pull/3886 |
| |
| [SPARK-4835] Disable validateOutputSpecs for Spark Streaming jobs |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-04 20:26:18 -0800 |
| Commit: 939ba1f, github.com/apache/spark/pull/3832 |
| |
| [SPARK-4631] unit test for MQTT |
| bilna <bilnap@am.amrita.edu>, Bilna P <bilna.p@gmail.com> |
| 2015-01-04 19:37:48 -0800 |
| Commit: e767d7d, github.com/apache/spark/pull/3844 |
| |
| [SPARK-4787] Stop SparkContext if a DAGScheduler init error occurs |
| Dale <tigerquoll@outlook.com> |
| 2015-01-04 13:28:37 -0800 |
| Commit: 3fddc94, github.com/apache/spark/pull/3809 |
| |
| [SPARK-794][Core] Remove sleep() in ClusterScheduler.stop |
| Brennon York <brennon.york@capitalone.com> |
| 2015-01-04 12:40:39 -0800 |
| Commit: b96008d, github.com/apache/spark/pull/3851 |
| |
| [SPARK-5058] Updated broken links |
| sigmoidanalytics <mayur@sigmoidanalytics.com> |
| 2015-01-03 19:46:08 -0800 |
| Commit: 342612b, github.com/apache/spark/pull/3877 |
| |
| Fixed typos in streaming-kafka-integration.md |
| Akhil Das <akhld@darktech.ca> |
| 2015-01-02 15:12:27 -0800 |
| Commit: cdccc26, github.com/apache/spark/pull/3876 |
| |
| [SPARK-3325][Streaming] Add a parameter to the method print in class DStream |
| Yadong Qi <qiyadong2010@gmail.com>, q00251598 <qiyadong@huawei.com>, Tathagata Das <tathagata.das1565@gmail.com>, wangfei <wangfei1@huawei.com> |
| 2015-01-02 15:09:41 -0800 |
| Commit: bd88b71, github.com/apache/spark/pull/3865 |
| |
| [HOTFIX] Bind web UI to ephemeral port in DriverSuite |
| Josh Rosen <joshrosen@databricks.com> |
| 2015-01-01 15:03:54 -0800 |
| Commit: 0128398, github.com/apache/spark/pull/3873 |
| |
| [SPARK-5038] Add explicit return type for implicit functions. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-31 17:07:47 -0800 |
| Commit: 7749dd6, github.com/apache/spark/pull/3860 |
| |
| SPARK-2757 [BUILD] [STREAMING] Add Mima test for Spark Sink after 1.10 is released |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-31 16:59:17 -0800 |
| Commit: 4bb1248, github.com/apache/spark/pull/3842 |
| |
| [SPARK-5035] [Streaming] ReceiverMessage trait should extend Serializable |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-31 16:02:47 -0800 |
| Commit: fe6efac, github.com/apache/spark/pull/3857 |
| |
| SPARK-5020 [MLlib] GaussianMixtureModel.predictMembership() should take an RDD only |
| Travis Galoppo <tjg2107@columbia.edu> |
| 2014-12-31 15:39:58 -0800 |
| Commit: c4f0b4f, github.com/apache/spark/pull/3854 |
| |
| [SPARK-5028][Streaming]Add total received and processed records metrics to Streaming UI |
| jerryshao <saisai.shao@intel.com> |
| 2014-12-31 14:45:31 -0800 |
| Commit: fdc2aa4, github.com/apache/spark/pull/3852 |
| |
| [SPARK-4790][STREAMING] Fix ReceivedBlockTrackerSuite waits for old file... |
| Hari Shreedharan <hshreedharan@apache.org> |
| 2014-12-31 14:35:07 -0800 |
| Commit: 3610d3c, github.com/apache/spark/pull/3726 |
| |
| [SPARK-5038][SQL] Add explicit return type for implicit functions in Spark SQL |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-31 14:25:03 -0800 |
| Commit: c88a3d7, github.com/apache/spark/pull/3859 |
| |
| [HOTFIX] Disable Spark UI in SparkSubmitSuite tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-12 12:38:37 -0800 |
| Commit: e24d3a9 |
| |
| SPARK-4547 [MLLIB] OOM when making bins in BinaryClassificationMetrics |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-31 13:37:04 -0800 |
| Commit: 3d194cc, github.com/apache/spark/pull/3702 |
| |
| [SPARK-4298][Core] - The spark-submit cannot read Main-Class from Manifest. |
| Brennon York <brennon.york@capitalone.com> |
| 2014-12-31 11:54:10 -0800 |
| Commit: 8e14c5e, github.com/apache/spark/pull/3561 |
| |
| [SPARK-4797] Replace breezeSquaredDistance |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-12-31 11:50:53 -0800 |
| Commit: 06a9aa5, github.com/apache/spark/pull/3643 |
| |
| [SPARK-1010] Clean up uses of System.setProperty in unit tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-30 18:12:20 -0800 |
| Commit: 352ed6b, github.com/apache/spark/pull/3739 |
| |
| [SPARK-4998][MLlib]delete the "train" function |
| Liu Jiongzhou <ljzzju@163.com> |
| 2014-12-30 15:55:56 -0800 |
| Commit: 035bac8, github.com/apache/spark/pull/3836 |
| |
| [SPARK-4813][Streaming] Fix the issue that ContextWaiter didn't handle 'spurious wakeup' |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-30 14:39:13 -0800 |
| Commit: 6a89782, github.com/apache/spark/pull/3661 |
| |
| [Spark-4995] Replace Vector.toBreeze.activeIterator with foreachActive |
| Jakub Dubovsky <dubovsky@avast.com> |
| 2014-12-30 14:19:07 -0800 |
| Commit: 0f31992, github.com/apache/spark/pull/3846 |
| |
| SPARK-3955 part 2 [CORE] [HOTFIX] Different versions between jackson-mapper-asl and jackson-core-asl |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-30 14:00:57 -0800 |
| Commit: b239ea1, github.com/apache/spark/pull/3829 |
| |
| [SPARK-4570][SQL]add BroadcastLeftSemiJoinHash |
| wangxiaojing <u9jing@gmail.com> |
| 2014-12-30 13:54:12 -0800 |
| Commit: 07fa191, github.com/apache/spark/pull/3442 |
| |
| [SPARK-4935][SQL] When hive.cli.print.header configured, spark-sql aborted if passed in a invalid sql |
| wangfei <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2014-12-30 13:44:30 -0800 |
| Commit: 8f29b7c, github.com/apache/spark/pull/3761 |
| |
| [SPARK-4386] Improve performance when writing Parquet files |
| Michael Davies <Michael.BellDavies@gmail.com> |
| 2014-12-30 13:40:51 -0800 |
| Commit: 7425bec, github.com/apache/spark/pull/3843 |
| |
| [SPARK-4937][SQL] Normalizes conjunctions and disjunctions to eliminate common predicates |
| Cheng Lian <lian@databricks.com> |
| 2014-12-30 13:38:27 -0800 |
| Commit: 61a99f6, github.com/apache/spark/pull/3784 |
| |
| [SPARK-4928][SQL] Fix: Operator '>,<,>=,<=' with decimal between different precision report error |
| guowei2 <guowei2@asiainfo.com> |
| 2014-12-30 12:21:00 -0800 |
| Commit: a75dd83, github.com/apache/spark/pull/3767 |
| |
| [SPARK-4930][SQL][DOCS]Update SQL programming guide, CACHE TABLE is eager |
| luogankun <luogankun@gmail.com> |
| 2014-12-30 12:18:55 -0800 |
| Commit: 2deac74, github.com/apache/spark/pull/3773 |
| |
| [SPARK-4916][SQL][DOCS]Update SQL programming guide about cache section |
| luogankun <luogankun@gmail.com> |
| 2014-12-30 12:17:49 -0800 |
| Commit: f7a41a0, github.com/apache/spark/pull/3759 |
| |
| [SPARK-4493][SQL] Tests for IsNull / IsNotNull in the ParquetFilterSuite |
| Cheng Lian <lian@databricks.com> |
| 2014-12-30 12:16:45 -0800 |
| Commit: 19a8802, github.com/apache/spark/pull/3748 |
| |
| [Spark-4512] [SQL] Unresolved Attribute Exception in Sort By |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-30 12:11:44 -0800 |
| Commit: 53f0a00, github.com/apache/spark/pull/3386 |
| |
| [SPARK-5002][SQL] Using ascending by default when not specify order in order by |
| wangfei <wangfei1@huawei.com> |
| 2014-12-30 12:07:24 -0800 |
| Commit: daac221, github.com/apache/spark/pull/3838 |
| |
| [SPARK-4904] [SQL] Remove the unnecessary code change in Generic UDF |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-30 11:47:08 -0800 |
| Commit: 63b84b7, github.com/apache/spark/pull/3745 |
| |
| [SPARK-4959] [SQL] Attributes are case sensitive when using a select query from a projection |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-30 11:33:47 -0800 |
| Commit: 5595eaa, github.com/apache/spark/pull/3796 |
| |
| [SPARK-4975][SQL] Fix HiveInspectorSuite test failure |
| scwf <wangfei1@huawei.com>, Fei Wang <wangfei1@huawei.com> |
| 2014-12-30 11:30:47 -0800 |
| Commit: 65357f1, github.com/apache/spark/pull/3814 |
| |
| [SQL] enable view test |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-30 11:29:13 -0800 |
| Commit: 94d60b7, github.com/apache/spark/pull/3826 |
| |
| [SPARK-4908][SQL] Prevent multiple concurrent hive native commands |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-30 11:24:46 -0800 |
| Commit: 480bd1d, github.com/apache/spark/pull/3834 |
| |
| [SPARK-4882] Register PythonBroadcast with Kryo so that PySpark works with KryoSerializer |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-30 09:29:52 -0800 |
| Commit: efa80a53, github.com/apache/spark/pull/3831 |
| |
| [SPARK-4920][UI] add version on master and worker page for standalone mode |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-12-30 09:19:47 -0800 |
| Commit: 9077e72, github.com/apache/spark/pull/3769 |
| |
| [SPARK-4972][MLlib] Updated the scala doc for lasso and ridge regression for the change of LeastSquaresGradient |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-29 17:17:12 -0800 |
| Commit: 040d6f2, github.com/apache/spark/pull/3808 |
| |
| Added setMinCount to Word2Vec.scala |
| ganonp <ganonp@gmail.com> |
| 2014-12-29 15:31:19 -0800 |
| Commit: 343db39, github.com/apache/spark/pull/3693 |
| |
| SPARK-4156 [MLLIB] EM algorithm for GMMs |
| Travis Galoppo <tjg2107@columbia.edu>, Travis Galoppo <travis@localhost.localdomain>, tgaloppo <tjg2107@columbia.edu>, FlytxtRnD <meethu.mathew@flytxt.com> |
| 2014-12-29 15:29:15 -0800 |
| Commit: 6cf6fdf, github.com/apache/spark/pull/3022 |
| |
| SPARK-4968: takeOrdered to skip reduce step in case mappers return no partitions |
| Yash Datta <Yash.Datta@guavus.com> |
| 2014-12-29 13:49:45 -0800 |
| Commit: 9bc0df6, github.com/apache/spark/pull/3830 |
| |
| [SPARK-4409][MLlib] Additional Linear Algebra Utils |
| Burak Yavuz <brkyvz@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2014-12-29 13:24:26 -0800 |
| Commit: 02b55de, github.com/apache/spark/pull/3319 |
| |
| [Minor] Fix a typo of type parameter in JavaUtils.scala |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-29 12:05:08 -0800 |
| Commit: 8d72341, github.com/apache/spark/pull/3789 |
| |
| [SPARK-4946] [CORE] Using AkkaUtils.askWithReply in MapOutputTracker.askTracker to reduce the chance of the communicating problem |
| YanTangZhai <hakeemzhai@tencent.com>, yantangzhai <tyz0303@163.com> |
| 2014-12-29 11:30:54 -0800 |
| Commit: 815de54, github.com/apache/spark/pull/3785 |
| |
| Added LICENSE Header to build/mvn, build/sbt and sbt/sbt |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-29 10:48:53 -0800 |
| Commit: 4cef05e, github.com/apache/spark/pull/3817 |
| |
| [SPARK-4982][DOC] `spark.ui.retainedJobs` description is wrong in Spark UI configuration guide |
| wangxiaojing <u9jing@gmail.com> |
| 2014-12-29 10:45:14 -0800 |
| Commit: 6645e52, github.com/apache/spark/pull/3818 |
| |
| [SPARK-4966][YARN]The MemoryOverhead value is setted not correctly |
| meiyoula <1039320815@qq.com> |
| 2014-12-29 08:20:30 -0600 |
| Commit: 14fa87b, github.com/apache/spark/pull/3797 |
| |
| [SPARK-4501][Core] - Create build/mvn to automatically download maven/zinc/scalac |
| Brennon York <brennon.york@capitalone.com> |
| 2014-12-27 13:25:18 -0800 |
| Commit: a3e51cc, github.com/apache/spark/pull/3707 |
| |
| [SPARK-4952][Core]Handle ConcurrentModificationExceptions in SparkEnv.environmentDetails |
| GuoQiang Li <witgo@qq.com> |
| 2014-12-26 23:31:29 -0800 |
| Commit: 080ceb7, github.com/apache/spark/pull/3788 |
| |
| [SPARK-4954][Core] add spark version infomation in log for standalone mode |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-12-26 23:23:13 -0800 |
| Commit: 786808a, github.com/apache/spark/pull/3790 |
| |
| [SPARK-3955] Different versions between jackson-mapper-asl and jackson-c... |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2014-12-26 22:59:34 -0800 |
| Commit: 2483c1e, github.com/apache/spark/pull/3716 |
| |
| HOTFIX: Slight tweak on previous commit. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-26 22:55:04 -0800 |
| Commit: 82bf4be |
| |
| [SPARK-3787][BUILD] Assembly jar name is wrong when we build with sbt omitting -Dhadoop.version |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-26 22:52:04 -0800 |
| Commit: de95c57, github.com/apache/spark/pull/3046 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-26 22:39:56 -0800 |
| Commit: 534f24b, github.com/apache/spark/pull/3456 |
| |
| SPARK-4971: Fix typo in BlockGenerator comment |
| CodingCat <zhunansjtu@gmail.com> |
| 2014-12-26 12:03:22 -0800 |
| Commit: fda4331, github.com/apache/spark/pull/3807 |
| |
| [SPARK-4608][Streaming] Reorganize StreamingContext implicit to improve API convenience |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-25 19:46:05 -0800 |
| Commit: f9ed2b6, github.com/apache/spark/pull/3464 |
| |
| [SPARK-4537][Streaming] Expand StreamingSource to add more metrics |
| jerryshao <saisai.shao@intel.com> |
| 2014-12-25 19:39:49 -0800 |
| Commit: f205fe4, github.com/apache/spark/pull/3466 |
| |
| [EC2] Update mesos/spark-ec2 branch to branch-1.3 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-12-25 14:16:50 -0800 |
| Commit: ac82785, github.com/apache/spark/pull/3804 |
| |
| [EC2] Update default Spark version to 1.2.0 |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-12-25 14:13:12 -0800 |
| Commit: b6b6393, github.com/apache/spark/pull/3793 |
| |
| Fix "Building Spark With Maven" link in README.md |
| Denny Lee <denny.g.lee@gmail.com> |
| 2014-12-25 14:05:55 -0800 |
| Commit: 08b18c7, github.com/apache/spark/pull/3802 |
| |
| [SPARK-4953][Doc] Fix the description of building Spark with YARN |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-25 07:05:43 -0800 |
| Commit: 11dd993, github.com/apache/spark/pull/3787 |
| |
| [SPARK-4873][Streaming] Use `Future.zip` instead of `Future.flatMap`(for-loop) in WriteAheadLogBasedBlockHandler |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-24 19:49:41 -0800 |
| Commit: b4d0db8, github.com/apache/spark/pull/3721 |
| |
| SPARK-4297 [BUILD] Build warning fixes omnibus |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-24 13:32:51 -0800 |
| Commit: 29fabb1, github.com/apache/spark/pull/3157 |
| |
| [SPARK-4881][Minor] Use SparkConf#getBoolean instead of get().toBoolean |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-23 19:14:34 -0800 |
| Commit: 199e59a, github.com/apache/spark/pull/3733 |
| |
| [SPARK-4860][pyspark][sql] speeding up `sample()` and `takeSample()` |
| jbencook <jbenjamincook@gmail.com>, J. Benjamin Cook <jbenjamincook@gmail.com> |
| 2014-12-23 17:46:24 -0800 |
| Commit: fd41eb9, github.com/apache/spark/pull/3764 |
| |
| [SPARK-4606] Send EOF to child JVM when there's no more data to read. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2014-12-23 16:02:59 -0800 |
| Commit: 7e2deb7, github.com/apache/spark/pull/3460 |
| |
| [SPARK-4671][Streaming]Do not replicate streaming block when WAL is enabled |
| jerryshao <saisai.shao@intel.com> |
| 2014-12-23 15:45:53 -0800 |
| Commit: 3f5f4cc, github.com/apache/spark/pull/3534 |
| |
| [SPARK-4802] [streaming] Remove receiverInfo once receiver is de-registered |
| Ilayaperumal Gopinathan <igopinathan@pivotal.io> |
| 2014-12-23 15:14:54 -0800 |
| Commit: 10d69e9, github.com/apache/spark/pull/3647 |
| |
| [SPARK-4913] Fix incorrect event log path |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-12-23 14:58:33 -0800 |
| Commit: 96281cd, github.com/apache/spark/pull/3755 |
| |
| [SPARK-4730][YARN] Warn against deprecated YARN settings |
| Andrew Or <andrew@databricks.com> |
| 2014-12-23 14:28:36 -0800 |
| Commit: 27c5399, github.com/apache/spark/pull/3590 |
| |
| [SPARK-4914][Build] Cleans lib_managed before compiling with Hive 0.13.1 |
| Cheng Lian <lian@databricks.com> |
| 2014-12-23 12:54:20 -0800 |
| Commit: 395b771, github.com/apache/spark/pull/3756 |
| |
| [SPARK-4932] Add help comments in Analytics |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2014-12-23 12:39:41 -0800 |
| Commit: 9c251c5, github.com/apache/spark/pull/3775 |
| |
| [SPARK-4834] [standalone] Clean up application files after app finishes. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2014-12-23 12:02:08 -0800 |
| Commit: dd15536, github.com/apache/spark/pull/3705 |
| |
| [SPARK-4931][Yarn][Docs] Fix the format of running-on-yarn.md |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-23 11:18:06 -0800 |
| Commit: 2d215ae, github.com/apache/spark/pull/3774 |
| |
| [SPARK-4890] Ignore downloaded EC2 libs |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-12-23 11:12:16 -0800 |
| Commit: 2823c7f, github.com/apache/spark/pull/3770 |
| |
| [Docs] Minor typo fixes |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-12-22 22:54:32 -0800 |
| Commit: 0e532cc, github.com/apache/spark/pull/3772 |
| |
| [SPARK-4907][MLlib] Inconsistent loss and gradient in LeastSquaresGradient compared with R |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-22 16:42:55 -0800 |
| Commit: a96b727, github.com/apache/spark/pull/3746 |
| |
| [SPARK-4818][Core] Add 'iterator' to reduce memory consumed by join |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-22 14:26:28 -0800 |
| Commit: c233ab3, github.com/apache/spark/pull/3671 |
| |
| [SPARK-4920][UI]:current spark version in UI is not striking. |
| genmao.ygm <genmao.ygm@alibaba-inc.com> |
| 2014-12-22 14:14:39 -0800 |
| Commit: de9d7d2, github.com/apache/spark/pull/3763 |
| |
| [Minor] Fix scala doc |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-12-22 14:13:31 -0800 |
| Commit: a61aa66, github.com/apache/spark/pull/3751 |
| |
| [SPARK-4864] Add documentation to Netty-based configs |
| Aaron Davidson <aaron@databricks.com> |
| 2014-12-22 13:09:22 -0800 |
| Commit: fbca6b6, github.com/apache/spark/pull/3713 |
| |
| [SPARK-4079] [CORE] Consolidates Errors if a CompressionCodec is not available |
| Kostas Sakellis <kostas@cloudera.com> |
| 2014-12-22 13:07:01 -0800 |
| Commit: 7c0ed13, github.com/apache/spark/pull/3119 |
| |
| SPARK-4447. Remove layers of abstraction in YARN code no longer needed after dropping yarn-alpha |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-22 12:23:43 -0800 |
| Commit: d62da64, github.com/apache/spark/pull/3652 |
| |
| [SPARK-4733] Add missing prameter comments in ShuffleDependency |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2014-12-22 12:19:23 -0800 |
| Commit: fb8e85e, github.com/apache/spark/pull/3594 |
| |
| [Minor] Improve some code in BroadcastTest for short |
| carlmartin <carlmartinmax@gmail.com> |
| 2014-12-22 12:13:53 -0800 |
| Commit: 1d9788e, github.com/apache/spark/pull/3750 |
| |
| [SPARK-4883][Shuffle] Add a name to the directoryCleaner thread |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-22 12:11:36 -0800 |
| Commit: 8773705, github.com/apache/spark/pull/3734 |
| |
| [SPARK-4870] Add spark version to driver log |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-12-22 11:36:49 -0800 |
| Commit: 39272c8, github.com/apache/spark/pull/3717 |
| |
| [SPARK-4915][YARN] Fix classname to be specified for external shuffle service. |
| Tsuyoshi Ozawa <ozawa.tsuyoshi@lab.ntt.co.jp> |
| 2014-12-22 11:28:05 -0800 |
| Commit: 96606f6, github.com/apache/spark/pull/3757 |
| |
| [SPARK-4918][Core] Reuse Text in saveAsTextFile |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-22 11:20:00 -0800 |
| Commit: 93b2f3a, github.com/apache/spark/pull/3762 |
| |
| [SPARK-2075][Core] Make the compiler generate same bytes code for Hadoop 1.+ and Hadoop 2.+ |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-21 22:10:19 -0800 |
| Commit: 6ee6aa7, github.com/apache/spark/pull/3740 |
| |
| SPARK-4910 [CORE] build failed (use of FileStatus.isFile in Hadoop 1.x) |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-21 13:16:57 -0800 |
| Commit: c6a3c0d, github.com/apache/spark/pull/3754 |
| |
| [Minor] Build Failed: value defaultProperties not found |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2014-12-19 23:32:56 -0800 |
| Commit: a764960, github.com/apache/spark/pull/3749 |
| |
| [SPARK-4140] Document dynamic allocation |
| Andrew Or <andrew@databricks.com>, Tsuyoshi Ozawa <ozawa.tsuyoshi@gmail.com> |
| 2014-12-19 19:36:20 -0800 |
| Commit: 15c03e1, github.com/apache/spark/pull/3731 |
| |
| [SPARK-4831] Do not include SPARK_CLASSPATH if empty |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2014-12-19 19:32:39 -0800 |
| Commit: 7cb3f54, github.com/apache/spark/pull/3678 |
| |
| SPARK-2641: Passing num executors to spark arguments from properties file |
| Kanwaljit Singh <kanwaljit.singh@guavus.com> |
| 2014-12-19 19:25:39 -0800 |
| Commit: 1d64812, github.com/apache/spark/pull/1657 |
| |
| [SPARK-3060] spark-shell.cmd doesn't accept application options in Windows OS |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2014-12-19 19:19:53 -0800 |
| Commit: 8d93247, github.com/apache/spark/pull/3350 |
| |
| change signature of example to match released code |
| Eran Medan <ehrann.mehdan@gmail.com> |
| 2014-12-19 18:29:36 -0800 |
| Commit: c25c669, github.com/apache/spark/pull/3747 |
| |
| [SPARK-2261] Make event logger use a single file. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2014-12-19 18:21:15 -0800 |
| Commit: 4564519, github.com/apache/spark/pull/1222 |
| |
| [SPARK-4890] Upgrade Boto to 2.34.0; automatically download Boto from PyPi instead of packaging it |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-19 17:02:37 -0800 |
| Commit: c28083f, github.com/apache/spark/pull/3737 |
| |
| [SPARK-4896] don't redundantly overwrite executor JAR deps |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2014-12-19 15:24:41 -0800 |
| Commit: 7981f96, github.com/apache/spark/pull/2848 |
| |
| [SPARK-4889] update history server example cmds |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2014-12-19 13:56:04 -0800 |
| Commit: cdb2c64, github.com/apache/spark/pull/3736 |
| |
| Small refactoring to pass SparkEnv into Executor rather than creating SparkEnv in Executor. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-19 12:51:12 -0800 |
| Commit: 336cd34, github.com/apache/spark/pull/3738 |
| |
| [Build] Remove spark-staging-1038 |
| scwf <wangfei1@huawei.com> |
| 2014-12-19 08:29:38 -0800 |
| Commit: 8e253eb, github.com/apache/spark/pull/3743 |
| |
| [SPARK-4901] [SQL] Hot fix for ByteWritables.copyBytes |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-19 08:04:41 -0800 |
| Commit: 5479450, github.com/apache/spark/pull/3742 |
| |
| SPARK-3428. TaskMetrics for running tasks is missing GC time metrics |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-18 22:40:44 -0800 |
| Commit: 283263f, github.com/apache/spark/pull/3684 |
| |
| [SPARK-4674] Refactor getCallSite |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-12-18 21:41:02 -0800 |
| Commit: d7fc69a, github.com/apache/spark/pull/3532 |
| |
| [SPARK-4728][MLLib] Add exponential, gamma, and log normal sampling to MLlib da... |
| RJ Nowling <rnowling@gmail.com> |
| 2014-12-18 21:00:49 -0800 |
| Commit: ee1fb97, github.com/apache/spark/pull/3680 |
| |
| [SPARK-4861][SQL] Refactory command in spark sql |
| wangfei <wangfei1@huawei.com>, scwf <wangfei1@huawei.com> |
| 2014-12-18 20:24:56 -0800 |
| Commit: c3d91da, github.com/apache/spark/pull/3712 |
| |
| [SPARK-4573] [SQL] Add SettableStructObjectInspector support in "wrap" function |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-18 20:21:52 -0800 |
| Commit: ae9f128, github.com/apache/spark/pull/3429 |
| |
| [SPARK-2554][SQL] Supporting SumDistinct partial aggregation |
| ravipesala <ravindra.pesala@huawei.com> |
| 2014-12-18 20:19:10 -0800 |
| Commit: 7687415, github.com/apache/spark/pull/3348 |
| |
| [SPARK-4693] [SQL] PruningPredicates may be wrong if predicates contains an empty AttributeSet() references |
| YanTangZhai <hakeemzhai@tencent.com>, yantangzhai <tyz0303@163.com> |
| 2014-12-18 20:13:46 -0800 |
| Commit: e7de7e5, github.com/apache/spark/pull/3556 |
| |
| [SPARK-4756][SQL] FIX: sessionToActivePool grow infinitely, even as sessions expire |
| guowei2 <guowei2@asiainfo.com> |
| 2014-12-18 20:10:23 -0800 |
| Commit: 22ddb6e, github.com/apache/spark/pull/3617 |
| |
| [SPARK-3928][SQL] Support wildcard matches on Parquet files. |
| Thu Kyaw <trk007@gmail.com> |
| 2014-12-18 20:08:32 -0800 |
| Commit: b68bc6d, github.com/apache/spark/pull/3407 |
| |
| [SPARK-2663] [SQL] Support the Grouping Set |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-18 18:58:29 -0800 |
| Commit: f728e0f, github.com/apache/spark/pull/1567 |
| |
| [SPARK-4754] Refactor SparkContext into ExecutorAllocationClient |
| Andrew Or <andrew@databricks.com> |
| 2014-12-18 17:37:42 -0800 |
| Commit: 9804a75, github.com/apache/spark/pull/3614 |
| |
| [SPARK-4837] NettyBlockTransferService should use spark.blockManager.port config |
| Aaron Davidson <aaron@databricks.com> |
| 2014-12-18 16:43:16 -0800 |
| Commit: 105293a, github.com/apache/spark/pull/3688 |
| |
| SPARK-4743 - Use SparkEnv.serializer instead of closureSerializer in aggregateByKey and foldByKey |
| Ivan Vergiliev <ivan@leanplum.com> |
| 2014-12-18 16:29:36 -0800 |
| Commit: f9f58b9, github.com/apache/spark/pull/3605 |
| |
| [SPARK-4884]: Improve Partition docs |
| Madhu Siddalingaiah <madhu@madhu.com> |
| 2014-12-18 16:00:53 -0800 |
| Commit: d5a596d, github.com/apache/spark/pull/3722 |
| |
| [SPARK-4880] remove spark.locality.wait in Analytics |
| Ernest <earneyzxl@gmail.com> |
| 2014-12-18 15:42:26 -0800 |
| Commit: a7ed6f3, github.com/apache/spark/pull/3730 |
| |
| [SPARK-4887][MLlib] Fix a bad unittest in LogisticRegressionSuite |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-18 13:55:49 -0800 |
| Commit: 59a49db, github.com/apache/spark/pull/3735 |
| |
| [SPARK-3607] ConnectionManager threads.max configs on the thread pools don't work |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2014-12-18 12:53:18 -0800 |
| Commit: 3720057, github.com/apache/spark/pull/3664 |
| |
| Add mesos specific configurations into doc |
| Timothy Chen <tnachen@gmail.com> |
| 2014-12-18 12:15:53 -0800 |
| Commit: d9956f8, github.com/apache/spark/pull/3349 |
| |
| SPARK-3779. yarn spark.yarn.applicationMaster.waitTries config should be... |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-18 12:19:07 -0600 |
| Commit: 253b72b, github.com/apache/spark/pull/3471 |
| |
| [SPARK-4461][YARN] pass extra java options to yarn application master |
| Zhan Zhang <zhazhan@gmail.com> |
| 2014-12-18 10:01:46 -0600 |
| Commit: 3b76469, github.com/apache/spark/pull/3409 |
| |
| [SPARK-4822] Use sphinx tags for Python doc annotations |
| lewuathe <lewuathe@me.com> |
| 2014-12-17 17:31:24 -0800 |
| Commit: 3cd5161, github.com/apache/spark/pull/3685 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-17 15:50:10 -0800 |
| Commit: ca12608, github.com/apache/spark/pull/3137 |
| |
| [SPARK-3891][SQL] Add array support to percentile, percentile_approx and constant inspectors support |
| Venkata Ramana G <ramana.gollamudi@huawei.com>, Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com> |
| 2014-12-17 15:41:35 -0800 |
| Commit: f33d550, github.com/apache/spark/pull/2802 |
| |
| [SPARK-4856] [SQL] NullType instead of StringType when sampling against empty string or nul... |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-17 15:01:59 -0800 |
| Commit: 8d0d2a6, github.com/apache/spark/pull/3708 |
| |
| [HOTFIX][SQL] Fix parquet filter suite |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-17 14:27:02 -0800 |
| Commit: 19c0faa, github.com/apache/spark/pull/3727 |
| |
| [SPARK-4821] [mllib] [python] [docs] Fix for pyspark.mllib.rand doc |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-12-17 14:12:46 -0800 |
| Commit: affc3f4, github.com/apache/spark/pull/3669 |
| |
| [SPARK-3739] [SQL] Update the split num base on block size for table scanning |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-17 13:39:36 -0800 |
| Commit: 636d9fc, github.com/apache/spark/pull/2589 |
| |
| [SPARK-4755] [SQL] sqrt(negative value) should return null |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-17 12:51:27 -0800 |
| Commit: 902e4d5, github.com/apache/spark/pull/3616 |
| |
| [SPARK-4493][SQL] Don't pushdown Eq, NotEq, Lt, LtEq, Gt and GtEq predicates with nulls for Parquet |
| Cheng Lian <lian@databricks.com> |
| 2014-12-17 12:48:04 -0800 |
| Commit: 6277135, github.com/apache/spark/pull/3367 |
| |
| [SPARK-3698][SQL] Fix case insensitive resolution of GetField. |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-17 12:43:51 -0800 |
| Commit: 7ad579e, github.com/apache/spark/pull/3724 |
| |
| [SPARK-4694] Fix HiveThriftServer2 can't stop in Yarn HA mode. |
| carlmartin <carlmartinmax@gmail.com> |
| 2014-12-17 12:24:03 -0800 |
| Commit: 4782def, github.com/apache/spark/pull/3576 |
| |
| [SPARK-4625] [SQL] Add sort by for DSL & SimpleSqlParser |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-17 12:01:57 -0800 |
| Commit: 5fdcbdc, github.com/apache/spark/pull/3481 |
| |
| [SPARK-4595][Core] Fix MetricsServlet not work issue |
| Saisai Shao <saisai.shao@intel.com>, Josh Rosen <joshrosen@databricks.com>, jerryshao <saisai.shao@intel.com> |
| 2014-12-17 11:47:44 -0800 |
| Commit: cf50631, github.com/apache/spark/pull/3444 |
| |
| [HOTFIX] Fix RAT exclusion for known_translations file |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-16 23:00:25 -0800 |
| Commit: 3d0c37b, github.com/apache/spark/pull/3719 |
| |
| [Release] Update contributors list format and sort it |
| Andrew Or <andrew@databricks.com> |
| 2014-12-16 22:11:03 -0800 |
| Commit: 4e1112e |
| |
| [SPARK-4618][SQL] Make foreign DDL commands options case-insensitive |
| scwf <wangfei1@huawei.com>, wangfei <wangfei1@huawei.com> |
| 2014-12-16 21:26:36 -0800 |
| Commit: 6069880, github.com/apache/spark/pull/3470 |
| |
| [SPARK-4866] support StructType as key in MapType |
| Davies Liu <davies@databricks.com> |
| 2014-12-16 21:23:28 -0800 |
| Commit: ec5c427, github.com/apache/spark/pull/3714 |
| |
| [SPARK-4375] [SQL] Add 0 argument support for udf |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-16 21:21:11 -0800 |
| Commit: 770d815, github.com/apache/spark/pull/3595 |
| |
| [SPARK-4720][SQL] Remainder should also return null if the divider is 0. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-12-16 21:19:57 -0800 |
| Commit: ddc7ba3, github.com/apache/spark/pull/3581 |
| |
| [SPARK-4744] [SQL] Short circuit evaluation for AND & OR in CodeGen |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-16 21:18:39 -0800 |
| Commit: 0aa834a, github.com/apache/spark/pull/3606 |
| |
| [SPARK-4798][SQL] A new set of Parquet testing API and test suites |
| Cheng Lian <lian@databricks.com> |
| 2014-12-16 21:16:03 -0800 |
| Commit: 3b395e1, github.com/apache/spark/pull/3644 |
| |
| [Release] Cache known author translations locally |
| Andrew Or <andrew@databricks.com> |
| 2014-12-16 19:28:43 -0800 |
| Commit: b85044e |
| |
| [Release] Major improvements to generate contributors script |
| Andrew Or <andrew@databricks.com> |
| 2014-12-16 17:55:27 -0800 |
| Commit: 6f80b74 |
| |
| [SPARK-4269][SQL] make wait time configurable in BroadcastHashJoin |
| Jacky Li <jacky.likun@huawei.com> |
| 2014-12-16 15:34:59 -0800 |
| Commit: fa66ef6, github.com/apache/spark/pull/3133 |
| |
| [SPARK-4827][SQL] Fix resolution of deeply nested Project(attr, Project(Star,...)). |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-16 15:31:19 -0800 |
| Commit: a66c23e, github.com/apache/spark/pull/3674 |
| |
| [SPARK-4483][SQL]Optimization about reduce memory costs during the HashOuterJoin |
| tianyi <tianyi@asiainfo-linkage.com>, tianyi <tianyi.asiainfo@gmail.com> |
| 2014-12-16 15:22:29 -0800 |
| Commit: 30f6b85, github.com/apache/spark/pull/3375 |
| |
| [SPARK-4527][SQL] Add BroadcastNestedLoopJoin operator selection testsuite |
| wangxiaojing <u9jing@gmail.com> |
| 2014-12-16 14:45:56 -0800 |
| Commit: ea1315e, github.com/apache/spark/pull/3395 |
| |
| SPARK-4767: Add support for launching in a specified placement group to spark_ec2 |
| Holden Karau <holden@pigscanfly.ca> |
| 2014-12-16 14:37:04 -0800 |
| Commit: b0dfdbd, github.com/apache/spark/pull/3623 |
| |
| [SPARK-4812][SQL] Fix the initialization issue of 'codegenEnabled' |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-16 14:13:40 -0800 |
| Commit: 6530243, github.com/apache/spark/pull/3660 |
| |
| [SPARK-4847][SQL]Fix "extraStrategies cannot take effect in SQLContext" issue |
| jerryshao <saisai.shao@intel.com> |
| 2014-12-16 14:08:28 -0800 |
| Commit: dc8280d, github.com/apache/spark/pull/3698 |
| |
| [DOCS][SQL] Add a Note on jsonFile having separate JSON objects per line |
| Peter Vandenabeele <peter@vandenabeele.com> |
| 2014-12-16 13:57:55 -0800 |
| Commit: 1a9e35e, github.com/apache/spark/pull/3517 |
| |
| [SQL] SPARK-4700: Add HTTP protocol spark thrift server |
| Judy Nash <judynash@microsoft.com>, judynash <judynash@microsoft.com> |
| 2014-12-16 12:37:26 -0800 |
| Commit: 17688d1, github.com/apache/spark/pull/3672 |
| |
| [SPARK-3405] add subnet-id and vpc-id options to spark_ec2.py |
| Mike Jennings <mvj101@gmail.com>, Mike Jennings <mvj@google.com> |
| 2014-12-16 12:13:21 -0800 |
| Commit: d12c071, github.com/apache/spark/pull/2872 |
| |
| [SPARK-4855][mllib] testing the Chi-squared hypothesis test |
| jbencook <jbenjamincook@gmail.com> |
| 2014-12-16 11:37:23 -0800 |
| Commit: cb48447, github.com/apache/spark/pull/3679 |
| |
| [SPARK-4437] update doc for WholeCombineFileRecordReader |
| Davies Liu <davies@databricks.com>, Josh Rosen <joshrosen@databricks.com> |
| 2014-12-16 11:19:36 -0800 |
| Commit: ed36200, github.com/apache/spark/pull/3301 |
| |
| [SPARK-4841] fix zip with textFile() |
| Davies Liu <davies@databricks.com> |
| 2014-12-15 22:58:26 -0800 |
| Commit: c246b95, github.com/apache/spark/pull/3706 |
| |
| [SPARK-4792] Add error message when making local dir unsuccessfully |
| meiyoula <1039320815@qq.com> |
| 2014-12-15 22:30:18 -0800 |
| Commit: c762877, github.com/apache/spark/pull/3635 |
| |
| SPARK-4814 [CORE] Enable assertions in SBT, Maven tests / AssertionError from Hive's LazyBinaryInteger |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-15 17:12:05 -0800 |
| Commit: 81112e4, github.com/apache/spark/pull/3692 |
| |
| [Minor][Core] fix comments in MapOutputTracker |
| wangfei <wangfei1@huawei.com> |
| 2014-12-15 16:46:21 -0800 |
| Commit: 5c24759, github.com/apache/spark/pull/3700 |
| |
| SPARK-785 [CORE] ClosureCleaner not invoked on most PairRDDFunctions |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-15 16:06:15 -0800 |
| Commit: 2a28bc6, github.com/apache/spark/pull/3690 |
| |
| [SPARK-4668] Fix some documentation typos. |
| Ryan Williams <ryan.blake.williams@gmail.com> |
| 2014-12-15 14:52:17 -0800 |
| Commit: 8176b7a, github.com/apache/spark/pull/3523 |
| |
| [SPARK-1037] The name of findTaskFromList & findTask in TaskSetManager.scala is confusing |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2014-12-15 14:51:15 -0800 |
| Commit: 38703bb, github.com/apache/spark/pull/3665 |
| |
| [SPARK-4826] Fix generation of temp file names in WAL tests |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-15 14:33:43 -0800 |
| Commit: f6b8591, github.com/apache/spark/pull/3695 |
| |
| [SPARK-4494][mllib] IDFModel.transform() add support for single vector |
| Yuu ISHIKAWA <yuu.ishikawa@gmail.com> |
| 2014-12-15 13:44:15 -0800 |
| Commit: 8098fab, github.com/apache/spark/pull/3603 |
| |
| HOTFIX: Disabling failing block manager test |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-15 10:54:45 -0800 |
| Commit: 4c06738 |
| |
| fixed spelling errors in documentation |
| Peter Klipfel <peter@klipfel.me> |
| 2014-12-14 00:01:16 -0800 |
| Commit: 2a2983f, github.com/apache/spark/pull/3691 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-11 23:38:40 -0800 |
| Commit: ef84dab, github.com/apache/spark/pull/3488 |
| |
| [SPARK-4829] [SQL] add rule to fold count(expr) if expr is not null |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-11 22:56:42 -0800 |
| Commit: 41a3f93, github.com/apache/spark/pull/3676 |
| |
| [SPARK-4742][SQL] The name of Parquet File generated by AppendingParquetOutputFormat should be zero padded |
| Sasaki Toru <sasakitoa@nttdata.co.jp> |
| 2014-12-11 22:54:21 -0800 |
| Commit: 8091dd6, github.com/apache/spark/pull/3602 |
| |
| [SPARK-4825] [SQL] CTAS fails to resolve when created using saveAsTable |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-11 22:51:49 -0800 |
| Commit: 0abbff2, github.com/apache/spark/pull/3673 |
| |
| [SQL] enable empty aggr test case |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-11 22:50:18 -0800 |
| Commit: cbb634a, github.com/apache/spark/pull/3445 |
| |
| [SPARK-4828] [SQL] sum and avg on empty table should always return null |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-11 22:49:27 -0800 |
| Commit: acb3be6, github.com/apache/spark/pull/3675 |
| |
| [SQL] Remove unnecessary case in HiveContext.toHiveString |
| scwf <wangfei1@huawei.com> |
| 2014-12-11 22:48:03 -0800 |
| Commit: d8cf678, github.com/apache/spark/pull/3563 |
| |
| [SPARK-4293][SQL] Make Cast be able to handle complex types. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-12-11 22:45:25 -0800 |
| Commit: 3344803, github.com/apache/spark/pull/3150 |
| |
| [SPARK-4639] [SQL] Pass maxIterations in as a parameter in Analyzer |
| Jacky Li <jacky.likun@huawei.com> |
| 2014-12-11 22:44:27 -0800 |
| Commit: c152dde, github.com/apache/spark/pull/3499 |
| |
| [SPARK-4662] [SQL] Whitelist more unittest |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-11 22:43:02 -0800 |
| Commit: a7f07f5, github.com/apache/spark/pull/3522 |
| |
| [SPARK-4713] [SQL] SchemaRDD.unpersist() should not raise exception if it is not persisted |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-11 22:41:36 -0800 |
| Commit: bf40cf8, github.com/apache/spark/pull/3572 |
| |
| [SPARK-4806] Streaming doc update for 1.2 |
| Tathagata Das <tathagata.das1565@gmail.com>, Josh Rosen <joshrosen@databricks.com>, Josh Rosen <rosenville@gmail.com> |
| 2014-12-11 06:21:23 -0800 |
| Commit: b004150, github.com/apache/spark/pull/3653 |
| |
| [SPARK-4791] [sql] Infer schema from case class with multiple constructors |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-12-10 23:41:15 -0800 |
| Commit: 2a5b5fd, github.com/apache/spark/pull/3646 |
| |
| [CORE] Code style: make the ConcurrentHashMap definition in StorageLevel.scala uniform with other places |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-12-10 20:44:59 -0800 |
| Commit: 57d37f9, github.com/apache/spark/pull/2793 |
| |
| SPARK-3526 Add section about data locality to the tuning guide |
| Andrew Ash <andrew@andrewash.com> |
| 2014-12-10 15:01:15 -0800 |
| Commit: 652b781, github.com/apache/spark/pull/2519 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-10 14:41:16 -0800 |
| Commit: 36bdb5b, github.com/apache/spark/pull/2883 |
| |
| [SPARK-4759] Fix driver hanging from coalescing partitions |
| Andrew Or <andrew@databricks.com> |
| 2014-12-10 14:27:53 -0800 |
| Commit: 4f93d0c, github.com/apache/spark/pull/3633 |
| |
| [SPARK-4569] Rename 'externalSorting' in Aggregator |
| Ilya Ganelin <ilya.ganelin@capitalone.com> |
| 2014-12-10 14:19:37 -0800 |
| Commit: 447ae2d, github.com/apache/spark/pull/3666 |
| |
| [SPARK-4793] [Deploy] ensure .jar at end of line |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-10 13:29:27 -0800 |
| Commit: e230da1, github.com/apache/spark/pull/3641 |
| |
| [SPARK-4215] Allow requesting / killing executors only in YARN mode |
| Andrew Or <andrew@databricks.com> |
| 2014-12-10 12:48:24 -0800 |
| Commit: faa8fd8, github.com/apache/spark/pull/3615 |
| |
| [SPARK-4771][Docs] Document standalone cluster supervise mode |
| Andrew Or <andrew@databricks.com> |
| 2014-12-10 12:41:36 -0800 |
| Commit: 5621283, github.com/apache/spark/pull/3627 |
| |
| [SPARK-4329][WebUI] HistoryPage pagination |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-10 12:29:00 -0800 |
| Commit: 0fc637b, github.com/apache/spark/pull/3194 |
| |
| [SPARK-4161]Spark shell class path is not correctly set if "spark.driver.extraClassPath" is set in defaults.conf |
| GuoQiang Li <witgo@qq.com> |
| 2014-12-10 12:24:04 -0800 |
| Commit: 742e709, github.com/apache/spark/pull/3050 |
| |
| [SPARK-4772] Clear local copies of accumulators as soon as we're done with them |
| Nathan Kronenfeld <nkronenfeld@oculusinfo.com> |
| 2014-12-09 23:53:17 -0800 |
| Commit: 94b377f, github.com/apache/spark/pull/3570 |
| |
| [Minor] Use <sup> tag for help icon in web UI page header |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-12-09 23:47:05 -0800 |
| Commit: f79c1cf, github.com/apache/spark/pull/3659 |
| |
| Config updates for the new shuffle transport. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-09 19:29:09 -0800 |
| Commit: 9bd9334, github.com/apache/spark/pull/3657 |
| |
| [SPARK-4740] Create multiple concurrent connections between two peer nodes in Netty. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-09 17:49:59 -0800 |
| Commit: 2b9b726, github.com/apache/spark/pull/3625 |
| |
| SPARK-4805 [CORE] BlockTransferMessage.toByteArray() trips assertion |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-09 16:38:27 -0800 |
| Commit: d8f84f2, github.com/apache/spark/pull/3650 |
| |
| SPARK-4567. Make SparkJobInfo and SparkStageInfo serializable |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-09 16:26:07 -0800 |
| Commit: 5e4c06f, github.com/apache/spark/pull/3426 |
| |
| [SPARK-4714] BlockManager.dropFromMemory() should check whether block has been removed after synchronizing on BlockInfo instance. |
| hushan <hushan@xiaomi.com> |
| 2014-12-09 15:11:20 -0800 |
| Commit: 30dca92, github.com/apache/spark/pull/3574 |
| |
| [SPARK-4765] Make GC time always shown in UI. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-12-09 15:10:36 -0800 |
| Commit: 1f51106, github.com/apache/spark/pull/3622 |
| |
| [SPARK-4691][shuffle] Restructure a few lines in shuffle code |
| maji2014 <maji3@asiainfo.com> |
| 2014-12-09 13:13:12 -0800 |
| Commit: b310744, github.com/apache/spark/pull/3553 |
| |
| [SPARK-874] adding a --wait flag |
| jbencook <jbenjamincook@gmail.com> |
| 2014-12-09 12:16:19 -0800 |
| Commit: 61f1a70, github.com/apache/spark/pull/3567 |
| |
| SPARK-4338. [YARN] Ditch yarn-alpha. |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-09 11:02:43 -0800 |
| Commit: 912563a, github.com/apache/spark/pull/3215 |
| |
| [SPARK-4785][SQL] Initialize Hive UDFs on the driver and serialize them with a wrapper |
| Cheng Hao <hao.cheng@intel.com>, Cheng Lian <lian@databricks.com> |
| 2014-12-09 10:28:15 -0800 |
| Commit: 383c555, github.com/apache/spark/pull/3640 |
| |
| [SPARK-3154][STREAMING] Replace ConcurrentHashMap with mutable.HashMap and remove @volatile from 'stopped' |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-08 23:54:15 -0800 |
| Commit: bcb5cda, github.com/apache/spark/pull/3634 |
| |
| [SPARK-4769] [SQL] CTAS does not work when reading from temporary tables |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-12-08 17:39:12 -0800 |
| Commit: 51b1fe1, github.com/apache/spark/pull/3336 |
| |
| [SQL] remove unnecessary import in spark-sql |
| Jacky Li <jacky.likun@huawei.com> |
| 2014-12-08 17:27:46 -0800 |
| Commit: 9443843, github.com/apache/spark/pull/3630 |
| |
| SPARK-4770. [DOC] [YARN] spark.scheduler.minRegisteredResourcesRatio doc... |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-12-08 16:28:36 -0800 |
| Commit: cda94d1, github.com/apache/spark/pull/3624 |
| |
| SPARK-3926 [CORE] Reopened: result of JavaRDD collectAsMap() is not serializable |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-08 16:13:03 -0800 |
| Commit: e829bfa, github.com/apache/spark/pull/3587 |
| |
| [SPARK-4750] Dynamic allocation - synchronize kills |
| Andrew Or <andrew@databricks.com> |
| 2014-12-08 16:02:33 -0800 |
| Commit: 65f929d, github.com/apache/spark/pull/3612 |
| |
| [SPARK-4774] [SQL] Makes HiveFromSpark more portable |
| Kostas Sakellis <kostas@cloudera.com> |
| 2014-12-08 15:44:18 -0800 |
| Commit: d6a972b, github.com/apache/spark/pull/3628 |
| |
| [SPARK-4764] Ensure that files are fetched atomically |
| Christophe PrƩaud <christophe.preaud@kelkoo.com> |
| 2014-12-08 11:44:54 -0800 |
| Commit: ab2abcb, github.com/apache/spark/pull/2855 |
| |
| [SPARK-4620] Add unpersist in Graph and GraphImpl |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2014-12-07 19:42:02 -0800 |
| Commit: 8817fc7, github.com/apache/spark/pull/3476 |
| |
| [SPARK-4646] Replace Scala.util.Sorting.quickSort with Sorter(TimSort) in Spark |
| Takeshi Yamamuro <linguin.m.s@gmail.com> |
| 2014-12-07 19:36:08 -0800 |
| Commit: 2e6b736, github.com/apache/spark/pull/3507 |
| |
| [SPARK-3623][GraphX] GraphX should support the checkpoint operation |
| GuoQiang Li <witgo@qq.com> |
| 2014-12-06 00:56:51 -0800 |
| Commit: e895e0c, github.com/apache/spark/pull/2631 |
| |
| Streaming doc : do you mean inadvertently? |
| CrazyJvm <crazyjvm@gmail.com> |
| 2014-12-05 13:42:13 -0800 |
| Commit: 6eb1b6f, github.com/apache/spark/pull/3620 |
| |
| [SPARK-4005][CORE] handle message replies in receive instead of in the individual private methods |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-12-05 12:00:32 -0800 |
| Commit: 98a7d09, github.com/apache/spark/pull/2853 |
| |
| [SPARK-4761][SQL] Enables Kryo by default in Spark SQL Thrift server |
| Cheng Lian <lian@databricks.com> |
| 2014-12-05 10:27:40 -0800 |
| Commit: 6f61e1f, github.com/apache/spark/pull/3621 |
| |
| [SPARK-4753][SQL] Use catalyst for partition pruning in newParquet. |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-04 22:25:21 -0800 |
| Commit: f5801e8, github.com/apache/spark/pull/3613 |
| |
| Revert "SPARK-2624 add datanucleus jars to the container in yarn-cluster" |
| Andrew Or <andrew@databricks.com> |
| 2014-12-04 21:53:49 -0800 |
| Commit: fd85253 |
| |
| Revert "[HOT FIX] [YARN] Check whether `/lib` exists before listing its files" |
| Andrew Or <andrew@databricks.com> |
| 2014-12-04 21:53:38 -0800 |
| Commit: 87437df |
| |
| [SPARK-4464] Descriptions of configuration options need to be modified in docs. |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2014-12-04 19:33:02 -0800 |
| Commit: ca37903, github.com/apache/spark/pull/3329 |
| |
| Fix typo in Spark SQL docs. |
| Andy Konwinski <andykonwinski@gmail.com> |
| 2014-12-04 18:27:02 -0800 |
| Commit: 15cf3b0, github.com/apache/spark/pull/3611 |
| |
| [SPARK-4421] Wrong link in spark-standalone.html |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2014-12-04 18:14:36 -0800 |
| Commit: ddfc09c, github.com/apache/spark/pull/3279 |
| |
| [SPARK-4397] Move object RDD to the front of RDD.scala. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-04 16:32:20 -0800 |
| Commit: ed92b47, github.com/apache/spark/pull/3580 |
| |
| [SPARK-4652][DOCS] Add docs about spark-git-repo option |
| lewuathe <lewuathe@me.com>, Josh Rosen <joshrosen@databricks.com> |
| 2014-12-04 15:14:36 -0800 |
| Commit: ab8177d, github.com/apache/spark/pull/3513 |
| |
| [SPARK-4459] Change groupBy type parameter from K to U |
| Saldanha <saldaal1@phusca-l24858.wlan.na.novartis.net> |
| 2014-12-04 14:22:09 -0800 |
| Commit: 743a889, github.com/apache/spark/pull/3327 |
| |
| [SPARK-4745] Fix get_existing_cluster() function with multiple security groups |
| alexdebrie <alexdebrie1@gmail.com> |
| 2014-12-04 14:13:59 -0800 |
| Commit: 794f3ae, github.com/apache/spark/pull/3596 |
| |
| [HOTFIX] Fixing two issues with the release script. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-04 12:11:41 -0800 |
| Commit: 8dae26f, github.com/apache/spark/pull/3608 |
| |
| [SPARK-4253] Ignore spark.driver.host in yarn-cluster and standalone-cluster modes |
| WangTaoTheTonic <barneystinson@aliyun.com>, WangTao <barneystinson@aliyun.com> |
| 2014-12-04 11:52:47 -0800 |
| Commit: 8106b1e, github.com/apache/spark/pull/3112 |
| |
| [SPARK-4683][SQL] Add a beeline.cmd to run on Windows |
| Cheng Lian <lian@databricks.com> |
| 2014-12-04 10:21:03 -0800 |
| Commit: 28c7aca, github.com/apache/spark/pull/3599 |
| |
| [FIX][DOC] Fix broken links in ml-guide.md |
| Xiangrui Meng <meng@databricks.com> |
| 2014-12-04 20:16:35 +0800 |
| Commit: 7e758d7, github.com/apache/spark/pull/3601 |
| |
| [SPARK-4575] [mllib] [docs] spark.ml pipelines doc + bug fixes |
| Joseph K. Bradley <joseph@databricks.com>, jkbradley <joseph.kurata.bradley@gmail.com>, Xiangrui Meng <meng@databricks.com> |
| 2014-12-04 17:00:06 +0800 |
| Commit: 469a6e5, github.com/apache/spark/pull/3588 |
| |
| [docs] Fix outdated comment in tuning guide |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-12-04 00:59:32 -0800 |
| Commit: 529439b, github.com/apache/spark/pull/3592 |
| |
| [SQL] Minor: Avoid calling Seq#size in a loop |
| Aaron Davidson <aaron@databricks.com> |
| 2014-12-04 00:58:42 -0800 |
| Commit: c6c7165, github.com/apache/spark/pull/3593 |
| |
| [SPARK-4685] Include all spark.ml and spark.mllib packages in JavaDoc's MLlib group |
| lewuathe <lewuathe@me.com>, Xiangrui Meng <meng@databricks.com> |
| 2014-12-04 16:51:41 +0800 |
| Commit: 20bfea4, github.com/apache/spark/pull/3554 |
| |
| [SPARK-4719][API] Consolidate various narrow dep RDD classes with MapPartitionsRDD |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-04 00:45:57 -0800 |
| Commit: c3ad486, github.com/apache/spark/pull/3578 |
| |
| [SQL] remove unnecessary import |
| Jacky Li <jacky.likun@huawei.com> |
| 2014-12-04 00:43:55 -0800 |
| Commit: ed88db4, github.com/apache/spark/pull/3585 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-03 22:15:46 -0800 |
| Commit: 3cdae03, github.com/apache/spark/pull/1875 |
| |
| [Release] Correctly translate contributors name in release notes |
| Andrew Or <andrew@databricks.com> |
| 2014-12-03 19:08:29 -0800 |
| Commit: a4dfb4e |
| |
| [SPARK-4580] [SPARK-4610] [mllib] [docs] Documentation for tree ensembles + DecisionTree API fix |
| Joseph K. Bradley <joseph@databricks.com>, Joseph K. Bradley <joseph.kurata.bradley@gmail.com> |
| 2014-12-04 09:57:50 +0800 |
| Commit: 657a888, github.com/apache/spark/pull/3461 |
| |
| [SPARK-4711] [mllib] [docs] Programming guide advice on choosing optimizer |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-12-04 08:58:03 +0800 |
| Commit: 27ab0b8, github.com/apache/spark/pull/3569 |
| |
| [SPARK-4085] Propagate FetchFailedException when Spark fails to read local shuffle file. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-03 16:28:24 -0800 |
| Commit: 1826372, github.com/apache/spark/pull/3579 |
| |
| [SPARK-4498][core] Don't transition ExecutorInfo to RUNNING until Driver adds Executor |
| Mark Hamstra <markhamstra@gmail.com> |
| 2014-12-03 15:08:01 -0800 |
| Commit: 96b2785, github.com/apache/spark/pull/3550 |
| |
| [SPARK-4552][SQL] Avoid exception when reading empty parquet data through Hive |
| Michael Armbrust <michael@databricks.com> |
| 2014-12-03 14:13:35 -0800 |
| Commit: 513ef82, github.com/apache/spark/pull/3586 |
| |
| [HOT FIX] [YARN] Check whether `/lib` exists before listing its files |
| Andrew Or <andrew@databricks.com> |
| 2014-12-03 13:56:23 -0800 |
| Commit: 90ec643, github.com/apache/spark/pull/3589 |
| |
| [SPARK-4642] Add description about spark.yarn.queue to running-on-YARN document. |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2014-12-03 13:16:24 -0800 |
| Commit: 692f493, github.com/apache/spark/pull/3500 |
| |
| [SPARK-4715][Core] Make sure tryToAcquire won't return a negative value |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-03 12:19:40 -0800 |
| Commit: edd3cd4, github.com/apache/spark/pull/3575 |
| |
| [SPARK-4701] Typo in sbt/sbt |
| Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp> |
| 2014-12-03 12:08:00 -0800 |
| Commit: 96786e3, github.com/apache/spark/pull/3560 |
| |
| SPARK-2624 add datanucleus jars to the container in yarn-cluster |
| Jim Lim <jim@quixey.com> |
| 2014-12-03 11:16:02 -0800 |
| Commit: a975dc3, github.com/apache/spark/pull/3238 |
| |
| [SPARK-4717][MLlib] Optimize BLAS library to avoid de-reference multiple times in loop |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-03 22:31:39 +0800 |
| Commit: d005429, github.com/apache/spark/pull/3577 |
| |
| [SPARK-4708][MLLib] Make k-mean runs two/three times faster with dense/sparse sample |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-03 19:01:56 +0800 |
| Commit: 7fc49ed, github.com/apache/spark/pull/3565 |
| |
| [SPARK-4710] [mllib] Eliminate MLlib compilation warnings |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-12-03 18:50:03 +0800 |
| Commit: 4ac2151, github.com/apache/spark/pull/3568 |
| |
| [SPARK-4397][Core] Change the 'since' value of '@deprecated' to '1.3.0' |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-03 02:05:17 -0800 |
| Commit: 8af551f, github.com/apache/spark/pull/3573 |
| |
| [SPARK-4672][Core]Checkpoint() should clear f to shorten the serialization chain |
| JerryLead <JerryLead@163.com>, Lijie Xu <csxulijie@gmail.com> |
| 2014-12-02 23:53:29 -0800 |
| Commit: 77be8b9, github.com/apache/spark/pull/3545 |
| |
| [SPARK-4672][GraphX]Non-transient PartitionsRDDs will lead to StackOverflow error |
| JerryLead <JerryLead@163.com>, Lijie Xu <csxulijie@gmail.com> |
| 2014-12-02 17:14:11 -0800 |
| Commit: 17c162f, github.com/apache/spark/pull/3544 |
| |
| [SPARK-4672][GraphX]Perform checkpoint() on PartitionsRDD to shorten the lineage |
| JerryLead <JerryLead@163.com>, Lijie Xu <csxulijie@gmail.com> |
| 2014-12-02 17:08:02 -0800 |
| Commit: fc0a147, github.com/apache/spark/pull/3549 |
| |
| [Release] Translate unknown author names automatically |
| Andrew Or <andrew@databricks.com> |
| 2014-12-02 16:36:12 -0800 |
| Commit: 5da21f0 |
| |
| Minor nit style cleanup in GraphX. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-02 14:40:26 -0800 |
| Commit: 2d4f6e7 |
| |
| [SPARK-4695][SQL] Get result using executeCollect |
| wangfei <wangfei1@huawei.com> |
| 2014-12-02 14:30:44 -0800 |
| Commit: 3ae0cda, github.com/apache/spark/pull/3547 |
| |
| [SPARK-4670] [SQL] wrong symbol for bitwise not |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-02 14:25:12 -0800 |
| Commit: 1f5ddf1, github.com/apache/spark/pull/3528 |
| |
| [SPARK-4593][SQL] Return null when denominator is 0 |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-02 14:21:12 -0800 |
| Commit: f6df609, github.com/apache/spark/pull/3443 |
| |
| [SPARK-4676][SQL] JavaSchemaRDD.schema may throw NullType MatchError if sql has null |
| YanTangZhai <hakeemzhai@tencent.com>, yantangzhai <tyz0303@163.com>, Michael Armbrust <michael@databricks.com> |
| 2014-12-02 14:12:48 -0800 |
| Commit: 1066427, github.com/apache/spark/pull/3538 |
| |
| [SPARK-4663][sql]add finally to avoid resource leak |
| baishuo <vc_java@hotmail.com> |
| 2014-12-02 12:12:03 -0800 |
| Commit: 69b6fed, github.com/apache/spark/pull/3526 |
| |
| [SPARK-4536][SQL] Add sqrt and abs to Spark SQL DSL |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-02 12:07:52 -0800 |
| Commit: e75e04f, github.com/apache/spark/pull/3401 |
| |
| Indent license header properly for interfaces.scala. |
| Reynold Xin <rxin@databricks.com> |
| 2014-12-02 11:59:15 -0800 |
| Commit: b1f8fe3, github.com/apache/spark/pull/3552 |
| |
| [SPARK-4686] Link to allowed master URLs is broken |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-12-02 09:06:02 -0800 |
| Commit: d9a148b, github.com/apache/spark/pull/3542 |
| |
| [SPARK-4397][Core] Cleanup 'import SparkContext._' in core |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-02 00:18:41 -0800 |
| Commit: 6dfe38a, github.com/apache/spark/pull/3530 |
| |
| [SPARK-4611][MLlib] Implement the efficient vector norm |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-12-02 11:40:43 +0800 |
| Commit: 64f3175, github.com/apache/spark/pull/3462 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-12-01 17:27:14 -0800 |
| Commit: b0a46d8, github.com/apache/spark/pull/1612 |
| |
| [SPARK-4268][SQL] Use #::: to get benefit from Stream in SqlLexical.allCaseVersions |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-01 16:39:54 -0800 |
| Commit: d3e02dd, github.com/apache/spark/pull/3132 |
| |
| [SPARK-4529] [SQL] support view with column alias |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-01 16:08:51 -0800 |
| Commit: 4df60a8, github.com/apache/spark/pull/3396 |
| |
| [SQL][DOC] Date type in SQL programming guide |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-12-01 14:03:57 -0800 |
| Commit: 5edbcbf, github.com/apache/spark/pull/3535 |
| |
| [SQL] Minor fix for doc and comment |
| wangfei <wangfei1@huawei.com> |
| 2014-12-01 14:02:02 -0800 |
| Commit: 7b79957, github.com/apache/spark/pull/3533 |
| |
| [SPARK-4658][SQL] Code documentation issue in DDL of datasource API |
| ravipesala <ravindra.pesala@huawei.com> |
| 2014-12-01 13:31:27 -0800 |
| Commit: bc35381, github.com/apache/spark/pull/3516 |
| |
| [SPARK-4650][SQL] Support multiple columns in the countDistinct function, like count(distinct c1,c2..), in Spark SQL |
| ravipesala <ravindra.pesala@huawei.com>, Michael Armbrust <michael@databricks.com> |
| 2014-12-01 13:26:44 -0800 |
| Commit: 6a9ff19, github.com/apache/spark/pull/3511 |
| |
| [SPARK-4358][SQL] Let BigDecimal do checking type compatibility |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-12-01 13:17:56 -0800 |
| Commit: b57365a, github.com/apache/spark/pull/3208 |
| |
| [SQL] add @group tab in limit() and count() |
| Jacky Li <jacky.likun@gmail.com> |
| 2014-12-01 13:12:30 -0800 |
| Commit: bafee67, github.com/apache/spark/pull/3458 |
| |
| [SPARK-4258][SQL][DOC] Documents spark.sql.parquet.filterPushdown |
| Cheng Lian <lian@databricks.com> |
| 2014-12-01 13:09:51 -0800 |
| Commit: 5db8dca, github.com/apache/spark/pull/3440 |
| |
| Documentation: add description for repartitionAndSortWithinPartitions |
| Madhu Siddalingaiah <madhu@madhu.com> |
| 2014-12-01 08:45:34 -0800 |
| Commit: 2b233f5, github.com/apache/spark/pull/3390 |
| |
| [SPARK-4661][Core] Minor code and docs cleanup |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-01 00:35:01 -0800 |
| Commit: 30a86ac, github.com/apache/spark/pull/3521 |
| |
| [SPARK-4664][Core] Throw an exception when spark.akka.frameSize > 2047 |
| zsxwing <zsxwing@gmail.com> |
| 2014-12-01 00:32:54 -0800 |
| Commit: 1d238f2, github.com/apache/spark/pull/3527 |
| |
| SPARK-2192 [BUILD] Examples Data Not in Binary Distribution |
| Sean Owen <sowen@cloudera.com> |
| 2014-12-01 16:31:04 +0800 |
| Commit: 6384f42, github.com/apache/spark/pull/3480 |
| |
| Fix wrong file name pattern in .gitignore |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-12-01 00:29:28 -0800 |
| Commit: 97eb6d7, github.com/apache/spark/pull/3529 |
| |
| [SPARK-4632] version update |
| Prabeesh K <prabsmails@gmail.com> |
| 2014-11-30 20:51:53 -0800 |
| Commit: 5e7a6dc, github.com/apache/spark/pull/3495 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-30 20:51:13 -0800 |
| Commit: 06dc1b1, github.com/apache/spark/pull/2915 |
| |
| [DOC] Fixes formatting typo in SQL programming guide |
| Cheng Lian <lian@databricks.com> |
| 2014-11-30 19:04:07 -0800 |
| Commit: 2a4d389, github.com/apache/spark/pull/3498 |
| |
| [SPARK-4656][Doc] Typo in Programming Guide markdown |
| lewuathe <lewuathe@me.com> |
| 2014-11-30 17:18:50 -0800 |
| Commit: a217ec5, github.com/apache/spark/pull/3412 |
| |
| [SPARK-4623] Add some error information if using spark-sql in yarn-cluster mode |
| carlmartin <carlmartinmax@gmail.com>, huangzhaowei <carlmartinmax@gmail.com> |
| 2014-11-30 16:19:41 -0800 |
| Commit: aea7a99, github.com/apache/spark/pull/3479 |
| |
| SPARK-2143 [WEB UI] Add Spark version to UI footer |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-30 11:40:08 -0800 |
| Commit: 048ecca, github.com/apache/spark/pull/3410 |
| |
| [DOCS][BUILD] Add instruction to use change-version-to-2.11.sh in 'Building for Scala 2.11'. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-30 00:10:31 -0500 |
| Commit: 0fcd24c, github.com/apache/spark/pull/3361 |
| |
| SPARK-4507: PR merge script should support closing multiple JIRA tickets |
| Takayuki Hasegawa <takayuki.hasegawa0311@gmail.com> |
| 2014-11-29 23:12:10 -0500 |
| Commit: 4316a7b, github.com/apache/spark/pull/3428 |
| |
| [SPARK-4505][Core] Add a ClassTag parameter to CompactBuffer[T] |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-29 20:23:08 -0500 |
| Commit: c062224, github.com/apache/spark/pull/3378 |
| |
| [SPARK-4057] Use -agentlib instead of -Xdebug in sbt-launch-lib.bash for debugging |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-29 20:14:14 -0500 |
| Commit: 938dc14, github.com/apache/spark/pull/2904 |
| |
| Include the key name when failing on an invalid value. |
| Stephen Haberman <stephen@exigencecorp.com> |
| 2014-11-29 20:12:05 -0500 |
| Commit: 95290bf, github.com/apache/spark/pull/3514 |
| |
| [SPARK-3398] [SPARK-4325] [EC2] Use EC2 status checks. |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-11-29 00:31:06 -0800 |
| Commit: 317e114, github.com/apache/spark/pull/3195 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-29 00:24:35 -0500 |
| Commit: 047ff57, github.com/apache/spark/pull/3451 |
| |
| [SPARK-4597] Use proper exception and reset variable in Utils.createTempDir() |
| Liang-Chi Hsieh <viirya@gmail.com> |
| 2014-11-28 18:04:05 -0800 |
| Commit: 49fe879, github.com/apache/spark/pull/3449 |
| |
| SPARK-1450 [EC2] Specify the default zone in the EC2 script help |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-28 17:43:38 -0500 |
| Commit: 48223d8, github.com/apache/spark/pull/3454 |
| |
| [SPARK-4584] [yarn] Remove security manager from Yarn AM. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2014-11-28 15:15:30 -0500 |
| Commit: 915f8ee, github.com/apache/spark/pull/3484 |
| |
| [SPARK-4193][BUILD] Disable doclint in Java 8 to prevent from build error. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-28 13:00:15 -0500 |
| Commit: e464f0a, github.com/apache/spark/pull/3058 |
| |
| [SPARK-4643] [Build] Remove unneeded staging repositories from build |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-11-28 12:41:38 -0500 |
| Commit: 53ed7f1, github.com/apache/spark/pull/3504 |
| |
| Delete unnecessary function |
| KaiXinXiaoLei <huleilei1@huawei.com> |
| 2014-11-28 12:34:07 -0500 |
| Commit: 052e658, github.com/apache/spark/pull/3224 |
| |
| [SPARK-4645][SQL] Disables asynchronous execution in Hive 0.13.1 HiveThriftServer2 |
| Cheng Lian <lian@databricks.com> |
| 2014-11-28 11:42:40 -0500 |
| Commit: 5b99bf2, github.com/apache/spark/pull/3506 |
| |
| [SPARK-4619][Storage] Delete redundant time suffix |
| maji2014 <maji3@asiainfo.com> |
| 2014-11-28 00:36:22 -0800 |
| Commit: ceb6281, github.com/apache/spark/pull/3475 |
| |
| [SPARK-4613][Core] Java API for JdbcRDD |
| Cheng Lian <lian@databricks.com> |
| 2014-11-27 18:01:14 -0800 |
| Commit: 120a350, github.com/apache/spark/pull/3478 |
| |
| [SPARK-4626] Kill a task only if the executorId is (still) registered with the scheduler |
| roxchkplusony <roxchkplusony@gmail.com> |
| 2014-11-27 15:54:40 -0800 |
| Commit: 84376d3, github.com/apache/spark/pull/3483 |
| |
| SPARK-4170 [CORE] Closure problems when running Scala app that "extends App" |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-27 09:03:17 -0800 |
| Commit: 5d7fe17, github.com/apache/spark/pull/3497 |
| |
| [Release] Automate generation of contributors list |
| Andrew Or <andrew@databricks.com> |
| 2014-11-26 23:16:23 -0800 |
| Commit: c86e9bc |
| |
| [SPARK-732][SPARK-3628][CORE][RESUBMIT] eliminate duplicate update on accumulator |
| CodingCat <zhunansjtu@gmail.com> |
| 2014-11-26 16:52:04 -0800 |
| Commit: 5af53ad, github.com/apache/spark/pull/2524 |
| |
| [SPARK-4614][MLLIB] Slight API changes in Matrix and Matrices |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-26 08:22:50 -0800 |
| Commit: 561d31d, github.com/apache/spark/pull/3468 |
| |
| Removing confusing TripletFields |
| Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com> |
| 2014-11-26 00:55:28 -0800 |
| Commit: 288ce58, github.com/apache/spark/pull/3472 |
| |
| [SPARK-4612] Reduce task latency and increase scheduling throughput by making configuration initialization lazy |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-25 23:15:58 -0800 |
| Commit: e7f4d25, github.com/apache/spark/pull/3463 |
| |
| [SPARK-4516] Avoid allocating Netty PooledByteBufAllocators unnecessarily |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-26 00:32:45 -0500 |
| Commit: 346bc17, github.com/apache/spark/pull/3465 |
| |
| [SPARK-4516] Cap default number of Netty threads at 8 |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-25 23:57:04 -0500 |
| Commit: f5f2d27, github.com/apache/spark/pull/3469 |
| |
| [SPARK-4604][MLLIB] make MatrixFactorizationModel public |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-25 20:11:40 -0800 |
| Commit: b5fb141, github.com/apache/spark/pull/3459 |
| |
| [HOTFIX]: Adding back without-hive dist |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-25 23:10:19 -0500 |
| Commit: 4d95526 |
| |
| [SPARK-4583] [mllib] LogLoss for GradientBoostedTrees fix + doc updates |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-11-25 20:10:15 -0800 |
| Commit: c251fd7, github.com/apache/spark/pull/3439 |
| |
| [Spark-4509] Revert EC2 tag-based cluster membership patch |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-25 16:07:09 -0800 |
| Commit: 7eba0fb, github.com/apache/spark/pull/3453 |
| |
| Fix SPARK-4471: blockManagerIdFromJson function throws exception while B... |
| hushan[胡珊] <hushan@xiaomi.com> |
| 2014-11-25 15:51:08 -0800 |
| Commit: 9bdf5da, github.com/apache/spark/pull/3340 |
| |
| [SPARK-4546] Improve HistoryServer first time user experience |
| Andrew Or <andrew@databricks.com> |
| 2014-11-25 15:48:02 -0800 |
| Commit: 9afcbe4, github.com/apache/spark/pull/3411 |
| |
| [SPARK-4592] Avoid duplicate worker registrations in standalone mode |
| Andrew Or <andrew@databricks.com> |
| 2014-11-25 15:46:26 -0800 |
| Commit: 1b2ab1c, github.com/apache/spark/pull/3447 |
| |
| [SPARK-4196][SPARK-4602][Streaming] Fix serialization issue in PairDStreamFunctions.saveAsNewAPIHadoopFiles |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-25 14:16:27 -0800 |
| Commit: 8838ad7, github.com/apache/spark/pull/3457 |
| |
| [SPARK-4581][MLlib] Refactorize StandardScaler to improve the transformation performance |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-11-25 11:07:11 -0800 |
| Commit: bf1a6aa, github.com/apache/spark/pull/3435 |
| |
| [SPARK-4601][Streaming] Set correct call site for streaming jobs so that it is displayed correctly on the Spark UI |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-25 06:50:36 -0800 |
| Commit: 69cd53e, github.com/apache/spark/pull/3455 |
| |
| [SPARK-4344][DOCS] adding documentation on spark.yarn.user.classpath.first |
| arahuja <aahuja11@gmail.com> |
| 2014-11-25 08:23:41 -0600 |
| Commit: d240760, github.com/apache/spark/pull/3209 |
| |
| [SPARK-4381][Streaming]Add warning log when user set spark.master to local in Spark Streaming and there's no job executed |
| jerryshao <saisai.shao@intel.com> |
| 2014-11-25 05:36:29 -0800 |
| Commit: fef27b2, github.com/apache/spark/pull/3244 |
| |
| [SPARK-4535][Streaming] Fix the error in comments |
| q00251598 <qiyadong@huawei.com> |
| 2014-11-25 04:01:56 -0800 |
| Commit: a51118a, github.com/apache/spark/pull/3400 |
| |
| [SPARK-4526][MLLIB]GradientDescent get a wrong gradient value according to the gradient formula. |
| GuoQiang Li <witgo@qq.com> |
| 2014-11-25 02:01:19 -0800 |
| Commit: f515f94, github.com/apache/spark/pull/3399 |
| |
| [SPARK-4596][MLLib] Refactorize Normalizer to make code cleaner |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-11-25 01:57:34 -0800 |
| Commit: 89f9122, github.com/apache/spark/pull/3446 |
| |
| [DOC][Build] Wrong cmd for building Spark with Apache Hadoop 2.4.X and Hive 12 |
| wangfei <wangfei1@huawei.com> |
| 2014-11-24 22:32:39 -0800 |
| Commit: 0fe54cf, github.com/apache/spark/pull/3335 |
| |
| [SQL] Compute timeTaken correctly |
| w00228970 <wangfei1@huawei.com> |
| 2014-11-24 21:17:24 -0800 |
| Commit: 723be60, github.com/apache/spark/pull/3423 |
| |
| [SPARK-4582][MLLIB] get raw vectors for further processing in Word2Vec |
| tkaessmann <tobias.kaessmann@s24.com>, tkaessmann <tobias.kaessmann@s24.com> |
| 2014-11-24 19:58:01 -0800 |
| Commit: 9ce2bf3, github.com/apache/spark/pull/3309 |
| |
| [SPARK-4525] Mesos should decline unused offers |
| Patrick Wendell <pwendell@gmail.com>, Jongyoul Lee <jongyoul@gmail.com> |
| 2014-11-24 19:14:14 -0800 |
| Commit: f0afb62, github.com/apache/spark/pull/3436 |
| |
| Revert "[SPARK-4525] Mesos should decline unused offers" |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-24 19:16:53 -0800 |
| Commit: a68d442 |
| |
| [SPARK-4525] Mesos should decline unused offers |
| Patrick Wendell <pwendell@gmail.com>, Jongyoul Lee <jongyoul@gmail.com> |
| 2014-11-24 19:14:14 -0800 |
| Commit: b043c27, github.com/apache/spark/pull/3436 |
| |
| [SPARK-4266] [Web-UI] Reduce stage page load time. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-11-24 18:03:10 -0800 |
| Commit: d24d5bf, github.com/apache/spark/pull/3328 |
| |
| [SPARK-4548] [SPARK-4517] improve performance of python broadcast |
| Davies Liu <davies@databricks.com> |
| 2014-11-24 17:17:03 -0800 |
| Commit: 6cf5076, github.com/apache/spark/pull/3417 |
| |
| [SPARK-4578] fix asDict() with nested Row() |
| Davies Liu <davies@databricks.com> |
| 2014-11-24 16:41:23 -0800 |
| Commit: 050616b, github.com/apache/spark/pull/3434 |
| |
| [SPARK-4562] [MLlib] speedup vector |
| Davies Liu <davies@databricks.com> |
| 2014-11-24 16:37:14 -0800 |
| Commit: b660de7, github.com/apache/spark/pull/3420 |
| |
| [SPARK-4518][SPARK-4519][Streaming] Refactored file stream to prevent files from being processed multiple times |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-24 13:50:20 -0800 |
| Commit: cb0e9b0, github.com/apache/spark/pull/3419 |
| |
| [SPARK-4145] Web UI job pages |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-24 13:18:14 -0800 |
| Commit: 4a90276, github.com/apache/spark/pull/3009 |
| |
| [SPARK-4487][SQL] Fix attribute reference resolution error when using ORDER BY. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-24 12:54:37 -0800 |
| Commit: dd1c9cb, github.com/apache/spark/pull/3363 |
| |
| [SQL] Fix path in HiveFromSpark |
| scwf <wangfei1@huawei.com> |
| 2014-11-24 12:49:08 -0800 |
| Commit: b384119, github.com/apache/spark/pull/3415 |
| |
| [SQL] Fix comment in HiveShim |
| Daniel Darabos <darabos.daniel@gmail.com> |
| 2014-11-24 12:45:07 -0800 |
| Commit: d5834f0, github.com/apache/spark/pull/3432 |
| |
| [SPARK-4479][SQL] Avoids unnecessary defensive copies when sort based shuffle is on |
| Cheng Lian <lian@databricks.com> |
| 2014-11-24 12:43:45 -0800 |
| Commit: a6d7b61, github.com/apache/spark/pull/3422 |
| |
| SPARK-4457. Document how to build for Hadoop versions greater than 2.4 |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-24 13:28:48 -0600 |
| Commit: 29372b6, github.com/apache/spark/pull/3322 |
| |
| [SPARK-4377] Fixed serialization issue by switching to akka provided serializer. |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2014-11-22 14:05:38 -0800 |
| Commit: 9b2a3c6, github.com/apache/spark/pull/3402 |
| |
| [SPARK-4431][MLlib] Implement efficient foreachActive for dense and sparse vector |
| DB Tsai <dbtsai@alpinenow.com> |
| 2014-11-21 18:15:07 -0800 |
| Commit: b5d17ef, github.com/apache/spark/pull/3288 |
| |
| [SPARK-4531] [MLlib] cache serialized java object |
| Davies Liu <davies@databricks.com> |
| 2014-11-21 15:02:31 -0800 |
| Commit: ce95bd8, github.com/apache/spark/pull/3397 |
| |
| SPARK-4532: Fix bug in detection of Hive in Spark 1.2 |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-21 12:10:04 -0800 |
| Commit: a81918c, github.com/apache/spark/pull/3398 |
| |
| [SPARK-4397][Core] Reorganize 'implicit's to improve the API convenience |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-21 10:06:30 -0800 |
| Commit: 65b987c, github.com/apache/spark/pull/3262 |
| |
| [SPARK-4472][Shell] Print "Spark context available as sc." only when SparkContext is created... |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-21 00:42:43 -0800 |
| Commit: f1069b8, github.com/apache/spark/pull/3341 |
| |
| [Doc][GraphX] Remove unused png files. |
| Reynold Xin <rxin@databricks.com> |
| 2014-11-21 00:30:58 -0800 |
| Commit: 28fdc6f |
| |
| [Doc][GraphX] Remove Motivation section and did some minor update. |
| Reynold Xin <rxin@databricks.com> |
| 2014-11-21 00:29:02 -0800 |
| Commit: b97070e |
| |
| [SPARK-4522][SQL] Parse schema with missing metadata. |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-20 20:34:43 -0800 |
| Commit: 90a6a46, github.com/apache/spark/pull/3392 |
| |
| add Sphinx as a dependency of building docs |
| Davies Liu <davies@databricks.com> |
| 2014-11-20 19:12:45 -0800 |
| Commit: 8cd6eea, github.com/apache/spark/pull/3388 |
| |
| [SPARK-4413][SQL] Parquet support through datasource API |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-20 18:31:02 -0800 |
| Commit: 02ec058, github.com/apache/spark/pull/3269 |
| |
| [SPARK-4244] [SQL] Support Hive Generic UDFs with constant object inspector parameters |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-20 16:50:59 -0800 |
| Commit: 84d79ee, github.com/apache/spark/pull/3109 |
| |
| [SPARK-4477] [PySpark] remove numpy from RDDSampler |
| Davies Liu <davies@databricks.com>, Xiangrui Meng <meng@databricks.com> |
| 2014-11-20 16:40:25 -0800 |
| Commit: d39f2e9, github.com/apache/spark/pull/3351 |
| |
| [SQL] fix function description mistake |
| Jacky Li <jacky.likun@gmail.com> |
| 2014-11-20 15:48:36 -0800 |
| Commit: ad5f1f3, github.com/apache/spark/pull/3344 |
| |
| [SPARK-2918] [SQL] Support the CTAS in EXPLAIN command |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-20 15:46:00 -0800 |
| Commit: 6aa0fc9, github.com/apache/spark/pull/3357 |
| |
| [SPARK-4318][SQL] Fix empty sum distinct. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-20 15:41:24 -0800 |
| Commit: 2c2e7a4, github.com/apache/spark/pull/3184 |
| |
| [SPARK-4513][SQL] Support relational operator '<=>' in Spark SQL |
| ravipesala <ravindra.pesala@huawei.com> |
| 2014-11-20 15:34:03 -0800 |
| Commit: 98e9419, github.com/apache/spark/pull/3387 |
| |
| [SPARK-4439] [MLlib] add python api for random forest |
| Davies Liu <davies@databricks.com> |
| 2014-11-20 15:31:28 -0800 |
| Commit: 1c53a5d, github.com/apache/spark/pull/3320 |
| |
| [SPARK-4228][SQL] SchemaRDD to JSON |
| Dan McClary <dan.mcclary@gmail.com> |
| 2014-11-20 13:36:50 -0800 |
| Commit: b8e6886, github.com/apache/spark/pull/3213 |
| |
| [SPARK-3938][SQL] Names in-memory columnar RDD with corresponding table name |
| Cheng Lian <lian@databricks.com> |
| 2014-11-20 13:12:24 -0800 |
| Commit: abf2918, github.com/apache/spark/pull/3383 |
| |
| [SPARK-4486][MLLIB] Improve GradientBoosting APIs and doc |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-20 00:48:59 -0800 |
| Commit: 15cacc8, github.com/apache/spark/pull/3374 |
| |
| [SPARK-4446] [SPARK CORE] |
| Leolh <leosandylh@gmail.com> |
| 2014-11-19 18:18:55 -0800 |
| Commit: e216ffa, github.com/apache/spark/pull/3306 |
| |
| [SPARK-4480] Avoid many small spills in external data structures |
| Andrew Or <andrew@databricks.com> |
| 2014-11-19 18:07:27 -0800 |
| Commit: 0eb4a7f, github.com/apache/spark/pull/3353 |
| |
| [Spark-4484] Treat maxResultSize as unlimited when set to 0; improve error message |
| Nishkam Ravi <nravi@cloudera.com>, nravi <nravi@c1704.halxg.cloudera.com>, nishkamravi2 <nishkamravi@gmail.com> |
| 2014-11-19 17:23:42 -0800 |
| Commit: 73fedf5, github.com/apache/spark/pull/3360 |
| |
| [SPARK-4478] Keep totalRegisteredExecutors up-to-date |
| Akshat Aranya <aaranya@quantcast.com> |
| 2014-11-19 17:20:20 -0800 |
| Commit: 9ccc53c, github.com/apache/spark/pull/3373 |
| |
| Updating GraphX programming guide and documentation |
| Joseph E. Gonzalez <joseph.e.gonzalez@gmail.com> |
| 2014-11-19 16:53:33 -0800 |
| Commit: 377b068, github.com/apache/spark/pull/3359 |
| |
| [SPARK-4495] Fix memory leak in JobProgressListener |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-19 16:50:21 -0800 |
| Commit: 04d462f, github.com/apache/spark/pull/3372 |
| |
| [SPARK-4294][Streaming] UnionDStream stream should express the requirements in the same way as TransformedDStream |
| Yadong Qi <qiyadong2010@gmail.com> |
| 2014-11-19 15:53:06 -0800 |
| Commit: c3002c4, github.com/apache/spark/pull/3152 |
| |
| [SPARK-4384] [PySpark] improve sort spilling |
| Davies Liu <davies@databricks.com> |
| 2014-11-19 15:45:37 -0800 |
| Commit: 73c8ea8, github.com/apache/spark/pull/3252 |
| |
| [SPARK-4429][BUILD] Build for Scala 2.11 using sbt fails. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-19 14:40:21 -0800 |
| Commit: f9adda9, github.com/apache/spark/pull/3342 |
| |
| [DOC][PySpark][Streaming] Fix docstring for sphinx |
| Ken Takagiwa <ugw.gi.world@gmail.com> |
| 2014-11-19 14:23:18 -0800 |
| Commit: 9b7bbce, github.com/apache/spark/pull/3311 |
| |
| SPARK-3962 Marked scope as provided for external projects. |
| Prashant Sharma <prashant.s@imaginea.com>, Prashant Sharma <scrapcodes@gmail.com> |
| 2014-11-19 14:18:10 -0800 |
| Commit: 1c93841, github.com/apache/spark/pull/2959 |
| |
| [HOT FIX] MiMa tests are broken |
| Andrew Or <andrew@databricks.com> |
| 2014-11-19 14:03:44 -0800 |
| Commit: 0df02ca, github.com/apache/spark/pull/3371 |
| |
| [SPARK-4481][Streaming][Doc] Fix the wrong description of updateFunc |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-19 13:17:15 -0800 |
| Commit: 3bf7cee, github.com/apache/spark/pull/3356 |
| |
| [SPARK-4482][Streaming] Disable ReceivedBlockTracker's write ahead log by default |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-19 13:06:48 -0800 |
| Commit: 22fc4e7, github.com/apache/spark/pull/3358 |
| |
| [SPARK-4470] Validate number of threads in local mode |
| Kenichi Maehashi <webmaster@kenichimaehashi.com> |
| 2014-11-19 12:11:09 -0800 |
| Commit: eacc788, github.com/apache/spark/pull/3337 |
| |
| [SPARK-4467] fix elements read count for ExternalSorter |
| Tianshuo Deng <tdeng@twitter.com> |
| 2014-11-19 10:01:09 -0800 |
| Commit: d75579d, github.com/apache/spark/pull/3302 |
| |
| SPARK-4455 Exclude dependency on hbase-annotations module |
| tedyu <yuzhihong@gmail.com> |
| 2014-11-19 00:55:39 -0800 |
| Commit: 5f5ac2d, github.com/apache/spark/pull/3286 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-19 00:27:31 -0800 |
| Commit: 8327df6, github.com/apache/spark/pull/2777 |
| |
| [Spark-4432] Close InStream after the block is accessed |
| Mingfei <mingfei.shi@intel.com> |
| 2014-11-18 22:17:06 -0800 |
| Commit: 165cec9, github.com/apache/spark/pull/3290 |
| |
| [SPARK-4441] Close Tachyon client when TachyonBlockManager is shutdown |
| Mingfei <mingfei.shi@intel.com> |
| 2014-11-18 22:16:36 -0800 |
| Commit: 67e9876, github.com/apache/spark/pull/3299 |
| |
| Bumping version to 1.3.0-SNAPSHOT. |
| Marcelo Vanzin <vanzin@cloudera.com> |
| 2014-11-18 21:24:18 -0800 |
| Commit: 397d3aa, github.com/apache/spark/pull/3277 |
| |
| [SPARK-4468][SQL] Fixes Parquet filter creation for inequality predicates with literals on the left hand side |
| Cheng Lian <lian@databricks.com> |
| 2014-11-18 17:41:54 -0800 |
| Commit: 423baea, github.com/apache/spark/pull/3334 |
| |
| [SPARK-4327] [PySpark] Python API for RDD.randomSplit() |
| Davies Liu <davies@databricks.com> |
| 2014-11-18 16:37:35 -0800 |
| Commit: 7f22fa8, github.com/apache/spark/pull/3193 |
| |
| [SPARK-4433] fix a race condition in zipWithIndex |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-18 16:25:44 -0800 |
| Commit: bb46046, github.com/apache/spark/pull/3291 |
| |
| [SPARK-3721] [PySpark] broadcast objects larger than 2G |
| Davies Liu <davies@databricks.com>, Davies Liu <davies.liu@gmail.com> |
| 2014-11-18 16:17:51 -0800 |
| Commit: 4a377af, github.com/apache/spark/pull/2659 |
| |
| [SPARK-4306] [MLlib] Python API for LogisticRegressionWithLBFGS |
| Davies Liu <davies@databricks.com> |
| 2014-11-18 15:57:33 -0800 |
| Commit: d2e2951, github.com/apache/spark/pull/3307 |
| |
| [SPARK-4463] Add (de)select all button for add'l metrics. |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-11-18 15:01:06 -0800 |
| Commit: 010bc86, github.com/apache/spark/pull/3331 |
| |
| [SPARK-4017] show progress bar in console |
| Davies Liu <davies@databricks.com> |
| 2014-11-18 13:37:21 -0800 |
| Commit: e34f38f, github.com/apache/spark/pull/3029 |
| |
| [SPARK-4404] remove sys.exit() in shutdown hook |
| Davies Liu <davies@databricks.com> |
| 2014-11-18 13:11:38 -0800 |
| Commit: 80f3177, github.com/apache/spark/pull/3289 |
| |
| [SPARK-4075][SPARK-4434] Fix the URI validation logic for Application Jar name. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-18 12:17:33 -0800 |
| Commit: bfebfd8, github.com/apache/spark/pull/3326 |
| |
| [SQL] Support partitioned parquet tables that have the key in both the directory and the file |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-18 12:13:23 -0800 |
| Commit: 90d72ec, github.com/apache/spark/pull/3272 |
| |
| [SPARK-4396] allow lookup by index in Python's Rating |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-18 10:35:29 -0800 |
| Commit: b54c6ab, github.com/apache/spark/pull/3261 |
| |
| [SPARK-4435] [MLlib] [PySpark] improve classification |
| Davies Liu <davies@databricks.com> |
| 2014-11-18 10:11:13 -0800 |
| Commit: 8fbf72b, github.com/apache/spark/pull/3305 |
| |
| ALS implicit: added missing parameter alpha in doc string |
| Felix Maximilian Mƶller <felixmaximilian.moeller@immobilienscout24.de> |
| 2014-11-18 10:08:24 -0800 |
| Commit: cedc3b5, github.com/apache/spark/pull/3343 |
| |
| SPARK-4466: Provide support for publishing Scala 2.11 artifacts to Maven |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-17 21:07:50 -0800 |
| Commit: c6e0c2a, github.com/apache/spark/pull/3332 |
| |
| [SPARK-4453][SPARK-4213][SQL] Simplifies Parquet filter generation code |
| Cheng Lian <lian@databricks.com> |
| 2014-11-17 16:55:12 -0800 |
| Commit: 36b0956, github.com/apache/spark/pull/3317 |
| |
| [SPARK-4448] [SQL] unwrap for the ConstantObjectInspector |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-17 16:35:49 -0800 |
| Commit: ef7c464, github.com/apache/spark/pull/3308 |
| |
| [SPARK-4443][SQL] Fix statistics for external table in spark sql hive |
| w00228970 <wangfei1@huawei.com> |
| 2014-11-17 16:33:50 -0800 |
| Commit: 42389b1, github.com/apache/spark/pull/3304 |
| |
| [SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types |
| Cheng Lian <lian@databricks.com> |
| 2014-11-17 16:31:05 -0800 |
| Commit: 6b7f2f7, github.com/apache/spark/pull/3298 |
| |
| [SQL] Construct the MutableRow from an Array |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-17 16:29:52 -0800 |
| Commit: 69e858c, github.com/apache/spark/pull/3217 |
| |
| [SPARK-4425][SQL] Handle NaN or Infinity cast to Timestamp correctly. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-17 16:28:07 -0800 |
| Commit: 566c791, github.com/apache/spark/pull/3283 |
| |
| [SPARK-4420][SQL] Change nullability of Cast from DoubleType/FloatType to DecimalType. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-17 16:26:48 -0800 |
| Commit: 3a81a1c, github.com/apache/spark/pull/3278 |
| |
| [SQL] Makes conjunction pushdown more aggressive for in-memory table |
| Cheng Lian <lian@databricks.com> |
| 2014-11-17 15:33:13 -0800 |
| Commit: 5ce7dae, github.com/apache/spark/pull/3318 |
| |
| [SPARK-4180] [Core] Prevent creation of multiple active SparkContexts |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-17 12:48:18 -0800 |
| Commit: 0f3ceb5, github.com/apache/spark/pull/3121 |
| |
| [DOCS][SQL] Fix broken link to Row class scaladoc |
| Andy Konwinski <andykonwinski@gmail.com> |
| 2014-11-17 11:52:23 -0800 |
| Commit: cec1116, github.com/apache/spark/pull/3323 |
| |
| Revert "[SPARK-4075] [Deploy] Jar url validation is not enough for Jar file" |
| Andrew Or <andrew@databricks.com> |
| 2014-11-17 11:24:28 -0800 |
| Commit: dbb9da5 |
| |
| [SPARK-4444] Drop VD type parameter from EdgeRDD |
| Ankur Dave <ankurdave@gmail.com> |
| 2014-11-17 11:06:31 -0800 |
| Commit: 9ac2bb1, github.com/apache/spark/pull/3303 |
| |
| SPARK-2811 upgrade algebird to 0.8.1 |
| Adam Pingel <adam@axle-lang.org> |
| 2014-11-17 10:47:29 -0800 |
| Commit: e7690ed, github.com/apache/spark/pull/3282 |
| |
| SPARK-4445, Don't display storage level in toDebugString unless RDD is persisted. |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2014-11-17 10:40:33 -0800 |
| Commit: 5c92d47, github.com/apache/spark/pull/3310 |
| |
| [SPARK-4410][SQL] Add support for external sort |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-16 21:55:57 -0800 |
| Commit: 64c6b9b, github.com/apache/spark/pull/3268 |
| |
| [SPARK-4422][MLLIB]In some cases, Vectors.fromBreeze get wrong results. |
| GuoQiang Li <witgo@qq.com> |
| 2014-11-16 21:31:51 -0800 |
| Commit: 5168c6c, github.com/apache/spark/pull/3281 |
| |
| Revert "[SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types" |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-16 15:05:04 -0800 |
| Commit: 45ce327, github.com/apache/spark/pull/3292 |
| |
| [SPARK-4309][SPARK-4407][SQL] Date type support for Thrift server, and fixes for complex types |
| Cheng Lian <lian@databricks.com> |
| 2014-11-16 14:26:41 -0800 |
| Commit: cb6bd83, github.com/apache/spark/pull/3178 |
| |
| [SPARK-4393] Fix memory leak in ConnectionManager ACK timeout TimerTasks; use HashedWheelTimer |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-16 00:44:15 -0800 |
| Commit: 7850e0c, github.com/apache/spark/pull/3259 |
| |
| [SPARK-4426][SQL][Minor] The symbol of BitwiseOr is wrong, should not be '&' |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-15 22:23:47 -0800 |
| Commit: 84468b2, github.com/apache/spark/pull/3284 |
| |
| [SPARK-4419] Upgrade snappy-java to 1.1.1.6 |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-15 22:22:34 -0800 |
| Commit: 7d8e152, github.com/apache/spark/pull/3287 |
| |
| [SPARK-2321] Several progress API improvements / refactorings |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-14 23:46:25 -0800 |
| Commit: 40eb8b6, github.com/apache/spark/pull/3197 |
| |
| Added contains(key) to Metadata |
| kai <kaizeng@eecs.berkeley.edu> |
| 2014-11-14 23:44:23 -0800 |
| Commit: cbddac2, github.com/apache/spark/pull/3273 |
| |
| [SPARK-4260] Httpbroadcast should set connection timeout. |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-14 22:36:56 -0800 |
| Commit: 60969b0, github.com/apache/spark/pull/3122 |
| |
| [SPARK-4363][Doc] Update the Broadcast example |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-14 22:28:48 -0800 |
| Commit: 861223e, github.com/apache/spark/pull/3226 |
| |
| [SPARK-4379][Core] Change Exception to SparkException in checkpoint |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-14 22:25:41 -0800 |
| Commit: dba1405, github.com/apache/spark/pull/3241 |
| |
| [SPARK-4415] [PySpark] JVM should exit after Python exit |
| Davies Liu <davies@databricks.com> |
| 2014-11-14 20:13:46 -0800 |
| Commit: 7fe08b4, github.com/apache/spark/pull/3274 |
| |
| [SPARK-4404]SparkSubmitDriverBootstrapper should stop after its SparkSubmit sub-proc... |
| WangTao <barneystinson@aliyun.com>, WangTaoTheTonic <barneystinson@aliyun.com> |
| 2014-11-14 20:11:51 -0800 |
| Commit: 303a4e4, github.com/apache/spark/pull/3266 |
| |
| SPARK-4214. With dynamic allocation, avoid outstanding requests for more... |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-14 15:51:05 -0800 |
| Commit: ad42b28, github.com/apache/spark/pull/3204 |
| |
| [SPARK-4412][SQL] Fix Spark's control of Parquet logging. |
| Jim Carroll <jim@dontcallme.com> |
| 2014-11-14 15:33:21 -0800 |
| Commit: 37482ce, github.com/apache/spark/pull/3271 |
| |
| [SPARK-4365][SQL] Remove unnecessary filter call on records returned from parquet library |
| Yash Datta <Yash.Datta@guavus.com> |
| 2014-11-14 15:16:36 -0800 |
| Commit: 63ca3af, github.com/apache/spark/pull/3229 |
| |
| [SPARK-4386] Improve performance when writing Parquet files. |
| Jim Carroll <jim@dontcallme.com> |
| 2014-11-14 15:11:53 -0800 |
| Commit: f76b968, github.com/apache/spark/pull/3254 |
| |
| [SPARK-4322][SQL] Enables struct fields as sub expressions of grouping fields |
| Cheng Lian <lian@databricks.com> |
| 2014-11-14 15:09:36 -0800 |
| Commit: 0c7b66b, github.com/apache/spark/pull/3248 |
| |
| [SQL] Don't shuffle code generated rows |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-14 15:03:23 -0800 |
| Commit: 4b4b50c, github.com/apache/spark/pull/3263 |
| |
| [SQL] Minor cleanup of comments, errors and override. |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-14 15:00:42 -0800 |
| Commit: f805025, github.com/apache/spark/pull/3257 |
| |
| [SPARK-4391][SQL] Configure parquet filters using SQLConf |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-14 14:59:35 -0800 |
| Commit: e47c387, github.com/apache/spark/pull/3258 |
| |
| [SPARK-4390][SQL] Handle NaN cast to decimal correctly |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-14 14:56:57 -0800 |
| Commit: a0300ea, github.com/apache/spark/pull/3256 |
| |
| [SPARK-4062][Streaming]Add ReliableKafkaReceiver in Spark Streaming Kafka connector |
| jerryshao <saisai.shao@intel.com>, Tathagata Das <tathagata.das1565@gmail.com>, Saisai Shao <saisai.shao@intel.com> |
| 2014-11-14 14:33:37 -0800 |
| Commit: 5930f64, github.com/apache/spark/pull/2991 |
| |
| [SPARK-4333][SQL] Correctly log number of iterations in RuleExecutor |
| DoingDone9 <799203320@qq.com> |
| 2014-11-14 14:28:06 -0800 |
| Commit: 0cbdb01, github.com/apache/spark/pull/3180 |
| |
| SPARK-4375. no longer require -Pscala-2.10 |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-14 14:21:57 -0800 |
| Commit: f5f757e, github.com/apache/spark/pull/3239 |
| |
| [SPARK-4245][SQL] Fix containsNull of the result ArrayType of CreateArray expression. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-14 14:21:16 -0800 |
| Commit: bbd8f5b, github.com/apache/spark/pull/3110 |
| |
| [SPARK-4239] [SQL] support view in HiveQl |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-11-14 13:51:20 -0800 |
| Commit: ade72c4, github.com/apache/spark/pull/3131 |
| |
| Update failed assert text to match code in SizeEstimatorSuite |
| Jeff Hammerbacher <jeff.hammerbacher@gmail.com> |
| 2014-11-14 13:37:48 -0800 |
| Commit: c258db9, github.com/apache/spark/pull/3242 |
| |
| [SPARK-4313][WebUI][Yarn] Fix link issue of the executor thread dump page in yarn-cluster mode |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-14 13:36:13 -0800 |
| Commit: 156cf33, github.com/apache/spark/pull/3183 |
| |
| SPARK-3663 Document SPARK_LOG_DIR and SPARK_PID_DIR |
| Andrew Ash <andrew@andrewash.com> |
| 2014-11-14 13:33:35 -0800 |
| Commit: 5c265cc, github.com/apache/spark/pull/2518 |
| |
| [Spark Core] SPARK-4380 Edit spilling log from MB to B |
| Hong Shen <hongshen@tencent.com> |
| 2014-11-14 13:29:41 -0800 |
| Commit: 0c56a03, github.com/apache/spark/pull/3243 |
| |
| [SPARK-4398][PySpark] specialize sc.parallelize(xrange) |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-14 12:43:17 -0800 |
| Commit: abd5817, github.com/apache/spark/pull/3264 |
| |
| [SPARK-4394][SQL] Data Sources API Improvements |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-14 12:00:08 -0800 |
| Commit: 77e845c, github.com/apache/spark/pull/3260 |
| |
| [SPARK-3722][Docs]minor improvement and fix in docs |
| WangTao <barneystinson@aliyun.com> |
| 2014-11-14 08:09:42 -0600 |
| Commit: e421072, github.com/apache/spark/pull/2579 |
| |
| [SPARK-4310][WebUI] Sort 'Submitted' column in Stage page by time |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-13 14:37:04 -0800 |
| Commit: 825709a, github.com/apache/spark/pull/3179 |
| |
| [SPARK-4372][MLLIB] Make LR and SVM's default parameters consistent in Scala and Python |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-13 13:54:16 -0800 |
| Commit: 3221830, github.com/apache/spark/pull/3232 |
| |
| [SPARK-4326] fix unidoc |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-13 13:16:20 -0800 |
| Commit: 4b0c1ed, github.com/apache/spark/pull/3253 |
| |
| [HOT FIX] make-distribution.sh fails if Yarn shuffle jar DNE |
| Andrew Or <andrew@databricks.com> |
| 2014-11-13 11:54:45 -0800 |
| Commit: a0fa1ba, github.com/apache/spark/pull/3250 |
| |
| [SPARK-4378][MLLIB] make ALS more Java-friendly |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-13 11:42:27 -0800 |
| Commit: ca26a21, github.com/apache/spark/pull/3240 |
| |
| [SPARK-4348] [PySpark] [MLlib] rename random.py to rand.py |
| Davies Liu <davies@databricks.com> |
| 2014-11-13 10:24:54 -0800 |
| Commit: ce0333f, github.com/apache/spark/pull/3216 |
| |
| [SPARK-4256] Make Binary Evaluation Metrics functions defined in cases where there ar... |
| Andrew Bullen <andrew.bullen@workday.com> |
| 2014-11-12 22:14:44 -0800 |
| Commit: 484fecb, github.com/apache/spark/pull/3118 |
| |
| [SPARK-4370] [Core] Limit number of Netty cores based on executor size |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-12 18:46:37 -0800 |
| Commit: b9e1c2e, github.com/apache/spark/pull/3155 |
| |
| [SPARK-4373][MLLIB] fix MLlib maven tests |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-12 18:15:14 -0800 |
| Commit: 23f5bdf, github.com/apache/spark/pull/3235 |
| |
| [Release] Bring audit scripts up-to-date |
| Andrew Or <andrew@databricks.com> |
| 2014-11-13 00:30:58 +0000 |
| Commit: 723a86b |
| |
| [SPARK-2672] support compressed file in wholeTextFile |
| Davies Liu <davies@databricks.com> |
| 2014-11-12 15:58:12 -0800 |
| Commit: d7d54a4, github.com/apache/spark/pull/3005 |
| |
| [SPARK-4369] [MLLib] fix TreeModel.predict() with RDD |
| Davies Liu <davies@databricks.com> |
| 2014-11-12 13:56:41 -0800 |
| Commit: bd86118, github.com/apache/spark/pull/3230 |
| |
| [SPARK-3666] Extract interfaces for EdgeRDD and VertexRDD |
| Ankur Dave <ankurdave@gmail.com> |
| 2014-11-12 13:49:20 -0800 |
| Commit: a5ef581, github.com/apache/spark/pull/2530 |
| |
| [Release] Correct make-distribution.sh log path |
| Andrew Or <andrew@databricks.com> |
| 2014-11-12 13:46:26 -0800 |
| Commit: c3afd32 |
| |
| Internal cleanup for aggregateMessages |
| Ankur Dave <ankurdave@gmail.com> |
| 2014-11-12 13:44:49 -0800 |
| Commit: 0402be9, github.com/apache/spark/pull/3231 |
| |
| [SPARK-4281][Build] Package Yarn shuffle service into its own jar |
| Andrew Or <andrew@databricks.com> |
| 2014-11-12 13:39:45 -0800 |
| Commit: aa43a8d, github.com/apache/spark/pull/3147 |
| |
| [Test] Better exception message from SparkSubmitSuite |
| Andrew Or <andrew@databricks.com> |
| 2014-11-12 13:35:48 -0800 |
| Commit: 6e3c5a2, github.com/apache/spark/pull/3212 |
| |
| [SPARK-3660][STREAMING] Initial RDD for updateStateByKey transformation |
| Soumitra Kumar <kumar.soumitra@gmail.com> |
| 2014-11-12 12:25:31 -0800 |
| Commit: 36ddeb7, github.com/apache/spark/pull/2665 |
| |
| [SPARK-3530][MLLIB] pipeline and parameters with examples |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-12 10:38:57 -0800 |
| Commit: 4b736db, github.com/apache/spark/pull/3099 |
| |
| [SPARK-4355][MLLIB] fix OnlineSummarizer.merge when other.mean is zero |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-12 01:50:11 -0800 |
| Commit: 84324fb, github.com/apache/spark/pull/3220 |
| |
| [SPARK-3936] Add aggregateMessages, which supersedes mapReduceTriplets |
| Ankur Dave <ankurdave@gmail.com> |
| 2014-11-11 23:38:27 -0800 |
| Commit: faeb41d, github.com/apache/spark/pull/3100 |
| |
| [MLLIB] SPARK-4347: Reducing GradientBoostingSuite run time. |
| Manish Amde <manish9ue@gmail.com> |
| 2014-11-11 22:47:53 -0800 |
| Commit: 2ef016b, github.com/apache/spark/pull/3214 |
| |
| Support cross building for Scala 2.11 |
| Prashant Sharma <prashant.s@imaginea.com>, Patrick Wendell <pwendell@gmail.com> |
| 2014-11-11 21:36:48 -0800 |
| Commit: daaca14, github.com/apache/spark/pull/3159 |
| |
| [Release] Log build output for each distribution |
| Andrew Or <andrew@databricks.com> |
| 2014-11-11 18:02:59 -0800 |
| Commit: 2ddb141 |
| |
| SPARK-2269 Refactor mesos scheduler resourceOffers and add unit test |
| Timothy Chen <tnachen@gmail.com> |
| 2014-11-11 14:29:18 -0800 |
| Commit: a878660, github.com/apache/spark/pull/1487 |
| |
| [SPARK-4282][YARN] Stopping flag in YarnClientSchedulerBackend should be volatile |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-11 12:33:53 -0600 |
| Commit: 7f37188, github.com/apache/spark/pull/3143 |
| |
| SPARK-4305 [BUILD] yarn-alpha profile won't build due to network/yarn module |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-11 12:30:35 -0600 |
| Commit: f820b56, github.com/apache/spark/pull/3167 |
| |
| SPARK-1830 Deploy failover, Make Persistence engine and LeaderAgent Pluggable |
| Prashant Sharma <prashant.s@imaginea.com> |
| 2014-11-11 09:29:48 -0800 |
| Commit: deefd9d, github.com/apache/spark/pull/771 |
| |
| [Streaming][Minor]Replace some 'if-else' in Clock |
| huangzhaowei <carlmartinmax@gmail.com> |
| 2014-11-11 03:02:12 -0800 |
| Commit: 6e03de3, github.com/apache/spark/pull/3088 |
| |
| [SPARK-2492][Streaming] kafkaReceiver minor changes to align with Kafka 0.8 |
| jerryshao <saisai.shao@intel.com> |
| 2014-11-11 02:22:23 -0800 |
| Commit: c8850a3, github.com/apache/spark/pull/1420 |
| |
| [SPARK-4295][External]Fix exception in SparkSinkSuite |
| maji2014 <maji3@asiainfo.com> |
| 2014-11-11 02:18:27 -0800 |
| Commit: f8811a5, github.com/apache/spark/pull/3177 |
| |
| [SPARK-4307] Initialize FileDescriptor lazily in FileRegion. |
| Reynold Xin <rxin@databricks.com>, Reynold Xin <rxin@apache.org> |
| 2014-11-11 00:25:31 -0800 |
| Commit: ef29a9a, github.com/apache/spark/pull/3172 |
| |
| [SPARK-4324] [PySpark] [MLlib] support numpy.array for all MLlib API |
| Davies Liu <davies@databricks.com> |
| 2014-11-10 22:26:16 -0800 |
| Commit: 65083e9, github.com/apache/spark/pull/3189 |
| |
| [SPARK-4330][Doc] Link to proper URL for YARN overview |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-10 22:18:00 -0800 |
| Commit: 3c07b8f, github.com/apache/spark/pull/3196 |
| |
| [SPARK-3649] Remove GraphX custom serializers |
| Ankur Dave <ankurdave@gmail.com> |
| 2014-11-10 19:31:52 -0800 |
| Commit: 300887b, github.com/apache/spark/pull/2503 |
| |
| [SPARK-4274] [SQL] Fix NPE in printing the details of the query plan |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-10 17:46:05 -0800 |
| Commit: c764d0a, github.com/apache/spark/pull/3139 |
| |
| [SPARK-3954][Streaming] Optimization to FileInputDStream |
| surq <surq@asiainfo.com> |
| 2014-11-10 17:37:16 -0800 |
| Commit: ce6ed2a, github.com/apache/spark/pull/2811 |
| |
| [SPARK-4149][SQL] ISO 8601 support for json date time strings |
| Daoyuan Wang <daoyuan.wang@intel.com> |
| 2014-11-10 17:26:03 -0800 |
| Commit: a1fc059, github.com/apache/spark/pull/3012 |
| |
| [SPARK-4250] [SQL] Fix bug of constant null value mapping to ConstantObjectInspector |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-10 17:22:57 -0800 |
| Commit: fa77783, github.com/apache/spark/pull/3114 |
| |
| [SQL] remove a decimal case branch that has no effect at runtime |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-10 17:20:52 -0800 |
| Commit: d793d80, github.com/apache/spark/pull/3192 |
| |
| [SPARK-4308][SQL] Sets SQL operation state to ERROR when exception is thrown |
| Cheng Lian <lian@databricks.com> |
| 2014-11-10 16:56:36 -0800 |
| Commit: acb55ae, github.com/apache/spark/pull/3175 |
| |
| [SPARK-4000][Build] Uploads HiveCompatibilitySuite logs |
| Cheng Lian <lian@databricks.com> |
| 2014-11-10 16:17:52 -0800 |
| Commit: 534b231, github.com/apache/spark/pull/2993 |
| |
| [SPARK-4319][SQL] Enable an ignored test "null count". |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-10 15:55:15 -0800 |
| Commit: dbf1058, github.com/apache/spark/pull/3185 |
| |
| Revert "[SPARK-2703][Core]Make Tachyon related unit tests execute without deploying a Tachyon system locally." |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-10 14:56:06 -0800 |
| Commit: 6e7a309 |
| |
| [SPARK-4047] - Generate runtime warnings for example implementation of PageRank |
| Varadharajan Mukundan <srinathsmn@gmail.com> |
| 2014-11-10 14:32:29 -0800 |
| Commit: 974d334, github.com/apache/spark/pull/2894 |
| |
| SPARK-1297 Upgrade HBase dependency to 0.98 |
| tedyu <yuzhihong@gmail.com> |
| 2014-11-10 13:23:33 -0800 |
| Commit: b32734e, github.com/apache/spark/pull/3115 |
| |
| SPARK-4230. Doc for spark.default.parallelism is incorrect |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-10 12:40:41 -0800 |
| Commit: c6f4e70, github.com/apache/spark/pull/3107 |
| |
| [SPARK-4312] bash doesn't have "die" |
| Jey Kottalam <jey@kottalam.net> |
| 2014-11-10 12:37:56 -0800 |
| Commit: c5db8e2, github.com/apache/spark/pull/2898 |
| |
| Update RecoverableNetworkWordCount.scala |
| comcmipi <pitonak@fns.uniba.sk> |
| 2014-11-10 12:33:48 -0800 |
| Commit: 0340c56, github.com/apache/spark/pull/2735 |
| |
| SPARK-2548 [STREAMING] JavaRecoverableWordCount is missing |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-10 11:47:27 -0800 |
| Commit: 3a02d41, github.com/apache/spark/pull/2564 |
| |
| [SPARK-4169] [Core] Accommodate non-English Locales in unit tests |
| Niklas Wilcke <1wilcke@informatik.uni-hamburg.de> |
| 2014-11-10 11:37:38 -0800 |
| Commit: ed8bf1e, github.com/apache/spark/pull/3036 |
| |
| [SQL] support udt to hive types conversion (hive->udt is not supported) |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-10 11:04:12 -0800 |
| Commit: 894a724, github.com/apache/spark/pull/3164 |
| |
| [SPARK-2703][Core]Make Tachyon related unit tests execute without deploying a Tachyon system locally. |
| RongGu <gurongwalker@gmail.com> |
| 2014-11-09 23:48:15 -0800 |
| Commit: bd86cb1, github.com/apache/spark/pull/3030 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-09 23:07:14 -0800 |
| Commit: 227488d, github.com/apache/spark/pull/2898 |
| |
| SPARK-3179. Add task OutputMetrics. |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-09 22:29:03 -0800 |
| Commit: 3c2cff4, github.com/apache/spark/pull/2968 |
| |
| SPARK-1209 [CORE] (Take 2) SparkHadoop{MapRed,MapReduce}Util should not use package org.apache.hadoop |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-09 22:11:20 -0800 |
| Commit: f8e5732, github.com/apache/spark/pull/3048 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-09 18:16:20 -0800 |
| Commit: f73b56f, github.com/apache/spark/pull/464 |
| |
| SPARK-1344 [DOCS] Scala API docs for top methods |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-09 17:42:08 -0800 |
| Commit: d136265, github.com/apache/spark/pull/3168 |
| |
| SPARK-971 [DOCS] Link to Confluence wiki from project website / documentation |
| Sean Owen <sowen@cloudera.com> |
| 2014-11-09 17:40:48 -0800 |
| Commit: 8c99a47, github.com/apache/spark/pull/3169 |
| |
| [SPARK-4301] StreamingContext should not allow start() to be called after calling stop() |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-08 18:10:23 -0800 |
| Commit: 7b41b17, github.com/apache/spark/pull/3160 |
| |
| [Minor] [Core] Don't NPE on closeQuietly(null) |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-08 13:03:51 -0800 |
| Commit: 4af5c7e, github.com/apache/spark/pull/3166 |
| |
| [SPARK-4291][Build] Rename network module projects |
| Andrew Or <andrew@databricks.com> |
| 2014-11-07 23:16:13 -0800 |
| Commit: 7afc856, github.com/apache/spark/pull/3148 |
| |
| [MLLIB] [PYTHON] SPARK-4221: Expose nonnegative ALS in the python API |
| Michelangelo D'Agostino <mdagostino@civisanalytics.com> |
| 2014-11-07 22:53:01 -0800 |
| Commit: 7e9d975, github.com/apache/spark/pull/3095 |
| |
| [SPARK-4304] [PySpark] Fix sort on empty RDD |
| Davies Liu <davies@databricks.com> |
| 2014-11-07 20:53:03 -0800 |
| Commit: 7779109, github.com/apache/spark/pull/3162 |
| |
| MAINTENANCE: Automated closing of pull requests. |
| Patrick Wendell <pwendell@gmail.com> |
| 2014-11-07 13:08:25 -0800 |
| Commit: 5923dd9, github.com/apache/spark/pull/3016 |
| |
| Update JavaCustomReceiver.java |
| xiao321 <1042460381@qq.com> |
| 2014-11-07 12:56:49 -0800 |
| Commit: 7c9ec52, github.com/apache/spark/pull/3153 |
| |
| [SPARK-4292][SQL] Result set iterator bug in JDBC/ODBC |
| wangfei <wangfei1@huawei.com> |
| 2014-11-07 12:55:11 -0800 |
| Commit: d6e5552, github.com/apache/spark/pull/3149 |
| |
| [SPARK-4203][SQL] Partition directories in random order when inserting into hive table |
| Matthew Taylor <matthew.t@tbfe.net> |
| 2014-11-07 12:53:08 -0800 |
| Commit: ac70c97, github.com/apache/spark/pull/3076 |
| |
| [SPARK-4270][SQL] Fix Cast from DateType to DecimalType. |
| Takuya UESHIN <ueshin@happy-camper.st> |
| 2014-11-07 12:30:47 -0800 |
| Commit: a6405c5, github.com/apache/spark/pull/3134 |
| |
| [SPARK-4272] [SQL] Add more unwrapper functions for primitive type in TableReader |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-07 12:15:53 -0800 |
| Commit: 60ab80f, github.com/apache/spark/pull/3136 |
| |
| [SPARK-4213][SQL] ParquetFilters - No support for LT, LTE, GT, GTE operators |
| Kousuke Saruta <sarutak@oss.nttdata.co.jp> |
| 2014-11-07 11:56:40 -0800 |
| Commit: 14c54f1, github.com/apache/spark/pull/3083 |
| |
| [SQL] Modify keyword val location according to ordering |
| Jacky Li <jacky.likun@gmail.com> |
| 2014-11-07 11:52:08 -0800 |
| Commit: 68609c5, github.com/apache/spark/pull/3080 |
| |
| [SQL] Support ScalaReflection of schema in different universes |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-07 11:51:20 -0800 |
| Commit: 8154ed7, github.com/apache/spark/pull/3096 |
| |
| [SPARK-4225][SQL] Resorts to SparkContext.version to inspect Spark version |
| Cheng Lian <lian@databricks.com> |
| 2014-11-07 11:45:25 -0800 |
| Commit: 86e9eaa, github.com/apache/spark/pull/3105 |
| |
| [SQL][DOC][Minor] Spark SQL Hive now support dynamic partitioning |
| wangfei <wangfei1@huawei.com> |
| 2014-11-07 11:43:35 -0800 |
| Commit: 636d7bc, github.com/apache/spark/pull/3127 |
| |
| [SPARK-4187] [Core] Switch to binary protocol for external shuffle service messages |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-07 09:42:21 -0800 |
| Commit: d4fa04e, github.com/apache/spark/pull/3146 |
| |
| [SPARK-4204][Core][WebUI] Change Utils.exceptionString to contain the inner exceptions and make the error information in Web UI more friendly |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-06 21:52:12 -0800 |
| Commit: 3abdb1b, github.com/apache/spark/pull/3073 |
| |
| [SPARK-4236] Cleanup removed applications' files in shuffle service |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-06 19:54:32 -0800 |
| Commit: 48a19a6, github.com/apache/spark/pull/3126 |
| |
| [SPARK-4188] [Core] Perform network-level retry of shuffle file fetches |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-06 18:39:14 -0800 |
| Commit: f165b2b, github.com/apache/spark/pull/3101 |
| |
| [SPARK-4277] Support external shuffle service on Standalone Worker |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-06 17:20:46 -0800 |
| Commit: 6e9ef10, github.com/apache/spark/pull/3142 |
| |
| [SPARK-3797] Minor addendum to Yarn shuffle service |
| Andrew Or <andrew@databricks.com> |
| 2014-11-06 17:18:49 -0800 |
| Commit: 96136f2, github.com/apache/spark/pull/3144 |
| |
| [HOT FIX] Make distribution fails |
| Andrew Or <andrew@databricks.com> |
| 2014-11-06 15:31:07 -0800 |
| Commit: 470881b, github.com/apache/spark/pull/3145 |
| |
| [SPARK-4249][GraphX]fix a problem of EdgePartitionBuilder in Graphx |
| lianhuiwang <lianhuiwang09@gmail.com> |
| 2014-11-06 10:46:45 -0800 |
| Commit: d15c6e9, github.com/apache/spark/pull/3138 |
| |
| [SPARK-4264] Completion iterator should only invoke callback once |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-06 10:45:46 -0800 |
| Commit: 23eaf0e, github.com/apache/spark/pull/3128 |
| |
| [SPARK-4186] add binaryFiles and binaryRecords in Python |
| Davies Liu <davies@databricks.com> |
| 2014-11-06 00:22:19 -0800 |
| Commit: b41a39e, github.com/apache/spark/pull/3078 |
| |
| [SPARK-4255] Fix incorrect table striping |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-11-06 00:03:03 -0800 |
| Commit: 5f27ae1, github.com/apache/spark/pull/3117 |
| |
| [SPARK-4137] [EC2] Don't change working dir on user |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-11-05 20:45:35 -0800 |
| Commit: db45f5a, github.com/apache/spark/pull/2988 |
| |
| [SPARK-4262][SQL] add .schemaRDD to JavaSchemaRDD |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-05 19:56:16 -0800 |
| Commit: 3d2b5bc, github.com/apache/spark/pull/3125 |
| |
| [SPARK-4254] [mllib] MovieLensALS bug fix |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-11-05 19:51:18 -0800 |
| Commit: c315d13, github.com/apache/spark/pull/3116 |
| |
| [SPARK-4158] Fix for missing resources. |
| Brenden Matthews <brenden@diddyinc.com> |
| 2014-11-05 16:02:44 -0800 |
| Commit: cb0eae3, github.com/apache/spark/pull/3024 |
| |
| SPARK-3223 runAsSparkUser cannot change HDFS write permission properly i... |
| Jongyoul Lee <jongyoul@gmail.com> |
| 2014-11-05 15:49:42 -0800 |
| Commit: f7ac8c2, github.com/apache/spark/pull/3034 |
| |
| SPARK-4040. Update documentation to exemplify use of local (n) value, fo... |
| jayunit100 <jay@apache.org> |
| 2014-11-05 15:45:34 -0800 |
| Commit: 868cd4c, github.com/apache/spark/pull/2964 |
| |
| [SPARK-3797] Run external shuffle service in Yarn NM |
| Andrew Or <andrew@databricks.com> |
| 2014-11-05 15:42:05 -0800 |
| Commit: 61a5cce, github.com/apache/spark/pull/3082 |
| |
| SPARK-4222 [CORE] use readFully in FixedLengthBinaryRecordReader |
| industrial-sloth <industrial-sloth@users.noreply.github.com> |
| 2014-11-05 15:38:48 -0800 |
| Commit: f37817b, github.com/apache/spark/pull/3093 |
| |
| [SPARK-3984] [SPARK-3983] Fix incorrect scheduler delay and display task deserialization time in UI |
| Kay Ousterhout <kayousterhout@gmail.com> |
| 2014-11-05 15:30:31 -0800 |
| Commit: a46497e, github.com/apache/spark/pull/2832 |
| |
| [SPARK-4242] [Core] Add SASL to external shuffle service |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-05 14:38:43 -0800 |
| Commit: 4c42986, github.com/apache/spark/pull/3108 |
| |
| [SPARK-4197] [mllib] GradientBoosting API cleanup and examples in Scala, Java |
| Joseph K. Bradley <joseph@databricks.com> |
| 2014-11-05 10:33:13 -0800 |
| Commit: 5b3b6f6, github.com/apache/spark/pull/3094 |
| |
| [SPARK-4029][Streaming] Update streaming driver to reliably save and recover received block metadata on driver failures |
| Tathagata Das <tathagata.das1565@gmail.com> |
| 2014-11-05 01:21:53 -0800 |
| Commit: 5f13759, github.com/apache/spark/pull/3026 |
| |
| [SPARK-3964] [MLlib] [PySpark] add Hypothesis test Python API |
| Davies Liu <davies@databricks.com> |
| 2014-11-04 21:35:52 -0800 |
| Commit: c8abddc, github.com/apache/spark/pull/3091 |
| |
| [SQL] Add String option for DSL AS |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-04 18:14:28 -0800 |
| Commit: 515abb9, github.com/apache/spark/pull/3097 |
| |
| [SPARK-2938] Support SASL authentication in NettyBlockTransferService |
| Aaron Davidson <aaron@databricks.com> |
| 2014-11-04 16:15:38 -0800 |
| Commit: 5e73138, github.com/apache/spark/pull/3087 |
| |
| [Spark-4060] [MLlib] exposing special rdd functions to the public |
| Niklas Wilcke <1wilcke@informatik.uni-hamburg.de> |
| 2014-11-04 09:57:03 -0800 |
| Commit: f90ad5d, github.com/apache/spark/pull/2907 |
| |
| fixed MLlib Naive-Bayes java example bug |
| Dariusz Kobylarz <darek.kobylarz@gmail.com> |
| 2014-11-04 09:53:43 -0800 |
| Commit: bcecd73, github.com/apache/spark/pull/3081 |
| |
| [SPARK-3886] [PySpark] simplify serializer, use AutoBatchedSerializer by default. |
| Davies Liu <davies@databricks.com> |
| 2014-11-03 23:56:14 -0800 |
| Commit: e4f4263, github.com/apache/spark/pull/2920 |
| |
| [SPARK-4166][Core] Add a backward compatibility test for ExecutorLostFailure |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-03 22:47:45 -0800 |
| Commit: b671ce0, github.com/apache/spark/pull/3085 |
| |
| [SPARK-4163][Core] Add a backward compatibility test for FetchFailed |
| zsxwing <zsxwing@gmail.com> |
| 2014-11-03 22:40:43 -0800 |
| Commit: 9bdc841, github.com/apache/spark/pull/3086 |
| |
| [SPARK-3573][MLLIB] Make MLlib's Vector compatible with SQL's SchemaRDD |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-03 22:29:48 -0800 |
| Commit: 1a9c6cd, github.com/apache/spark/pull/3070 |
| |
| [SPARK-4192][SQL] Internal API for Python UDT |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-03 19:29:11 -0800 |
| Commit: 04450d1, github.com/apache/spark/pull/3068 |
| |
| [FIX][MLLIB] fix seed in BaggedPointSuite |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-03 18:50:37 -0800 |
| Commit: c5912ec, github.com/apache/spark/pull/3084 |
| |
| [SPARK-611] Display executor thread dumps in web UI |
| Josh Rosen <joshrosen@databricks.com> |
| 2014-11-03 18:18:47 -0800 |
| Commit: 4f035dd, github.com/apache/spark/pull/2944 |
| |
| [SPARK-4168][WebUI] web stages number should show correctly when stages are more than 1000 |
| Zhang, Liye <liye.zhang@intel.com> |
| 2014-11-03 18:17:32 -0800 |
| Commit: 97a466e, github.com/apache/spark/pull/3035 |
| |
| [SQL] Convert arguments to Scala UDFs |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-03 18:04:51 -0800 |
| Commit: 15b58a2, github.com/apache/spark/pull/3077 |
| |
| SPARK-4178. Hadoop input metrics ignore bytes read in RecordReader insta... |
| Sandy Ryza <sandy@cloudera.com> |
| 2014-11-03 15:19:01 -0800 |
| Commit: 2812815, github.com/apache/spark/pull/3045 |
| |
| [SQL] More aggressive defaults |
| Michael Armbrust <michael@databricks.com> |
| 2014-11-03 14:08:27 -0800 |
| Commit: 25bef7e, github.com/apache/spark/pull/3064 |
| |
| [SPARK-4152] [SQL] Avoid data change in CTAS while table already existed |
| Cheng Hao <hao.cheng@intel.com> |
| 2014-11-03 13:59:43 -0800 |
| Commit: e83f13e, github.com/apache/spark/pull/3013 |
| |
| [SPARK-4202][SQL] Simple DSL support for Scala UDF |
| Cheng Lian <lian@databricks.com> |
| 2014-11-03 13:20:33 -0800 |
| Commit: c238fb4, github.com/apache/spark/pull/3067 |
| |
| [SPARK-3594] [PySpark] [SQL] take more rows to infer schema or sampling |
| Davies Liu <davies.liu@gmail.com>, Davies Liu <davies@databricks.com> |
| 2014-11-03 13:17:09 -0800 |
| Commit: 24544fb, github.com/apache/spark/pull/2716 |
| |
| [SPARK-4207][SQL] Query which has syntax like 'not like' is not working in Spark SQL |
| ravipesala <ravindra.pesala@huawei.com> |
| 2014-11-03 13:07:41 -0800 |
| Commit: 2b6e1ce, github.com/apache/spark/pull/3075 |
| |
| [SPARK-4211][Build] Fixes hive.version in Maven profile hive-0.13.1 |
| fi <coderfi@gmail.com> |
| 2014-11-03 12:56:56 -0800 |
| Commit: df607da, github.com/apache/spark/pull/3072 |
| |
| [SPARK-4148][PySpark] fix seed distribution and add some tests for rdd.sample |
| Xiangrui Meng <meng@databricks.com> |
| 2014-11-03 12:24:24 -0800 |
| Commit: 3cca196, github.com/apache/spark/pull/3010 |
| |
| [EC2] Factor out Mesos spark-ec2 branch |
| Nicholas Chammas <nicholas.chammas@gmail.com> |
| 2014-11-03 09:02:35 -0800 |
| Commit: 2aca97c, github.com/apache/spark/pull/3008 |
| |