| |
| <!DOCTYPE html> |
| <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]--> |
| <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]--> |
| <!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]--> |
| <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> |
| <head> |
| <meta charset="utf-8"> |
| <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> |
| <title>Old Migration Guides - MLlib - Spark 2.1.2 Documentation</title> |
| |
| <meta name="description" content="MLlib migration guides from before Spark 2.1.2"> |
| |
| |
| |
| |
| <link rel="stylesheet" href="css/bootstrap.min.css"> |
| <style> |
| body { |
| padding-top: 60px; |
| padding-bottom: 40px; |
| } |
| </style> |
| <meta name="viewport" content="width=device-width"> |
| <link rel="stylesheet" href="css/bootstrap-responsive.min.css"> |
| <link rel="stylesheet" href="css/main.css"> |
| |
| <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script> |
| |
| <link rel="stylesheet" href="css/pygments-default.css"> |
| |
| |
| <!-- Google analytics script --> |
| <script type="text/javascript"> |
| var _gaq = _gaq || []; |
| _gaq.push(['_setAccount', 'UA-32518208-2']); |
| _gaq.push(['_trackPageview']); |
| |
| (function() { |
| var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; |
| ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; |
| var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); |
| })(); |
| </script> |
| |
| |
| </head> |
| <body> |
| <!--[if lt IE 7]> |
| <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p> |
| <![endif]--> |
| |
| <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html --> |
| |
| <div class="navbar navbar-fixed-top" id="topbar"> |
| <div class="navbar-inner"> |
| <div class="container"> |
| <div class="brand"><a href="index.html"> |
| <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">2.1.2</span> |
| </div> |
| <ul class="nav"> |
| <!--TODO(andyk): Add class="active" attribute to li some how.--> |
| <li><a href="index.html">Overview</a></li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="quick-start.html">Quick Start</a></li> |
| <li><a href="programming-guide.html">Spark Programming Guide</a></li> |
| <li class="divider"></li> |
| <li><a href="streaming-programming-guide.html">Spark Streaming</a></li> |
| <li><a href="sql-programming-guide.html">DataFrames, Datasets and SQL</a></li> |
| <li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li> |
| <li><a href="ml-guide.html">MLlib (Machine Learning)</a></li> |
| <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li> |
| <li><a href="sparkr.html">SparkR (R on Spark)</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li> |
| <li><a href="api/java/index.html">Java</a></li> |
| <li><a href="api/python/index.html">Python</a></li> |
| <li><a href="api/R/index.html">R</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="cluster-overview.html">Overview</a></li> |
| <li><a href="submitting-applications.html">Submitting Applications</a></li> |
| <li class="divider"></li> |
| <li><a href="spark-standalone.html">Spark Standalone</a></li> |
| <li><a href="running-on-mesos.html">Mesos</a></li> |
| <li><a href="running-on-yarn.html">YARN</a></li> |
| </ul> |
| </li> |
| |
| <li class="dropdown"> |
| <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a> |
| <ul class="dropdown-menu"> |
| <li><a href="configuration.html">Configuration</a></li> |
| <li><a href="monitoring.html">Monitoring</a></li> |
| <li><a href="tuning.html">Tuning Guide</a></li> |
| <li><a href="job-scheduling.html">Job Scheduling</a></li> |
| <li><a href="security.html">Security</a></li> |
| <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li> |
| <li class="divider"></li> |
| <li><a href="building-spark.html">Building Spark</a></li> |
| <li><a href="http://spark.apache.org/contributing.html">Contributing to Spark</a></li> |
| <li><a href="http://spark.apache.org/third-party-projects.html">Third Party Projects</a></li> |
| </ul> |
| </li> |
| </ul> |
| <!--<p class="navbar-text pull-right"><span class="version-text">v2.1.2</span></p>--> |
| </div> |
| </div> |
| </div> |
| |
| <div class="container-wrapper"> |
| |
| |
| <div class="left-menu-wrapper"> |
| <div class="left-menu"> |
| <h3><a href="ml-guide.html">MLlib: Main Guide</a></h3> |
| |
| <ul> |
| |
| <li> |
| <a href="ml-pipeline.html"> |
| |
| Pipelines |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-features.html"> |
| |
| Extracting, transforming and selecting features |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-classification-regression.html"> |
| |
| Classification and Regression |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-clustering.html"> |
| |
| Clustering |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-collaborative-filtering.html"> |
| |
| Collaborative filtering |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-tuning.html"> |
| |
| Model selection and tuning |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="ml-advanced.html"> |
| |
| Advanced topics |
| |
| </a> |
| </li> |
| |
| |
| </ul> |
| |
| <h3><a href="mllib-guide.html">MLlib: RDD-based API Guide</a></h3> |
| |
| <ul> |
| |
| <li> |
| <a href="mllib-data-types.html"> |
| |
| Data types |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-statistics.html"> |
| |
| Basic statistics |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-classification-regression.html"> |
| |
| Classification and regression |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-collaborative-filtering.html"> |
| |
| Collaborative filtering |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-clustering.html"> |
| |
| Clustering |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-dimensionality-reduction.html"> |
| |
| Dimensionality reduction |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-feature-extraction.html"> |
| |
| Feature extraction and transformation |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-frequent-pattern-mining.html"> |
| |
| Frequent pattern mining |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-evaluation-metrics.html"> |
| |
| Evaluation metrics |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-pmml-model-export.html"> |
| |
| PMML model export |
| |
| </a> |
| </li> |
| |
| |
| <li> |
| <a href="mllib-optimization.html"> |
| |
| Optimization (developer) |
| |
| </a> |
| </li> |
| |
| |
| </ul> |
| |
| </div> |
| </div> |
| <input id="nav-trigger" class="nav-trigger" checked type="checkbox"> |
| <label for="nav-trigger"></label> |
| <div class="content-with-sidebar" id="content"> |
| |
| <h1 class="title">Old Migration Guides - MLlib</h1> |
| |
| |
| <p>The migration guide for the current Spark version is kept on the <a href="ml-guide.html#migration-guide">MLlib Guide main page</a>.</p> |
| |
| <h2 id="from-16-to-20">From 1.6 to 2.0</h2> |
| |
| <h3 id="breaking-changes">Breaking changes</h3> |
| |
| <p>There were several breaking changes in Spark 2.0, which are outlined below.</p> |
| |
| <p><strong>Linear algebra classes for DataFrame-based APIs</strong></p> |
| |
| <p>Spark’s linear algebra dependencies were moved to a new project, <code>mllib-local</code> |
| (see <a href="https://issues.apache.org/jira/browse/SPARK-13944">SPARK-13944</a>). |
| As part of this change, the linear algebra classes were copied to a new package, <code>spark.ml.linalg</code>. |
| The DataFrame-based APIs in <code>spark.ml</code> now depend on the <code>spark.ml.linalg</code> classes, |
| leading to a few breaking changes, predominantly in various model classes |
| (see <a href="https://issues.apache.org/jira/browse/SPARK-14810">SPARK-14810</a> for a full list).</p> |
| |
| <p><strong>Note:</strong> the RDD-based APIs in <code>spark.mllib</code> continue to depend on the previous package <code>spark.mllib.linalg</code>.</p> |
| |
| <p><em>Converting vectors and matrices</em></p> |
| |
<p>While most pipeline components support backward compatibility for loading,
some existing <code>DataFrames</code> and pipelines from Spark versions prior to 2.0 that contain vector or matrix
columns may need to be migrated to the new <code>spark.ml</code> vector and matrix types.
| Utilities for converting <code>DataFrame</code> columns from <code>spark.mllib.linalg</code> to <code>spark.ml.linalg</code> types |
| (and vice versa) can be found in <code>spark.mllib.util.MLUtils</code>.</p> |
| |
| <p>There are also utility methods available for converting single instances of |
| vectors and matrices. Use the <code>asML</code> method on a <code>mllib.linalg.Vector</code> / <code>mllib.linalg.Matrix</code> |
| for converting to <code>ml.linalg</code> types, and |
| <code>mllib.linalg.Vectors.fromML</code> / <code>mllib.linalg.Matrices.fromML</code> |
| for converting to <code>mllib.linalg</code> types.</p> |
| |
| <div class="codetabs"> |
| <div data-lang="scala"> |
| |
| <figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span> |
| |
| <span class="c1">// convert DataFrame columns</span> |
| <span class="k">val</span> <span class="n">convertedVecDF</span> <span class="k">=</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">convertVectorColumnsToML</span><span class="o">(</span><span class="n">vecDF</span><span class="o">)</span> |
| <span class="k">val</span> <span class="n">convertedMatrixDF</span> <span class="k">=</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="n">convertMatrixColumnsToML</span><span class="o">(</span><span class="n">matrixDF</span><span class="o">)</span> |
| <span class="c1">// convert a single vector or matrix</span> |
| <span class="k">val</span> <span class="n">mlVec</span><span class="k">:</span> <span class="kt">org.apache.spark.ml.linalg.Vector</span> <span class="o">=</span> <span class="n">mllibVec</span><span class="o">.</span><span class="n">asML</span> |
| <span class="k">val</span> <span class="n">mlMat</span><span class="k">:</span> <span class="kt">org.apache.spark.ml.linalg.Matrix</span> <span class="o">=</span> <span class="n">mllibMat</span><span class="o">.</span><span class="n">asML</span></code></pre></figure> |
| |
| <p>Refer to the <a href="api/scala/index.html#org.apache.spark.mllib.util.MLUtils$"><code>MLUtils</code> Scala docs</a> for further detail.</p> |
| </div> |
| |
| <div data-lang="java"> |
| |
| <figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span><span class="o">;</span> |
| <span class="kn">import</span> <span class="nn">org.apache.spark.sql.Dataset</span><span class="o">;</span> |
| |
| <span class="c1">// convert DataFrame columns</span> |
| <span class="n">Dataset</span><span class="o"><</span><span class="n">Row</span><span class="o">></span> <span class="n">convertedVecDF</span> <span class="o">=</span> <span class="n">MLUtils</span><span class="o">.</span><span class="na">convertVectorColumnsToML</span><span class="o">(</span><span class="n">vecDF</span><span class="o">);</span> |
| <span class="n">Dataset</span><span class="o"><</span><span class="n">Row</span><span class="o">></span> <span class="n">convertedMatrixDF</span> <span class="o">=</span> <span class="n">MLUtils</span><span class="o">.</span><span class="na">convertMatrixColumnsToML</span><span class="o">(</span><span class="n">matrixDF</span><span class="o">);</span> |
| <span class="c1">// convert a single vector or matrix</span> |
| <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">spark</span><span class="o">.</span><span class="na">ml</span><span class="o">.</span><span class="na">linalg</span><span class="o">.</span><span class="na">Vector</span> <span class="n">mlVec</span> <span class="o">=</span> <span class="n">mllibVec</span><span class="o">.</span><span class="na">asML</span><span class="o">();</span> |
| <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">spark</span><span class="o">.</span><span class="na">ml</span><span class="o">.</span><span class="na">linalg</span><span class="o">.</span><span class="na">Matrix</span> <span class="n">mlMat</span> <span class="o">=</span> <span class="n">mllibMat</span><span class="o">.</span><span class="na">asML</span><span class="o">();</span></code></pre></figure> |
| |
| <p>Refer to the <a href="api/java/org/apache/spark/mllib/util/MLUtils.html"><code>MLUtils</code> Java docs</a> for further detail.</p> |
| </div> |
| |
| <div data-lang="python"> |
| |
| <figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">pyspark.mllib.util</span> <span class="kn">import</span> <span class="n">MLUtils</span> |
| |
| <span class="c"># convert DataFrame columns</span> |
| <span class="n">convertedVecDF</span> <span class="o">=</span> <span class="n">MLUtils</span><span class="o">.</span><span class="n">convertVectorColumnsToML</span><span class="p">(</span><span class="n">vecDF</span><span class="p">)</span> |
| <span class="n">convertedMatrixDF</span> <span class="o">=</span> <span class="n">MLUtils</span><span class="o">.</span><span class="n">convertMatrixColumnsToML</span><span class="p">(</span><span class="n">matrixDF</span><span class="p">)</span> |
| <span class="c"># convert a single vector or matrix</span> |
| <span class="n">mlVec</span> <span class="o">=</span> <span class="n">mllibVec</span><span class="o">.</span><span class="n">asML</span><span class="p">()</span> |
| <span class="n">mlMat</span> <span class="o">=</span> <span class="n">mllibMat</span><span class="o">.</span><span class="n">asML</span><span class="p">()</span></code></pre></figure> |
| |
| <p>Refer to the <a href="api/python/pyspark.mllib.html#pyspark.mllib.util.MLUtils"><code>MLUtils</code> Python docs</a> for further detail.</p> |
| </div> |
| </div> |
| |
| <p><strong>Deprecated methods removed</strong></p> |
| |
| <p>Several deprecated methods were removed in the <code>spark.mllib</code> and <code>spark.ml</code> packages:</p> |
| |
| <ul> |
| <li><code>setScoreCol</code> in <code>ml.evaluation.BinaryClassificationEvaluator</code></li> |
| <li><code>weights</code> in <code>LinearRegression</code> and <code>LogisticRegression</code> in <code>spark.ml</code></li> |
| <li><code>setMaxNumIterations</code> in <code>mllib.optimization.LBFGS</code> (marked as <code>DeveloperApi</code>)</li> |
| <li><code>treeReduce</code> and <code>treeAggregate</code> in <code>mllib.rdd.RDDFunctions</code> (these functions are available on <code>RDD</code>s directly, and were marked as <code>DeveloperApi</code>)</li> |
| <li><code>defaultStategy</code> in <code>mllib.tree.configuration.Strategy</code></li> |
| <li><code>build</code> in <code>mllib.tree.Node</code></li> |
      <li>The libsvm loaders for multiclass classification and the load/save methods for labeled data in <code>mllib.util.MLUtils</code></li>
| </ul> |
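<p>The removed <code>treeReduce</code> and <code>treeAggregate</code> wrappers combine partition results in rounds of pairwise merges rather than folding everything linearly on the driver. A minimal, Spark-free Python sketch of that tree pattern (the function name and signature here are illustrative, not Spark's API):</p>

```python
from functools import reduce

def tree_aggregate(partitions, seq_op, comb_op, zero):
    """Aggregate each partition locally, then merge the partial results
    in rounds of pairwise combines (a tree of depth ~log2(n))."""
    # Per-partition aggregation, as each executor would do locally.
    partials = [reduce(seq_op, part, zero) for part in partitions]
    # Pairwise merge rounds instead of one linear fold on the driver.
    while len(partials) > 1:
        merged = []
        for i in range(0, len(partials) - 1, 2):
            merged.append(comb_op(partials[i], partials[i + 1]))
        if len(partials) % 2 == 1:
            merged.append(partials[-1])  # odd partial carried to next round
        partials = merged
    return partials[0]

total = tree_aggregate([[1, 2], [3, 4], [5, 6]],
                       seq_op=lambda acc, x: acc + x,
                       comb_op=lambda a, b: a + b,
                       zero=0)
print(total)  # 21
```

The same pattern is available on <code>RDD</code>s directly via their own <code>treeAggregate</code> method, which is why the <code>RDDFunctions</code> copies were removed.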
| |
| <p>A full list of breaking changes can be found at <a href="https://issues.apache.org/jira/browse/SPARK-14810">SPARK-14810</a>.</p> |
| |
| <h3 id="deprecations-and-changes-of-behavior">Deprecations and changes of behavior</h3> |
| |
| <p><strong>Deprecations</strong></p> |
| |
| <p>Deprecations in the <code>spark.mllib</code> and <code>spark.ml</code> packages include:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-14984">SPARK-14984</a>: |
| In <code>spark.ml.regression.LinearRegressionSummary</code>, the <code>model</code> field has been deprecated.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-13784">SPARK-13784</a>: |
| In <code>spark.ml.regression.RandomForestRegressionModel</code> and <code>spark.ml.classification.RandomForestClassificationModel</code>, |
    the <code>numTrees</code> parameter has been deprecated in favor of the <code>getNumTrees</code> method.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-13761">SPARK-13761</a>: |
| In <code>spark.ml.param.Params</code>, the <code>validateParams</code> method has been deprecated. |
    Move any functionality from overridden <code>validateParams</code> implementations into the corresponding <code>transformSchema</code>.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-14829">SPARK-14829</a>: |
| In <code>spark.mllib</code> package, <code>LinearRegressionWithSGD</code>, <code>LassoWithSGD</code>, <code>RidgeRegressionWithSGD</code> and <code>LogisticRegressionWithSGD</code> have been deprecated. |
    We encourage users to use <code>spark.ml.regression.LinearRegression</code> and <code>spark.ml.classification.LogisticRegression</code> instead.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-14900">SPARK-14900</a>: |
| In <code>spark.mllib.evaluation.MulticlassMetrics</code>, the parameters <code>precision</code>, <code>recall</code> and <code>fMeasure</code> have been deprecated in favor of <code>accuracy</code>.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-15644">SPARK-15644</a>: |
| In <code>spark.ml.util.MLReader</code> and <code>spark.ml.util.MLWriter</code>, the <code>context</code> method has been deprecated in favor of <code>session</code>.</li> |
| <li>In <code>spark.ml.feature.ChiSqSelectorModel</code>, the <code>setLabelCol</code> method has been deprecated since it was not used by <code>ChiSqSelectorModel</code>.</li> |
| </ul> |
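<p>The <code>precision</code>, <code>recall</code> and <code>fMeasure</code> parameters were deprecated because, for single-label multiclass prediction, the micro-averaged versions of all three collapse to plain accuracy: every wrong prediction is simultaneously a false positive for the predicted class and a false negative for the true class. A pure-Python check of that identity (an illustration, not Spark's implementation):</p>

```python
def accuracy(y_true, y_pred):
    """Fraction of rows where the predicted label matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_precision(y_true, y_pred):
    """Micro-averaged precision: pooled TP / (pooled TP + pooled FP).
    With exactly one predicted label per row, TP + FP equals the row
    count, so this equals accuracy (and micro recall and micro F1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return tp / (tp + fp)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(accuracy(y_true, y_pred))        # 4 of 6 correct
print(micro_precision(y_true, y_pred)) # same value
```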
| |
| <p><strong>Changes of behavior</strong></p> |
| |
| <p>Changes of behavior in the <code>spark.mllib</code> and <code>spark.ml</code> packages include:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-7780">SPARK-7780</a>: |
    <code>spark.mllib.classification.LogisticRegressionWithLBFGS</code> now calls <code>spark.ml.classification.LogisticRegression</code> directly for binary classification.
    This introduces the following behavior changes for <code>spark.mllib.classification.LogisticRegressionWithLBFGS</code>:
| <ul> |
      <li>The intercept will not be regularized when training a binary classification model with an L1/L2 <code>Updater</code>.</li>
      <li>When trained without regularization, models fit with or without feature scaling converge to the same solution at the same rate.</li>
| </ul> |
| </li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-13429">SPARK-13429</a>: |
    To produce results consistent with <code>spark.ml.classification.LogisticRegression</code>,
    the default value of <code>convergenceTol</code> in <code>spark.mllib.classification.LogisticRegressionWithLBFGS</code> has been changed from 1E-4 to 1E-6.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-12363">SPARK-12363</a>: |
    Fixed a bug in <code>PowerIterationClustering</code> that will likely change its results.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-13048">SPARK-13048</a>: |
| <code>LDA</code> using the <code>EM</code> optimizer will keep the last checkpoint by default, if checkpointing is being used.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-12153">SPARK-12153</a>: |
| <code>Word2Vec</code> now respects sentence boundaries. Previously, it did not handle them correctly.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-10574">SPARK-10574</a>: |
    <code>HashingTF</code> now uses <code>MurmurHash3</code> as the default hash algorithm in both <code>spark.ml</code> and <code>spark.mllib</code>.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-14768">SPARK-14768</a>: |
| The <code>expectedType</code> argument for PySpark <code>Param</code> was removed.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-14931">SPARK-14931</a>: |
| Some default <code>Param</code> values, which were mismatched between pipelines in Scala and Python, have been changed.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-13600">SPARK-13600</a>: |
    <code>QuantileDiscretizer</code> now uses <code>spark.sql.DataFrameStatFunctions.approxQuantile</code> to find splits (previously it used custom sampling logic).
    The output buckets will differ for the same input data and parameters.</li>
| </ul> |
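<p>The effect of tightening <code>convergenceTol</code> from 1E-4 to 1E-6 is simply that the optimizer runs more iterations before declaring convergence, landing closer to the optimum. A hedged, Spark-free illustration using plain gradient descent on <code>f(w) = (w - 3)^2</code> with a simplified relative-step stopping rule (not Spark's exact convergence test):</p>

```python
def minimize(tol, w0=0.0, lr=0.1, max_iter=10000):
    """Gradient descent on f(w) = (w - 3)^2; stop when the update step
    is small relative to the current iterate."""
    w = w0
    for it in range(1, max_iter + 1):
        step = lr * 2 * (w - 3)  # gradient of (w - 3)^2 is 2(w - 3)
        w -= step
        if abs(step) <= tol * max(1.0, abs(w)):
            return w, it
    return w, max_iter

w_loose, iters_loose = minimize(tol=1e-4)
w_tight, iters_tight = minimize(tol=1e-6)
print(iters_loose, iters_tight)             # tighter tolerance -> more iterations
print(abs(w_tight - 3) < abs(w_loose - 3))  # ... and a more accurate solution
```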
| |
| <h2 id="from-15-to-16">From 1.5 to 1.6</h2> |
| |
| <p>There are no breaking API changes in the <code>spark.mllib</code> or <code>spark.ml</code> packages, but there are |
| deprecations and changes of behavior.</p> |
| |
| <p>Deprecations:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-11358">SPARK-11358</a>: |
| In <code>spark.mllib.clustering.KMeans</code>, the <code>runs</code> parameter has been deprecated.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-10592">SPARK-10592</a>: |
| In <code>spark.ml.classification.LogisticRegressionModel</code> and |
| <code>spark.ml.regression.LinearRegressionModel</code>, the <code>weights</code> field has been deprecated in favor of |
| the new name <code>coefficients</code>. This helps disambiguate from instance (row) “weights” given to |
| algorithms.</li> |
| </ul> |
| |
| <p>Changes of behavior:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-7770">SPARK-7770</a>: |
| <code>spark.mllib.tree.GradientBoostedTrees</code>: <code>validationTol</code> has changed semantics in 1.6. |
| Previously, it was a threshold for absolute change in error. Now, it resembles the behavior of |
| <code>GradientDescent</code>’s <code>convergenceTol</code>: For large errors, it uses relative error (relative to the |
| previous error); for small errors (<code>< 0.01</code>), it uses absolute error.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-11069">SPARK-11069</a>: |
| <code>spark.ml.feature.RegexTokenizer</code>: Previously, it did not convert strings to lowercase before |
| tokenizing. Now, it converts to lowercase by default, with an option not to. This matches the |
| behavior of the simpler <code>Tokenizer</code> transformer.</li> |
| </ul> |
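<p>The new <code>validationTol</code> semantics described above can be sketched as a small predicate: relative improvement is required while errors are large, absolute improvement once they are small. This is an illustration of the stated rule in plain Python, not the actual <code>GradientBoostedTrees</code> code:</p>

```python
def improved_enough(prev_error, curr_error, validation_tol=1e-5):
    """1.6-style check: relative improvement for large errors,
    absolute improvement once errors are small (< 0.01)."""
    improvement = prev_error - curr_error
    if prev_error < 0.01:
        return improvement > validation_tol              # absolute criterion
    return improvement > validation_tol * prev_error     # relative criterion

# Large errors: a tiny absolute drop is no longer "enough".
print(improved_enough(10.0, 9.999999))  # False: 1e-6 below 1e-5 * 10
# Small errors: the same absolute drop can still count.
print(improved_enough(0.005, 0.004))    # True: 1e-3 above 1e-5
```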
| |
| <h2 id="from-14-to-15">From 1.4 to 1.5</h2> |
| |
| <p>In the <code>spark.mllib</code> package, there are no breaking API changes but several behavior changes:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-9005">SPARK-9005</a>: |
| <code>RegressionMetrics.explainedVariance</code> returns the average regression sum of squares.</li> |
  <li><a href="https://issues.apache.org/jira/browse/SPARK-8600">SPARK-8600</a>: <code>NaiveBayesModel.labels</code> are now
    sorted.</li>
| <li><a href="https://issues.apache.org/jira/browse/SPARK-3382">SPARK-3382</a>: <code>GradientDescent</code> has a default |
    convergence tolerance of <code>1e-3</code>, so iterations may end earlier than in 1.4.</li>
| </ul> |
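<p>The <code>explainedVariance</code> change in the first item can be written out directly: assuming the usual definition of the regression sum of squares, the returned value is the mean of <code>(prediction - label_mean)^2</code>. A plain-Python restatement of that formula (not Spark's implementation):</p>

```python
def explained_variance(y_true, y_pred):
    """Average regression sum of squares: mean((y_pred_i - mean(y_true))^2)."""
    y_bar = sum(y_true) / len(y_true)
    return sum((p - y_bar) ** 2 for p in y_pred) / len(y_pred)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 2.0, 2.5, 4.0]  # some model's predictions
print(explained_variance(y_true, y_pred))  # 0.875
```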
| |
| <p>In the <code>spark.ml</code> package, there exists one breaking API change and one behavior change:</p> |
| |
| <ul> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-9268">SPARK-9268</a>: Java’s varargs support is removed |
| from <code>Params.setDefault</code> due to a |
| <a href="https://issues.scala-lang.org/browse/SI-9013">Scala compiler bug</a>.</li> |
| <li><a href="https://issues.apache.org/jira/browse/SPARK-10097">SPARK-10097</a>: <code>Evaluator.isLargerBetter</code> is |
| added to indicate metric ordering. Metrics like RMSE no longer flip signs as in 1.4.</li> |
| </ul> |
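<p>The point of <code>Evaluator.isLargerBetter</code> is that model selection compares raw metric values and lets a flag decide the direction, instead of negating error metrics such as RMSE. A Spark-free sketch of that selection logic (the helper name and toy "models" are illustrative):</p>

```python
def best_model(candidates, metric, is_larger_better):
    """Pick the candidate with the best raw metric value; the flag,
    not a sign flip on the metric, decides the ordering."""
    chooser = max if is_larger_better else min
    return chooser(candidates, key=metric)

# RMSE per candidate "model" (here just labelled numbers): smaller is better.
rmse = {"a": 0.9, "b": 0.4, "c": 0.7}
print(best_model(rmse, metric=rmse.get, is_larger_better=False))  # 'b'

# Accuracy: larger is better.
acc = {"a": 0.62, "b": 0.81, "c": 0.77}
print(best_model(acc, metric=acc.get, is_larger_better=True))     # 'b'
```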
| |
| <h2 id="from-13-to-14">From 1.3 to 1.4</h2> |
| |
| <p>In the <code>spark.mllib</code> package, there were several breaking changes, but all in <code>DeveloperApi</code> or <code>Experimental</code> APIs:</p> |
| |
| <ul> |
| <li>Gradient-Boosted Trees |
| <ul> |
      <li><em>(Breaking change)</em> The signature of the <a href="api/scala/index.html#org.apache.spark.mllib.tree.loss.Loss"><code>Loss.gradient</code></a> method was changed. This is only an issue for users who wrote their own losses for GBTs.</li>
| <li><em>(Breaking change)</em> The <code>apply</code> and <code>copy</code> methods for the case class <a href="api/scala/index.html#org.apache.spark.mllib.tree.configuration.BoostingStrategy"><code>BoostingStrategy</code></a> have been changed because of a modification to the case class fields. This could be an issue for users who use <code>BoostingStrategy</code> to set GBT parameters.</li> |
| </ul> |
| </li> |
| <li><em>(Breaking change)</em> The return value of <a href="api/scala/index.html#org.apache.spark.mllib.clustering.LDA"><code>LDA.run</code></a> has changed. It now returns an abstract class <code>LDAModel</code> instead of the concrete class <code>DistributedLDAModel</code>. The object of type <code>LDAModel</code> can still be cast to the appropriate concrete type, which depends on the optimization algorithm.</li> |
| </ul> |
| |
| <p>In the <code>spark.ml</code> package, several major API changes occurred, including:</p> |
| |
| <ul> |
| <li><code>Param</code> and other APIs for specifying parameters</li> |
| <li><code>uid</code> unique IDs for Pipeline components</li> |
| <li>Reorganization of certain classes</li> |
| </ul> |
| |
| <p>Since the <code>spark.ml</code> API was an alpha component in Spark 1.3, we do not list all changes here. |
| However, since 1.4 <code>spark.ml</code> is no longer an alpha component, we will provide details on any API |
| changes for future releases.</p> |
| |
| <h2 id="from-12-to-13">From 1.2 to 1.3</h2> |
| |
| <p>In the <code>spark.mllib</code> package, there were several breaking changes. The first change (in <code>ALS</code>) is the only one in a component not marked as Alpha or Experimental.</p> |
| |
| <ul> |
| <li><em>(Breaking change)</em> In <a href="api/scala/index.html#org.apache.spark.mllib.recommendation.ALS"><code>ALS</code></a>, the extraneous method <code>solveLeastSquares</code> has been removed. The <code>DeveloperApi</code> method <code>analyzeBlocks</code> was also removed.</li> |
| <li><em>(Breaking change)</em> <a href="api/scala/index.html#org.apache.spark.mllib.feature.StandardScalerModel"><code>StandardScalerModel</code></a> remains an Alpha component. In it, the <code>variance</code> method has been replaced with the <code>std</code> method. To compute the column variance values returned by the original <code>variance</code> method, simply square the standard deviation values returned by <code>std</code>.</li> |
| <li><em>(Breaking change)</em> <a href="api/scala/index.html#org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD"><code>StreamingLinearRegressionWithSGD</code></a> remains an Experimental component. In it, there were two changes: |
| <ul> |
| <li>The constructor taking arguments was removed in favor of a builder pattern using the default constructor plus parameter setter methods.</li> |
| <li>Variable <code>model</code> is no longer public.</li> |
| </ul> |
| </li> |
| <li><em>(Breaking change)</em> <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a> remains an Experimental component. In it and its associated classes, there were several changes: |
| <ul> |
| <li>In <code>DecisionTree</code>, the deprecated class method <code>train</code> has been removed. (The object/static <code>train</code> methods remain.)</li> |
      <li>In <code>Strategy</code>, the <code>checkpointDir</code> parameter has been removed. Checkpointing is still supported, but the checkpoint directory must be set before training trees and tree ensembles.</li>
| </ul> |
| </li> |
| <li><code>PythonMLlibAPI</code> (the interface between Scala/Java and Python for MLlib) was a public API but is now private, declared <code>private[python]</code>. This was never meant for external use.</li> |
| <li>In linear regression (including Lasso and ridge regression), the squared loss is now divided by 2. |
| So in order to produce the same result as in 1.2, the regularization parameter needs to be divided by 2 and the step size needs to be multiplied by 2.</li> |
| </ul> |
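<p>The last item (squared loss divided by 2) can be verified with a one-dimensional ridge example: under the 1.3-style objective, halving the regularization parameter and doubling the step size reproduces the 1.2-style gradient-descent iterates. This is a Spark-free sketch of the arithmetic, not either version's actual optimizer:</p>

```python
def grad_12(w, data, reg):
    """Gradient of a 1.2-style objective: mean squared residual + reg * w^2."""
    n = len(data)
    return sum(2 * x * (w * x - y) for x, y in data) / n + 2 * reg * w

def grad_13(w, data, reg):
    """Gradient of a 1.3-style objective: the squared loss is divided by 2."""
    n = len(data)
    return sum(x * (w * x - y) for x, y in data) / n + 2 * reg * w

def descend(grad, data, reg, step, iters=50, w=0.0):
    for _ in range(iters):
        w -= step * grad(w, data, reg)
    return w

data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]
reg, step = 0.1, 0.05
w_old = descend(grad_12, data, reg, step)
w_new = descend(grad_13, data, reg / 2, step * 2)  # halve reg, double step
print(abs(w_old - w_new) < 1e-9)  # True: the iterates coincide
```

With the halved regularization parameter, the 1.3 gradient is exactly half the 1.2 gradient, so doubling the step size makes each update identical.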
| |
| <p>In the <code>spark.ml</code> package, the main API changes are from Spark SQL. We list the most important changes here:</p> |
| |
| <ul> |
| <li>The old <a href="http://spark.apache.org/docs/1.2.1/api/scala/index.html#org.apache.spark.sql.SchemaRDD">SchemaRDD</a> has been replaced with <a href="api/scala/index.html#org.apache.spark.sql.DataFrame">DataFrame</a> with a somewhat modified API. All algorithms in <code>spark.ml</code> which used to use SchemaRDD now use DataFrame.</li> |
| <li>In Spark 1.2, we used implicit conversions from <code>RDD</code>s of <code>LabeledPoint</code> into <code>SchemaRDD</code>s by calling <code>import sqlContext._</code> where <code>sqlContext</code> was an instance of <code>SQLContext</code>. These implicits have been moved, so we now call <code>import sqlContext.implicits._</code>.</li> |
| <li>Java APIs for SQL have also changed accordingly. Please see the examples above and the <a href="sql-programming-guide.html">Spark SQL Programming Guide</a> for details.</li> |
| </ul> |
| |
| <p>Other changes were in <code>LogisticRegression</code>:</p> |
| |
| <ul> |
| <li>The <code>scoreCol</code> output column (with default value “score”) was renamed to be <code>probabilityCol</code> (with default value “probability”). The type was originally <code>Double</code> (for the probability of class 1.0), but it is now <code>Vector</code> (for the probability of each class, to support multiclass classification in the future).</li> |
| <li>In Spark 1.2, <code>LogisticRegressionModel</code> did not include an intercept. In Spark 1.3, it includes an intercept; however, it will always be 0.0 since it uses the default settings for <a href="api/scala/index.html#org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS">spark.mllib.LogisticRegressionWithLBFGS</a>. The option to use an intercept will be added in the future.</li> |
| </ul> |
| |
| <h2 id="from-11-to-12">From 1.1 to 1.2</h2> |
| |
| <p>The only API changes in MLlib v1.2 are in |
| <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a>, |
| which continues to be an experimental API in MLlib 1.2:</p> |
| |
| <ol> |
| <li> |
| <p><em>(Breaking change)</em> The Scala API for classification takes a named argument specifying the number |
| of classes. In MLlib v1.1, this argument was called <code>numClasses</code> in Python and |
| <code>numClassesForClassification</code> in Scala. In MLlib v1.2, the names are both set to <code>numClasses</code>. |
| This <code>numClasses</code> parameter is specified either via |
| <a href="api/scala/index.html#org.apache.spark.mllib.tree.configuration.Strategy"><code>Strategy</code></a> |
| or via <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a> |
| static <code>trainClassifier</code> and <code>trainRegressor</code> methods.</p> |
| </li> |
| <li> |
| <p><em>(Breaking change)</em> The API for |
| <a href="api/scala/index.html#org.apache.spark.mllib.tree.model.Node"><code>Node</code></a> has changed. |
| This should generally not affect user code, unless the user manually constructs decision trees |
| (instead of using the <code>trainClassifier</code> or <code>trainRegressor</code> methods). |
| The tree <code>Node</code> now includes more information, including the probability of the predicted label |
| (for classification).</p> |
| </li> |
| <li> |
| <p>Printing methods’ output has changed. The <code>toString</code> (Scala/Java) and <code>__repr__</code> (Python) methods used to print the full model; they now print a summary. For the full model, use <code>toDebugString</code>.</p> |
| </li> |
| </ol> |
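The summary-versus-full-model distinction in item 3 can be sketched with a toy tree in plain Python (the class and method names here are hypothetical stand-ins that merely mirror the MLlib API; this is not the MLlib implementation):

```python
# Conceptual sketch of the new printing behavior: repr() returns a short
# one-line summary, while a separate toDebugString() dumps the full tree.
class ToyNode:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def count(self):
        # Total number of nodes in the subtree rooted here.
        return 1 + sum(c.count() for c in self.children)

    def depth(self):
        # Number of levels in the subtree rooted here.
        return 1 + max((c.depth() for c in self.children), default=0)

    def __repr__(self):
        # Summary only, like the new toString (Scala/Java) / __repr__ (Python).
        return "ToyNode(depth=%d, numNodes=%d)" % (self.depth(), self.count())

    def toDebugString(self, indent=0):
        # Full model dump, like the toDebugString method described above.
        lines = ["  " * indent + self.label]
        for c in self.children:
            lines.append(c.toDebugString(indent + 1))
        return "\n".join(lines)

tree = ToyNode("root", [ToyNode("left leaf"), ToyNode("right leaf")])
print(repr(tree))          # short summary
print(tree.toDebugString())  # full tree, one node per line
```

The design point is the same as in MLlib: large trees make a full dump impractical as the default string form, so the verbose output moves behind an explicit method.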
| |
| <p>Examples in the Spark distribution and examples in the |
| <a href="mllib-decision-tree.html#examples">Decision Trees Guide</a> have been updated accordingly.</p> |
| |
| <h2 id="from-10-to-11">From 1.0 to 1.1</h2> |
| |
| <p>The only API changes in MLlib v1.1 are in |
| <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a>, |
| which continues to be an experimental API in MLlib 1.1:</p> |
| |
| <ol> |
| <li> |
<p><em>(Breaking change)</em> The meaning of tree depth has been shifted by 1 in order to match
| the implementations of trees in |
| <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.tree">scikit-learn</a> |
| and in <a href="http://cran.r-project.org/web/packages/rpart/index.html">rpart</a>. |
| In MLlib v1.0, a depth-1 tree had 1 leaf node, and a depth-2 tree had 1 root node and 2 leaf nodes. |
| In MLlib v1.1, a depth-0 tree has 1 leaf node, and a depth-1 tree has 1 root node and 2 leaf nodes. |
| This depth is specified by the <code>maxDepth</code> parameter in |
| <a href="api/scala/index.html#org.apache.spark.mllib.tree.configuration.Strategy"><code>Strategy</code></a> |
| or via <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a> |
| static <code>trainClassifier</code> and <code>trainRegressor</code> methods.</p> |
| </li> |
| <li> |
| <p><em>(Non-breaking change)</em> We recommend using the newly added <code>trainClassifier</code> and <code>trainRegressor</code> |
| methods to build a <a href="api/scala/index.html#org.apache.spark.mllib.tree.DecisionTree"><code>DecisionTree</code></a>, |
| rather than using the old parameter class <code>Strategy</code>. These new training methods explicitly |
| separate classification and regression, and they replace specialized parameter types with |
| simple <code>String</code> types.</p> |
| </li> |
| </ol> |
| |
| <p>Examples of the new, recommended <code>trainClassifier</code> and <code>trainRegressor</code> are given in the |
| <a href="mllib-decision-tree.html#examples">Decision Trees Guide</a>.</p> |
| |
| <h2 id="from-09-to-10">From 0.9 to 1.0</h2> |
| |
| <p>In MLlib v1.0, we support both dense and sparse input in a unified way, which introduces a few |
breaking changes. If your data is sparse, please store it in a sparse format rather than a dense one to
| take advantage of sparsity in both storage and computation. Details are described below.</p> |
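The storage advantage can be illustrated with a minimal sparse representation in plain Python. MLlib's sparse vectors are built on the same size/indices/values idea, but the code below is only a sketch, not the MLlib data type:

```python
# Minimal sparse-vector sketch: keep only the nonzero entries as parallel
# (indices, values) lists plus the logical size, instead of a dense list.
def to_sparse(dense):
    indices = [i for i, v in enumerate(dense) if v != 0.0]
    values = [dense[i] for i in indices]
    return (len(dense), indices, values)

def to_dense(size, indices, values):
    out = [0.0] * size
    for i, v in zip(indices, values):
        out[i] = v
    return out

dense = [0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.5]
size, idx, vals = to_sparse(dense)
# Only 2 of the 8 entries are stored, and computations (dot products,
# gradient updates, ...) can iterate over just the nonzeros.
print(size, idx, vals)
```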
| |
| |
| |
| </div> |
| |
| <!-- /container --> |
| </div> |
| |
| <script src="js/vendor/jquery-1.8.0.min.js"></script> |
| <script src="js/vendor/bootstrap.min.js"></script> |
| <script src="js/vendor/anchor.min.js"></script> |
| <script src="js/main.js"></script> |
| |
| <!-- MathJax Section --> |
| <script type="text/x-mathjax-config"> |
| MathJax.Hub.Config({ |
| TeX: { equationNumbers: { autoNumber: "AMS" } } |
| }); |
| </script> |
| <script> |
| // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS. |
| // We could use "//cdn.mathjax...", but that won't support "file://". |
| (function(d, script) { |
| script = d.createElement('script'); |
| script.type = 'text/javascript'; |
| script.async = true; |
| script.onload = function(){ |
| MathJax.Hub.Config({ |
| tex2jax: { |
| inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ], |
| displayMath: [ ["$$","$$"], ["\\[", "\\]"] ], |
| processEscapes: true, |
| skipTags: ['script', 'noscript', 'style', 'textarea', 'pre'] |
| } |
| }); |
| }; |
| script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + |
| 'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML'; |
| d.getElementsByTagName('head')[0].appendChild(script); |
| }(document)); |
| </script> |
| </body> |
| </html> |