<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>MLlib: Main Guide - Spark 2.4.4 Documentation</title>
<link rel="stylesheet" href="css/bootstrap.min.css">
<style>
body {
padding-top: 60px;
padding-bottom: 40px;
}
</style>
<meta name="viewport" content="width=device-width">
<link rel="stylesheet" href="css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="css/main.css">
<script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
<link rel="stylesheet" href="css/pygments-default.css">
<!-- Google analytics script -->
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-32518208-2']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body>
<!--[if lt IE 7]>
<p class="chromeframe">You are using an outdated browser. <a href="https://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
<![endif]-->
<!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
<div class="navbar navbar-fixed-top" id="topbar">
<div class="navbar-inner">
<div class="container">
<div class="brand"><a href="index.html">
<img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">2.4.4</span>
</div>
<ul class="nav">
<!--TODO(andyk): Add class="active" attribute to li somehow.-->
<li><a href="index.html">Overview</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="quick-start.html">Quick Start</a></li>
<li><a href="rdd-programming-guide.html">RDDs, Accumulators, Broadcasts Vars</a></li>
<li><a href="sql-programming-guide.html">SQL, DataFrames, and Datasets</a></li>
<li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li>
<li><a href="streaming-programming-guide.html">Spark Streaming (DStreams)</a></li>
<li><a href="ml-guide.html">MLlib (Machine Learning)</a></li>
<li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
<li><a href="sparkr.html">SparkR (R on Spark)</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
<li><a href="api/java/index.html">Java</a></li>
<li><a href="api/python/index.html">Python</a></li>
<li><a href="api/R/index.html">R</a></li>
<li><a href="api/sql/index.html">SQL, Built-in Functions</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="cluster-overview.html">Overview</a></li>
<li><a href="submitting-applications.html">Submitting Applications</a></li>
<li class="divider"></li>
<li><a href="spark-standalone.html">Spark Standalone</a></li>
<li><a href="running-on-mesos.html">Mesos</a></li>
<li><a href="running-on-yarn.html">YARN</a></li>
<li><a href="running-on-kubernetes.html">Kubernetes</a></li>
</ul>
</li>
<li class="dropdown">
<a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="configuration.html">Configuration</a></li>
<li><a href="monitoring.html">Monitoring</a></li>
<li><a href="tuning.html">Tuning Guide</a></li>
<li><a href="job-scheduling.html">Job Scheduling</a></li>
<li><a href="security.html">Security</a></li>
<li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
<li class="divider"></li>
<li><a href="building-spark.html">Building Spark</a></li>
<li><a href="https://spark.apache.org/contributing.html">Contributing to Spark</a></li>
<li><a href="https://spark.apache.org/third-party-projects.html">Third Party Projects</a></li>
</ul>
</li>
</ul>
<!--<p class="navbar-text pull-right"><span class="version-text">v2.4.4</span></p>-->
</div>
</div>
</div>
<div class="container-wrapper">
<div class="left-menu-wrapper">
<div class="left-menu">
<h3><a href="ml-guide.html">MLlib: Main Guide</a></h3>
<ul>
<li>
<a href="ml-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="ml-datasource">
Data sources
</a>
</li>
<li>
<a href="ml-pipeline.html">
Pipelines
</a>
</li>
<li>
<a href="ml-features.html">
Extracting, transforming and selecting features
</a>
</li>
<li>
<a href="ml-classification-regression.html">
Classification and Regression
</a>
</li>
<li>
<a href="ml-clustering.html">
Clustering
</a>
</li>
<li>
<a href="ml-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="ml-frequent-pattern-mining.html">
Frequent Pattern Mining
</a>
</li>
<li>
<a href="ml-tuning.html">
Model selection and tuning
</a>
</li>
<li>
<a href="ml-advanced.html">
Advanced topics
</a>
</li>
</ul>
<h3><a href="mllib-guide.html">MLlib: RDD-based API Guide</a></h3>
<ul>
<li>
<a href="mllib-data-types.html">
Data types
</a>
</li>
<li>
<a href="mllib-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="mllib-classification-regression.html">
Classification and regression
</a>
</li>
<li>
<a href="mllib-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="mllib-clustering.html">
Clustering
</a>
</li>
<li>
<a href="mllib-dimensionality-reduction.html">
Dimensionality reduction
</a>
</li>
<li>
<a href="mllib-feature-extraction.html">
Feature extraction and transformation
</a>
</li>
<li>
<a href="mllib-frequent-pattern-mining.html">
Frequent pattern mining
</a>
</li>
<li>
<a href="mllib-evaluation-metrics.html">
Evaluation metrics
</a>
</li>
<li>
<a href="mllib-pmml-model-export.html">
PMML model export
</a>
</li>
<li>
<a href="mllib-optimization.html">
Optimization (developer)
</a>
</li>
</ul>
</div>
</div>
<input id="nav-trigger" class="nav-trigger" checked type="checkbox">
<label for="nav-trigger"></label>
<div class="content-with-sidebar" id="content">
<h1 class="title">Machine Learning Library (MLlib) Guide</h1>
<p>MLlib is Spark&#8217;s machine learning (ML) library.
Its goal is to make practical machine learning scalable and easy.
At a high level, it provides tools such as:</p>
<ul>
<li>ML Algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering</li>
<li>Featurization: feature extraction, transformation, dimensionality reduction, and selection</li>
<li>Pipelines: tools for constructing, evaluating, and tuning ML Pipelines</li>
<li>Persistence: saving and loading algorithms, models, and Pipelines</li>
<li>Utilities: linear algebra, statistics, data handling, etc.</li>
</ul>
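<p>As a quick, hedged taste of the DataFrame-based API described below, the sketch fits a tiny text-classification <code>Pipeline</code> and saves it. The <code>SparkSession</code> named <code>spark</code>, the toy training data, and the save path are assumptions made for illustration only.</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Assumed: an existing SparkSession named `spark`. The toy data below stands in
// for a real training DataFrame with "text" and "label" columns.
val training = spark.createDataFrame(Seq(
  (0L, "a b c d e spark", 1.0),
  (1L, "b d", 0.0),
  (2L, "spark f g h", 1.0)
)).toDF("id", "text", "label")

// Featurization and the learning algorithm chained into a single Pipeline.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression().setMaxIter(10)
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))

// Fit the whole Pipeline and persist the fitted model.
val model = pipeline.fit(training)
model.write.overwrite().save("/tmp/spark-logistic-regression-model")
</code></pre></div>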
<h1 id="announcement-dataframe-based-api-is-primary-api">Announcement: DataFrame-based API is primary API</h1>
<p><strong>The MLlib RDD-based API is now in maintenance mode.</strong></p>
<p>As of Spark 2.0, the <a href="rdd-programming-guide.html#resilient-distributed-datasets-rdds">RDD</a>-based APIs in the <code>spark.mllib</code> package have entered maintenance mode.
The primary Machine Learning API for Spark is now the <a href="sql-programming-guide.html">DataFrame</a>-based API in the <code>spark.ml</code> package.</p>
<p><em>What are the implications?</em></p>
<ul>
<li>MLlib will still support the RDD-based API in <code>spark.mllib</code> with bug fixes.</li>
<li>MLlib will not add new features to the RDD-based API.</li>
<li>In the Spark 2.x releases, MLlib will add features to the DataFrame-based API to reach feature parity with the RDD-based API.</li>
<li>After reaching feature parity (roughly estimated for Spark 2.3), the RDD-based API will be deprecated.</li>
<li>The RDD-based API is expected to be removed in Spark 3.0.</li>
</ul>
<p><em>Why is MLlib switching to the DataFrame-based API?</em></p>
<ul>
<li>DataFrames provide a more user-friendly API than RDDs. The many benefits of DataFrames include Spark Datasources, SQL/DataFrame queries, Tungsten and Catalyst optimizations, and uniform APIs across languages.</li>
<li>The DataFrame-based API for MLlib provides a uniform API across ML algorithms and across multiple languages.</li>
<li>DataFrames facilitate practical ML Pipelines, particularly feature transformations. See the <a href="ml-pipeline.html">Pipelines guide</a> for details.</li>
</ul>
<p><em>What is &#8220;Spark ML&#8221;?</em></p>
<ul>
<li>&#8220;Spark ML&#8221; is not an official name, but it is occasionally used to refer to the MLlib DataFrame-based API.
This is mainly due to the <code>org.apache.spark.ml</code> Scala package name used by the DataFrame-based API,
and the &#8220;Spark ML Pipelines&#8221; term we used initially to emphasize the pipeline concept.</li>
</ul>
<p><em>Is MLlib deprecated?</em></p>
<ul>
<li>No. MLlib includes both the RDD-based API and the DataFrame-based API.
The RDD-based API is now in maintenance mode,
but neither API is deprecated, nor is MLlib as a whole.</li>
</ul>
<h1 id="dependencies">Dependencies</h1>
<p>MLlib uses the linear algebra package <a href="http://www.scalanlp.org/">Breeze</a>, which depends on
<a href="https://github.com/fommil/netlib-java">netlib-java</a> for optimised numerical processing.
If native libraries<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> are not available at runtime, you will see a warning message and a pure JVM
implementation will be used instead.</p>
<p>Due to licensing issues with runtime proprietary binaries, we do not include <code>netlib-java</code>&#8217;s native
proxies by default.
To configure <code>netlib-java</code> / Breeze to use system optimised binaries, include
<code>com.github.fommil.netlib:all:1.1.2</code> (or build Spark with <code>-Pnetlib-lgpl</code>) as a dependency of your
project and read the <a href="https://github.com/fommil/netlib-java">netlib-java</a> documentation for your
platform&#8217;s additional installation instructions.</p>
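<p>For example, with sbt the extra dependency can be declared as in the sketch below. The Maven coordinates are the ones given above; the Spark version and <code>provided</code> scope are assumptions about a typical application build, and the <code>pomOnly()</code> hint reflects that the <code>all</code> artifact is published as a POM.</p>
<div class="highlight"><pre><code class="language-scala">// build.sbt (sketch; versions and scopes are assumptions, see lead-in above)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-mllib" % "2.4.4" % "provided",
  // Pulls in netlib-java's native proxies; the underlying system libraries
  // still need platform-specific installation per the netlib-java docs.
  "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
)
</code></pre></div>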
<p>The most popular native BLAS libraries, such as <a href="https://software.intel.com/en-us/mkl">Intel MKL</a> and <a href="http://www.openblas.net">OpenBLAS</a>, can use multiple threads in a single operation, which can conflict with Spark&#8217;s execution model.</p>
<p>Configuring these BLAS implementations to use a single thread per operation may actually improve performance (see <a href="https://issues.apache.org/jira/browse/SPARK-21305">SPARK-21305</a>). It is usually optimal to match the BLAS thread count to the number of cores each Spark task is configured to use, which is 1 by default and typically left at 1.</p>
<p>Please refer to resources like the following to understand how to configure the number of threads these BLAS implementations use: <a href="https://software.intel.com/en-us/articles/recommended-settings-for-calling-intel-mkl-routines-from-multi-threaded-applications">Intel MKL</a> and <a href="https://github.com/xianyi/OpenBLAS/wiki/faq#multi-threaded">OpenBLAS</a>.</p>
<p>To use MLlib in Python, you will need <a href="http://www.numpy.org">NumPy</a> version 1.4 or newer.</p>
<h1 id="highlights-in-23">Highlights in 2.3</h1>
<p>The list below highlights some of the new features and enhancements added to MLlib in the <code>2.3</code>
release of Spark:</p>
<ul>
<li>Built-in support for reading images into a <code>DataFrame</code> was added
(<a href="https://issues.apache.org/jira/browse/SPARK-21866">SPARK-21866</a>).</li>
<li><a href="ml-features.html#onehotencoderestimator"><code>OneHotEncoderEstimator</code></a> was added, and should be
used instead of the existing <code>OneHotEncoder</code> transformer. The new estimator supports
transforming multiple columns.</li>
<li>Multiple column support was also added to <code>QuantileDiscretizer</code> and <code>Bucketizer</code>
(<a href="https://issues.apache.org/jira/browse/SPARK-22397">SPARK-22397</a> and
<a href="https://issues.apache.org/jira/browse/SPARK-20542">SPARK-20542</a>)</li>
<li>A new <a href="ml-features.html#featurehasher"><code>FeatureHasher</code></a> transformer was added
(<a href="https://issues.apache.org/jira/browse/SPARK-13969">SPARK-13969</a>).</li>
<li>Added support for evaluating multiple models in parallel when performing cross-validation using
<a href="ml-tuning.html"><code>TrainValidationSplit</code> or <code>CrossValidator</code></a>
(<a href="https://issues.apache.org/jira/browse/SPARK-19357">SPARK-19357</a>).</li>
<li>Improved support for custom pipeline components in Python (see
<a href="https://issues.apache.org/jira/browse/SPARK-21633">SPARK-21633</a> and
<a href="https://issues.apache.org/jira/browse/SPARK-21542">SPARK-21542</a>).</li>
<li><code>DataFrame</code> functions for descriptive summary statistics over vector columns
(<a href="https://issues.apache.org/jira/browse/SPARK-19634">SPARK-19634</a>).</li>
<li>Robust linear regression with Huber loss
(<a href="https://issues.apache.org/jira/browse/SPARK-3181">SPARK-3181</a>).</li>
</ul>
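<p>As a brief illustration of the multi-column support mentioned in the list above, the sketch below one-hot encodes two already-indexed categorical columns in a single pass. The <code>SparkSession</code> named <code>spark</code> and the toy data are assumptions for the example.</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.feature.OneHotEncoderEstimator

// Toy input: two columns that have already been string-indexed.
val df = spark.createDataFrame(Seq(
  (0.0, 1.0), (1.0, 0.0), (2.0, 1.0), (0.0, 2.0), (0.0, 1.0), (2.0, 0.0)
)).toDF("categoryIndex1", "categoryIndex2")

// One estimator handles both columns in a single pass (SPARK-22397).
val encoder = new OneHotEncoderEstimator()
  .setInputCols(Array("categoryIndex1", "categoryIndex2"))
  .setOutputCols(Array("categoryVec1", "categoryVec2"))

val encoded = encoder.fit(df).transform(df)
encoded.show()
</code></pre></div>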
<h1 id="migration-guide">Migration guide</h1>
<p>MLlib is under active development.
The APIs marked <code>Experimental</code>/<code>DeveloperApi</code> may change in future releases,
and the migration guide below will explain all changes between releases.</p>
<h2 id="from-22-to-23">From 2.2 to 2.3</h2>
<h3 id="breaking-changes">Breaking changes</h3>
<ul>
<li>The class and trait hierarchy for logistic regression model summaries was changed to be cleaner
and better accommodate the addition of the multi-class summary. This is a breaking change for user
code that casts a <code>LogisticRegressionTrainingSummary</code> to a
<code>BinaryLogisticRegressionTrainingSummary</code>. Users should instead use the <code>model.binarySummary</code>
method. See <a href="https://issues.apache.org/jira/browse/SPARK-17139">SPARK-17139</a> for more detail
(<em>note</em> this is an <code>Experimental</code> API). This <em>does not</em> affect the Python <code>summary</code> method, which
will still work correctly for both multinomial and binary cases.</li>
</ul>
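<p>A minimal before/after sketch is shown below; it assumes a binary-labelled training <code>DataFrame</code> named <code>training</code> with the default <code>features</code> and <code>label</code> columns.</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.classification.LogisticRegression

val model = new LogisticRegression().fit(training)

// Pre-2.3 user code often cast the generic summary:
//   model.summary.asInstanceOf[BinaryLogisticRegressionTrainingSummary]
// As of 2.3, use the dedicated accessor instead:
val binarySummary = model.binarySummary
println(s"Training AUC: ${binarySummary.areaUnderROC}")
</code></pre></div>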
<h3 id="deprecations-and-changes-of-behavior">Deprecations and changes of behavior</h3>
<p><strong>Deprecations</strong></p>
<ul>
<li><code>OneHotEncoder</code> has been deprecated and will be removed in <code>3.0</code>. It has been replaced by the
new <a href="ml-features.html#onehotencoderestimator"><code>OneHotEncoderEstimator</code></a>
(see <a href="https://issues.apache.org/jira/browse/SPARK-13030">SPARK-13030</a>). <strong>Note</strong> that
<code>OneHotEncoderEstimator</code> will be renamed to <code>OneHotEncoder</code> in <code>3.0</code> (but
<code>OneHotEncoderEstimator</code> will be kept as an alias).</li>
</ul>
<p><strong>Changes of behavior</strong></p>
<ul>
<li><a href="https://issues.apache.org/jira/browse/SPARK-21027">SPARK-21027</a>:
The default parallelism used in <code>OneVsRest</code> is now set to 1 (i.e. serial). In <code>2.2</code> and
earlier versions, the level of parallelism was set to the default threadpool size in Scala.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-22156">SPARK-22156</a>:
The learning rate update for <code>Word2Vec</code> was incorrect when <code>numIterations</code> was set greater than
<code>1</code>. This will cause training results to be different between <code>2.3</code> and earlier versions.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-21681">SPARK-21681</a>:
Fixed an edge case bug in multinomial logistic regression that resulted in incorrect coefficients
when some features had zero variance.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-16957">SPARK-16957</a>:
Tree algorithms now use mid-points for split values. This may change results from model training.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-14657">SPARK-14657</a>:
Fixed an issue where the features generated by <code>RFormula</code> without an intercept were inconsistent
with the output in R. This may change results from model training in this scenario.</li>
</ul>
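<p>As a sketch of opting back into parallel training with <code>OneVsRest</code> (the base classifier, the parallelism level, and the training <code>DataFrame</code> below are assumptions for illustration):</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.classification.{LogisticRegression, OneVsRest}

// Default parallelism is now 1 (serial); request parallel training of the
// per-class models explicitly if desired.
val ovr = new OneVsRest()
  .setClassifier(new LogisticRegression().setMaxIter(10))
  .setParallelism(4)

val ovrModel = ovr.fit(training)
</code></pre></div>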
<h2 id="previous-spark-versions">Previous Spark versions</h2>
<p>Earlier migration guides are archived <a href="ml-migration-guides.html">on this page</a>.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p>To learn more about the benefits and background of system optimised natives, you may wish to
watch Sam Halliday&#8217;s ScalaX talk on <a href="http://fommil.github.io/scalax14/#/">High Performance Linear Algebra in Scala</a>.&#160;<a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
</div>
<!-- /container -->
</div>
<script src="js/vendor/jquery-1.12.4.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="js/vendor/anchor.min.js"></script>
<script src="js/main.js"></script>
<!-- MathJax Section -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
</script>
<script>
// Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
// We could use "//cdn.mathjax...", but that won't support "file://".
(function(d, script) {
script = d.createElement('script');
script.type = 'text/javascript';
script.async = true;
script.onload = function(){
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
processEscapes: true,
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
}
});
};
script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
'cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js' +
'?config=TeX-AMS-MML_HTMLorMML';
d.getElementsByTagName('head')[0].appendChild(script);
}(document));
</script>
</body>
</html>