<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>MLlib: Main Guide - Spark 2.2.2 Documentation</title>
<link rel="stylesheet" href="css/bootstrap.min.css">
<style>
body {
padding-top: 60px;
padding-bottom: 40px;
}
</style>
<meta name="viewport" content="width=device-width">
<link rel="stylesheet" href="css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="css/main.css">
<script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
<link rel="stylesheet" href="css/pygments-default.css">
<!-- Google analytics script -->
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-32518208-2']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body>
<!--[if lt IE 7]>
<p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
<![endif]-->
<!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
<div class="navbar navbar-fixed-top" id="topbar">
<div class="navbar-inner">
<div class="container">
<div class="brand"><a href="index.html">
<img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">2.2.2</span>
</div>
<ul class="nav">
<!--TODO(andyk): Add class="active" attribute to li somehow.-->
<li><a href="index.html">Overview</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="quick-start.html">Quick Start</a></li>
<li><a href="rdd-programming-guide.html">RDDs, Accumulators, Broadcasts Vars</a></li>
<li><a href="sql-programming-guide.html">SQL, DataFrames, and Datasets</a></li>
<li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li>
<li><a href="streaming-programming-guide.html">Spark Streaming (DStreams)</a></li>
<li><a href="ml-guide.html">MLlib (Machine Learning)</a></li>
<li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
<li><a href="sparkr.html">SparkR (R on Spark)</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
<li><a href="api/java/index.html">Java</a></li>
<li><a href="api/python/index.html">Python</a></li>
<li><a href="api/R/index.html">R</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="cluster-overview.html">Overview</a></li>
<li><a href="submitting-applications.html">Submitting Applications</a></li>
<li class="divider"></li>
<li><a href="spark-standalone.html">Spark Standalone</a></li>
<li><a href="running-on-mesos.html">Mesos</a></li>
<li><a href="running-on-yarn.html">YARN</a></li>
</ul>
</li>
<li class="dropdown">
<a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="configuration.html">Configuration</a></li>
<li><a href="monitoring.html">Monitoring</a></li>
<li><a href="tuning.html">Tuning Guide</a></li>
<li><a href="job-scheduling.html">Job Scheduling</a></li>
<li><a href="security.html">Security</a></li>
<li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
<li class="divider"></li>
<li><a href="building-spark.html">Building Spark</a></li>
<li><a href="http://spark.apache.org/contributing.html">Contributing to Spark</a></li>
<li><a href="http://spark.apache.org/third-party-projects.html">Third Party Projects</a></li>
</ul>
</li>
</ul>
<!--<p class="navbar-text pull-right"><span class="version-text">v2.2.2</span></p>-->
</div>
</div>
</div>
<div class="container-wrapper">
<div class="left-menu-wrapper">
<div class="left-menu">
<h3><a href="ml-guide.html">MLlib: Main Guide</a></h3>
<ul>
<li>
<a href="ml-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="ml-pipeline.html">
Pipelines
</a>
</li>
<li>
<a href="ml-features.html">
Extracting, transforming and selecting features
</a>
</li>
<li>
<a href="ml-classification-regression.html">
Classification and Regression
</a>
</li>
<li>
<a href="ml-clustering.html">
Clustering
</a>
</li>
<li>
<a href="ml-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="ml-frequent-pattern-mining.html">
Frequent Pattern Mining
</a>
</li>
<li>
<a href="ml-tuning.html">
Model selection and tuning
</a>
</li>
<li>
<a href="ml-advanced.html">
Advanced topics
</a>
</li>
</ul>
<h3><a href="mllib-guide.html">MLlib: RDD-based API Guide</a></h3>
<ul>
<li>
<a href="mllib-data-types.html">
Data types
</a>
</li>
<li>
<a href="mllib-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="mllib-classification-regression.html">
Classification and regression
</a>
</li>
<li>
<a href="mllib-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="mllib-clustering.html">
Clustering
</a>
</li>
<li>
<a href="mllib-dimensionality-reduction.html">
Dimensionality reduction
</a>
</li>
<li>
<a href="mllib-feature-extraction.html">
Feature extraction and transformation
</a>
</li>
<li>
<a href="mllib-frequent-pattern-mining.html">
Frequent pattern mining
</a>
</li>
<li>
<a href="mllib-evaluation-metrics.html">
Evaluation metrics
</a>
</li>
<li>
<a href="mllib-pmml-model-export.html">
PMML model export
</a>
</li>
<li>
<a href="mllib-optimization.html">
Optimization (developer)
</a>
</li>
</ul>
</div>
</div>
<input id="nav-trigger" class="nav-trigger" checked type="checkbox">
<label for="nav-trigger"></label>
<div class="content-with-sidebar" id="content">
<h1 class="title">Machine Learning Library (MLlib) Guide</h1>
<p>MLlib is Spark&#8217;s machine learning (ML) library.
Its goal is to make practical machine learning scalable and easy.
At a high level, it provides tools such as:</p>
<ul>
<li>ML Algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering</li>
<li>Featurization: feature extraction, transformation, dimensionality reduction, and selection</li>
<li>Pipelines: tools for constructing, evaluating, and tuning ML Pipelines (see the example below)</li>
<li>Persistence: saving and loading algorithms, models, and Pipelines</li>
<li>Utilities: linear algebra, statistics, data handling, etc.</li>
</ul>
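<p>As an illustrative sketch only (the inline dataset, column names, and save path below are made up for the example), here is how the featurization, algorithm, Pipeline, and persistence tools above fit together in Scala:</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("MLlibSketch").getOrCreate()
import spark.implicits._

// A tiny, made-up training set: (id, text, label).
val training = Seq(
  (0L, "a b c d e spark", 1.0),
  (1L, "b d", 0.0),
  (2L, "spark f g h", 1.0),
  (3L, "hadoop mapreduce", 0.0)
).toDF("id", "text", "label")

// Featurization: split text into words, then hash the words into feature vectors.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol(tokenizer.getOutputCol).setOutputCol("features")

// ML algorithm: logistic regression over the hashed features.
val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.001)

// Pipeline: chain the stages and fit them as a single estimator.
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)

// Persistence: the fitted PipelineModel can be saved and loaded back later.
model.write.overwrite().save("/tmp/mllib-sketch-model")
</code></pre></div>
<p>Calling <code>fit</code> on the Pipeline runs each stage in order and returns a single <code>PipelineModel</code>; see the <a href="ml-pipeline.html">Pipelines guide</a> for the full treatment.</p>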
<h1 id="announcement-dataframe-based-api-is-primary-api">Announcement: DataFrame-based API is primary API</h1>
<p><strong>The MLlib RDD-based API is now in maintenance mode.</strong></p>
<p>As of Spark 2.0, the <a href="rdd-programming-guide.html#resilient-distributed-datasets-rdds">RDD</a>-based APIs in the <code>spark.mllib</code> package have entered maintenance mode.
The primary Machine Learning API for Spark is now the <a href="sql-programming-guide.html">DataFrame</a>-based API in the <code>spark.ml</code> package.</p>
<p><em>What are the implications?</em></p>
<ul>
<li>MLlib will still support the RDD-based API in <code>spark.mllib</code> with bug fixes.</li>
<li>MLlib will not add new features to the RDD-based API.</li>
<li>In the Spark 2.x releases, MLlib will add features to the DataFrame-based API to reach feature parity with the RDD-based API.</li>
<li>After reaching feature parity (roughly estimated for Spark 2.3), the RDD-based API will be deprecated.</li>
<li>The RDD-based API is expected to be removed in Spark 3.0.</li>
</ul>
<p><em>Why is MLlib switching to the DataFrame-based API?</em></p>
<ul>
<li>DataFrames provide a more user-friendly API than RDDs. The many benefits of DataFrames include Spark Datasources, SQL/DataFrame queries, Tungsten and Catalyst optimizations, and uniform APIs across languages.</li>
<li>The DataFrame-based API for MLlib provides a uniform API across ML algorithms and across multiple languages.</li>
<li>DataFrames facilitate practical ML Pipelines, particularly feature transformations. See the <a href="ml-pipeline.html">Pipelines guide</a> for details.</li>
</ul>
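<p>For instance, here is a minimal sketch of a DataFrame-based feature transformation, assuming a hypothetical JSON Datasource with numeric columns <code>x</code>, <code>y</code>, and <code>z</code>:</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.feature.{StandardScaler, VectorAssembler}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// Any Spark Datasource (Parquet, JSON, JDBC, ...) yields a DataFrame that
// spark.ml transformers consume directly; the path and columns are hypothetical.
val points = spark.read.json("data/points.json")

// Assemble the raw numeric columns into a single vector column ...
val assembler = new VectorAssembler()
  .setInputCols(Array("x", "y", "z"))
  .setOutputCol("rawFeatures")

// ... then standardize it. Estimators are fit on a DataFrame and return a
// Model whose transform() appends new columns; this is the same uniform
// pattern every spark.ml algorithm follows.
val scaler = new StandardScaler()
  .setInputCol("rawFeatures")
  .setOutputCol("features")

val assembled = assembler.transform(points)
val scaled = scaler.fit(assembled).transform(assembled)
scaled.select("features").show(5)
</code></pre></div>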
<p><em>What is &#8220;Spark ML&#8221;?</em></p>
<ul>
<li>&#8220;Spark ML&#8221; is not an official name, but it is occasionally used to refer to the MLlib DataFrame-based API.
This is mostly due to the <code>org.apache.spark.ml</code> Scala package name used by the DataFrame-based API,
and to the &#8220;Spark ML Pipelines&#8221; term we used initially to emphasize the pipeline concept.</li>
</ul>
<p><em>Is MLlib deprecated?</em></p>
<ul>
<li>No. MLlib includes both the RDD-based API and the DataFrame-based API.
The RDD-based API is now in maintenance mode,
but neither API is deprecated, nor is MLlib as a whole.</li>
</ul>
<h1 id="dependencies">Dependencies</h1>
<p>MLlib uses the linear algebra package <a href="http://www.scalanlp.org/">Breeze</a>, which depends on
<a href="https://github.com/fommil/netlib-java">netlib-java</a> for optimised numerical processing.
If native libraries<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> are not available at runtime, you will see a warning message and a pure JVM
implementation will be used instead.</p>
<p>Due to licensing issues with runtime proprietary binaries, we do not include <code>netlib-java</code>&#8217;s native
proxies by default.
To configure <code>netlib-java</code> / Breeze to use system optimised binaries, include
<code>com.github.fommil.netlib:all:1.1.2</code> (or build Spark with <code>-Pnetlib-lgpl</code>) as a dependency of your
project and read the <a href="https://github.com/fommil/netlib-java">netlib-java</a> documentation for additional
installation instructions for your platform.</p>
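<p>As a sketch of one possible way to declare that artifact, an sbt project might add the following; exact wiring differs for Maven and other build tools, so follow the netlib-java documentation for your setup:</p>
<div class="highlight"><pre><code class="language-scala">// build.sbt (sketch): pull in netlib-java's native proxy bundle so that
// system-optimised BLAS/LAPACK implementations can be picked up at runtime.
// "all" is an aggregator POM, hence pomOnly(); building Spark with
// -Pnetlib-lgpl is the alternative mentioned above.
libraryDependencies += ("com.github.fommil.netlib" % "all" % "1.1.2").pomOnly()
</code></pre></div>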
<p>To use MLlib in Python, you will need <a href="http://www.numpy.org">NumPy</a> version 1.4 or newer.</p>
<h1 id="highlights-in-22">Highlights in 2.2</h1>
<p>The list below highlights some of the new features and enhancements added to MLlib in the <code>2.2</code>
release of Spark:</p>
<ul>
<li><a href="ml-collaborative-filtering.html"><code>ALS</code></a> methods for <em>top-k</em> recommendations for all
users or items, matching the functionality in <code>mllib</code>
(<a href="https://issues.apache.org/jira/browse/SPARK-19535">SPARK-19535</a>).
Performance was also improved for both <code>ml</code> and <code>mllib</code>
(<a href="https://issues.apache.org/jira/browse/SPARK-11968">SPARK-11968</a> and
<a href="https://issues.apache.org/jira/browse/SPARK-20587">SPARK-20587</a>)</li>
<li><a href="ml-statistics.html#correlation"><code>Correlation</code></a> and
<a href="ml-statistics.html#hypothesis-testing"><code>ChiSquareTest</code></a> stats functions for <code>DataFrames</code>
(<a href="https://issues.apache.org/jira/browse/SPARK-19636">SPARK-19636</a> and
<a href="https://issues.apache.org/jira/browse/SPARK-19635">SPARK-19635</a>)</li>
<li><a href="ml-frequent-pattern-mining.html#fp-growth"><code>FPGrowth</code></a> algorithm for frequent pattern mining
(<a href="https://issues.apache.org/jira/browse/SPARK-14503">SPARK-14503</a>)</li>
<li><code>GLM</code> now supports the full <code>Tweedie</code> family
(<a href="https://issues.apache.org/jira/browse/SPARK-18929">SPARK-18929</a>)</li>
<li><a href="ml-features.html#imputer"><code>Imputer</code></a> feature transformer to impute missing values in a dataset
(<a href="https://issues.apache.org/jira/browse/SPARK-13568">SPARK-13568</a>)</li>
<li><a href="ml-classification-regression.html#linear-support-vector-machine"><code>LinearSVC</code></a>
for linear Support Vector Machine classification
(<a href="https://issues.apache.org/jira/browse/SPARK-14709">SPARK-14709</a>)</li>
<li>Logistic regression now supports constraints on the coefficients during training
(<a href="https://issues.apache.org/jira/browse/SPARK-20047">SPARK-20047</a>)</li>
</ul>
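<p>For example, here is a brief sketch of the new DataFrame-based <code>Correlation</code> helper listed above, using a few made-up feature vectors:</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.linalg.{Matrix, Vectors}
import org.apache.spark.ml.stat.Correlation
import org.apache.spark.sql.{Row, SparkSession}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// A few made-up feature vectors wrapped in a single-column DataFrame.
val data = Seq(
  Vectors.dense(1.0, 0.5, -1.0),
  Vectors.dense(2.0, 1.0, -2.0),
  Vectors.dense(4.0, 2.5, -4.0),
  Vectors.dense(6.0, 3.5, -6.0)
)
val df = data.map(Tuple1.apply).toDF("features")

// Pearson correlation matrix of the vector column, computed on a DataFrame.
val Row(pearson: Matrix) = Correlation.corr(df, "features").head
println(s"Pearson correlation matrix:\n$pearson")

// Spearman correlation is selected by name.
val Row(spearman: Matrix) = Correlation.corr(df, "features", "spearman").head
</code></pre></div>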
<h1 id="migration-guide">Migration guide</h1>
<p>MLlib is under active development.
APIs marked <code>Experimental</code>/<code>DeveloperApi</code> may change in future releases,
and the migration guide below explains any such changes between releases.</p>
<h2 id="from-21-to-22">From 2.1 to 2.2</h2>
<h3 id="breaking-changes">Breaking changes</h3>
<p>There are no breaking changes.</p>
<h3 id="deprecations-and-changes-of-behavior">Deprecations and changes of behavior</h3>
<p><strong>Deprecations</strong></p>
<p>There are no deprecations.</p>
<p><strong>Changes of behavior</strong></p>
<ul>
<li><a href="https://issues.apache.org/jira/browse/SPARK-19787">SPARK-19787</a>:
The default value of <code>regParam</code> changed from <code>1.0</code> to <code>0.1</code> for the <code>ALS.train</code> method (marked <code>DeveloperApi</code>).
<strong>Note</strong> this does <em>not affect</em> the <code>ALS</code> Estimator or Model, nor MLlib&#8217;s <code>ALS</code> class.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-14772">SPARK-14772</a>:
Fixed inconsistency between Python and Scala APIs for <code>Param.copy</code> method.</li>
<li><a href="https://issues.apache.org/jira/browse/SPARK-11569">SPARK-11569</a>:
<code>StringIndexer</code> now handles <code>NULL</code> values in the same way as unseen values. Previously an exception
would always be thrown regardless of the setting of the <code>handleInvalid</code> parameter.</li>
</ul>
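<p>To make the <code>StringIndexer</code> change above concrete, here is a small sketch (the toy column and the choice of "skip" are illustrative):</p>
<div class="highlight"><pre><code class="language-scala">import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Toy column containing a NULL category, the case covered by SPARK-11569.
val df = Seq((0, "a"), (1, "b"), (2, null), (3, "a")).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  // With "skip", rows whose category is NULL or unseen are dropped rather
  // than causing an exception; the default "error" still throws.
  .setHandleInvalid("skip")

indexer.fit(df).transform(df).show()
</code></pre></div>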
<h2 id="previous-spark-versions">Previous Spark versions</h2>
<p>Earlier migration guides are archived <a href="ml-migration-guides.html">on this page</a>.</p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:1">
<p>To learn more about the benefits and background of system optimised natives, you may wish to
watch Sam Halliday&#8217;s ScalaX talk on <a href="http://fommil.github.io/scalax14/#/">High Performance Linear Algebra in Scala</a>.&#160;<a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
</div>
<!-- /container -->
</div>
<script src="js/vendor/jquery-1.8.0.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="js/vendor/anchor.min.js"></script>
<script src="js/main.js"></script>
<!-- MathJax Section -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
</script>
<script>
// Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
// We could use "//cdn.mathjax...", but that won't support "file://".
(function(d, script) {
script = d.createElement('script');
script.type = 'text/javascript';
script.async = true;
script.onload = function(){
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
processEscapes: true,
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
}
});
};
script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
d.getElementsByTagName('head')[0].appendChild(script);
}(document));
</script>
</body>
</html>