<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Optimization - RDD-based API - Spark 3.0.1 Documentation</title>
<link rel="stylesheet" href="css/bootstrap.min.css">
<style>
body {
padding-top: 60px;
padding-bottom: 40px;
}
</style>
<meta name="viewport" content="width=device-width">
<link rel="stylesheet" href="css/bootstrap-responsive.min.css">
<link rel="stylesheet" href="css/main.css">
<script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
<link rel="stylesheet" href="css/pygments-default.css">
<!-- Google analytics script -->
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-32518208-2']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body>
<!--[if lt IE 7]>
<p class="chromeframe">You are using an outdated browser. <a href="https://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
<![endif]-->
<!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
<div class="navbar navbar-fixed-top" id="topbar">
<div class="navbar-inner">
<div class="container">
<div class="brand"><a href="index.html">
<img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">3.0.1</span>
</div>
<ul class="nav">
<!--TODO(andyk): Add class="active" attribute to li somehow.-->
<li><a href="index.html">Overview</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="quick-start.html">Quick Start</a></li>
<li><a href="rdd-programming-guide.html">RDDs, Accumulators, Broadcasts Vars</a></li>
<li><a href="sql-programming-guide.html">SQL, DataFrames, and Datasets</a></li>
<li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li>
<li><a href="streaming-programming-guide.html">Spark Streaming (DStreams)</a></li>
<li><a href="ml-guide.html">MLlib (Machine Learning)</a></li>
<li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
<li><a href="sparkr.html">SparkR (R on Spark)</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="api/scala/org/apache/spark/index.html">Scala</a></li>
<li><a href="api/java/index.html">Java</a></li>
<li><a href="api/python/index.html">Python</a></li>
<li><a href="api/R/index.html">R</a></li>
<li><a href="api/sql/index.html">SQL, Built-in Functions</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="cluster-overview.html">Overview</a></li>
<li><a href="submitting-applications.html">Submitting Applications</a></li>
<li class="divider"></li>
<li><a href="spark-standalone.html">Spark Standalone</a></li>
<li><a href="running-on-mesos.html">Mesos</a></li>
<li><a href="running-on-yarn.html">YARN</a></li>
<li><a href="running-on-kubernetes.html">Kubernetes</a></li>
</ul>
</li>
<li class="dropdown">
<a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="configuration.html">Configuration</a></li>
<li><a href="monitoring.html">Monitoring</a></li>
<li><a href="tuning.html">Tuning Guide</a></li>
<li><a href="job-scheduling.html">Job Scheduling</a></li>
<li><a href="security.html">Security</a></li>
<li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
<li><a href="migration-guide.html">Migration Guide</a></li>
<li class="divider"></li>
<li><a href="building-spark.html">Building Spark</a></li>
<li><a href="https://spark.apache.org/contributing.html">Contributing to Spark</a></li>
<li><a href="https://spark.apache.org/third-party-projects.html">Third Party Projects</a></li>
</ul>
</li>
</ul>
<!--<p class="navbar-text pull-right"><span class="version-text">v3.0.1</span></p>-->
</div>
</div>
</div>
<div class="container-wrapper">
<div class="left-menu-wrapper">
<div class="left-menu">
<h3><a href="ml-guide.html">MLlib: Main Guide</a></h3>
<ul>
<li>
<a href="ml-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="ml-datasource.html">
Data sources
</a>
</li>
<li>
<a href="ml-pipeline.html">
Pipelines
</a>
</li>
<li>
<a href="ml-features.html">
Extracting, transforming and selecting features
</a>
</li>
<li>
<a href="ml-classification-regression.html">
Classification and Regression
</a>
</li>
<li>
<a href="ml-clustering.html">
Clustering
</a>
</li>
<li>
<a href="ml-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="ml-frequent-pattern-mining.html">
Frequent Pattern Mining
</a>
</li>
<li>
<a href="ml-tuning.html">
Model selection and tuning
</a>
</li>
<li>
<a href="ml-advanced.html">
Advanced topics
</a>
</li>
</ul>
<h3><a href="mllib-guide.html">MLlib: RDD-based API Guide</a></h3>
<ul>
<li>
<a href="mllib-data-types.html">
Data types
</a>
</li>
<li>
<a href="mllib-statistics.html">
Basic statistics
</a>
</li>
<li>
<a href="mllib-classification-regression.html">
Classification and regression
</a>
</li>
<li>
<a href="mllib-collaborative-filtering.html">
Collaborative filtering
</a>
</li>
<li>
<a href="mllib-clustering.html">
Clustering
</a>
</li>
<li>
<a href="mllib-dimensionality-reduction.html">
Dimensionality reduction
</a>
</li>
<li>
<a href="mllib-feature-extraction.html">
Feature extraction and transformation
</a>
</li>
<li>
<a href="mllib-frequent-pattern-mining.html">
Frequent pattern mining
</a>
</li>
<li>
<a href="mllib-evaluation-metrics.html">
Evaluation metrics
</a>
</li>
<li>
<a href="mllib-pmml-model-export.html">
PMML model export
</a>
</li>
<li>
<a href="mllib-optimization.html">
<b>Optimization (developer)</b>
</a>
</li>
<ul>
<li>
<a href="mllib-optimization.html#stochastic-gradient-descent-sgd">
stochastic gradient descent
</a>
</li>
<li>
<a href="mllib-optimization.html#limited-memory-bfgs-l-bfgs">
limited-memory BFGS (L-BFGS)
</a>
</li>
</ul>
</ul>
</div>
</div>
<input id="nav-trigger" class="nav-trigger" checked type="checkbox">
<label for="nav-trigger"></label>
<div class="content-with-sidebar" id="content">
<h1 class="title">Optimization - RDD-based API</h1>
<ul id="markdown-toc">
<li><a href="#mathematical-description" id="markdown-toc-mathematical-description">Mathematical description</a> <ul>
<li><a href="#gradient-descent" id="markdown-toc-gradient-descent">Gradient descent</a></li>
<li><a href="#stochastic-gradient-descent-sgd" id="markdown-toc-stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</a></li>
<li><a href="#update-schemes-for-distributed-sgd" id="markdown-toc-update-schemes-for-distributed-sgd">Update schemes for distributed SGD</a></li>
<li><a href="#limited-memory-bfgs-l-bfgs" id="markdown-toc-limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</a></li>
<li><a href="#choosing-an-optimization-method" id="markdown-toc-choosing-an-optimization-method">Choosing an Optimization Method</a></li>
</ul>
</li>
<li><a href="#implementation-in-mllib" id="markdown-toc-implementation-in-mllib">Implementation in MLlib</a> <ul>
<li><a href="#gradient-descent-and-stochastic-gradient-descent" id="markdown-toc-gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</a></li>
<li><a href="#l-bfgs" id="markdown-toc-l-bfgs">L-BFGS</a></li>
</ul>
</li>
<li><a href="#developers-notes" id="markdown-toc-developers-notes">Developer&#8217;s notes</a></li>
</ul>
<p><code class="highlighter-rouge">\[
\newcommand{\R}{\mathbb{R}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\x}{\mathbf{x}}
\newcommand{\y}{\mathbf{y}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\av}{\mathbf{\alpha}}
\newcommand{\bv}{\mathbf{b}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\id}{\mathbf{I}}
\newcommand{\ind}{\mathbf{1}}
\newcommand{\0}{\mathbf{0}}
\newcommand{\unit}{\mathbf{e}}
\newcommand{\one}{\mathbf{1}}
\newcommand{\zero}{\mathbf{0}}
\]</code></p>
<h2 id="mathematical-description">Mathematical description</h2>
<h3 id="gradient-descent">Gradient descent</h3>
<p>The simplest method to solve optimization problems of the form <code class="highlighter-rouge">$\min_{\wv \in\R^d} \; f(\wv)$</code>
is <a href="http://en.wikipedia.org/wiki/Gradient_descent">gradient descent</a>.
Such first-order optimization methods (including gradient descent and stochastic variants
thereof) are well-suited for large-scale and distributed computation.</p>
<p>Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in
the direction of steepest descent, which is the negative of the derivative (called the
<a href="http://en.wikipedia.org/wiki/Gradient">gradient</a>) of the function at the current point, i.e., at
the current parameter value.
If the objective function <code class="highlighter-rouge">$f$</code> is not differentiable at all arguments, but still convex, then a
<em>sub-gradient</em>
is the natural generalization of the gradient, and assumes the role of the step direction.
In any case, computing a gradient or sub-gradient of <code class="highlighter-rouge">$f$</code> is expensive &#8212; it requires a full
pass through the complete dataset, in order to compute the contributions from all loss terms.</p>
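<p>For intuition, here is a minimal sketch of full gradient descent on a tiny in-memory least-squares problem. This is plain Scala for illustration, not the <code class="highlighter-rouge">spark.mllib</code> API; note how every iteration makes a full pass over the dataset to accumulate the gradient contributions from all loss terms.</p>
<div class="highlight"><pre class="codehilite"><code>// Full gradient descent for f(w) = (1/n) * sum_i (w.x_i - y_i)^2 / 2.
// Each iteration touches every datapoint, which is what makes the
// exact gradient expensive on large datasets.
val data = Seq((Array(1.0, 2.0), 1.0), (Array(2.0, 1.0), 0.0))
var w = Array(0.0, 0.0)
val gamma = 0.1 // fixed step size, for simplicity
for (_ &lt;- 1 to 100) {
  val grad = data.map { case (x, y) =&gt;
    val err = x.zip(w).map { case (xi, wi) =&gt; xi * wi }.sum - y
    x.map(_ * err) // gradient of one loss term
  }.reduce((a, b) =&gt; a.zip(b).map { case (ai, bi) =&gt; ai + bi })
   .map(_ / data.size)
  w = w.zip(grad).map { case (wi, gi) =&gt; wi - gamma * gi } // steepest-descent step
}
</code></pre></div>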
<h3 id="stochastic-gradient-descent-sgd">Stochastic gradient descent (SGD)</h3>
<p>Optimization problems whose objective function <code class="highlighter-rouge">$f$</code> is written as a sum are particularly
suitable to be solved using <em>stochastic gradient descent (SGD)</em>.
In our case, for the optimization formulations commonly used in <a href="mllib-classification-regression.html">supervised machine learning</a>,
<code class="highlighter-rouge">\begin{equation}
f(\wv) :=
\lambda\, R(\wv) +
\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)
\label{eq:regPrimal}
\ .
\end{equation}</code>
this is especially natural, because the loss is written as an average of the individual losses
coming from each datapoint.</p>
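<p>As a concrete instance of <code class="highlighter-rouge">$\eqref{eq:regPrimal}$</code>, the sketch below (plain Scala, with hypothetical helper names) evaluates the objective for the logistic loss with the L2 regularizer <code class="highlighter-rouge">$R(\wv) = \frac12 \|\wv\|^2$</code>: a regularization term plus an average over the per-datapoint losses.</p>
<div class="highlight"><pre class="codehilite"><code>def dot(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) =&gt; x * y }.sum

// One loss term L(w; x_i, y_i): the logistic loss, with labels in {-1, +1}.
def logisticLoss(w: Array[Double], x: Array[Double], y: Double): Double =
  math.log1p(math.exp(-y * dot(w, x)))

// f(w) = lambda * R(w) + (1/n) * sum_i L(w; x_i, y_i)
def objective(w: Array[Double], data: Seq[(Array[Double], Double)], lambda: Double): Double =
  lambda * 0.5 * dot(w, w) + data.map { case (x, y) =&gt; logisticLoss(w, x, y) }.sum / data.size
</code></pre></div>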
<p>A stochastic subgradient is a randomized choice of a vector, such that in expectation, we obtain
a true subgradient of the original objective function.
Picking one datapoint <code class="highlighter-rouge">$i\in[1..n]$</code> uniformly at random, we obtain a stochastic subgradient of
<code class="highlighter-rouge">$\eqref{eq:regPrimal}$</code>, with respect to <code class="highlighter-rouge">$\wv$</code> as follows:
<code class="highlighter-rouge">\[
f'_{\wv,i} := L'_{\wv,i} + \lambda\, R'_\wv \ ,
\]</code>
where <code class="highlighter-rouge">$L'_{\wv,i} \in \R^d$</code> is a subgradient of the part of the loss function determined by the
<code class="highlighter-rouge">$i$</code>-th datapoint, that is <code class="highlighter-rouge">$L'_{\wv,i} \in \frac{\partial}{\partial \wv} L(\wv;\x_i,y_i)$</code>.
Furthermore, <code class="highlighter-rouge">$R'_\wv$</code> is a subgradient of the regularizer <code class="highlighter-rouge">$R(\wv)$</code>, i.e. <code class="highlighter-rouge">$R'_\wv \in
\frac{\partial}{\partial \wv} R(\wv)$</code>. The term <code class="highlighter-rouge">$R'_\wv$</code> does not depend on which random
datapoint is picked.
Clearly, in expectation over the random choice of <code class="highlighter-rouge">$i\in[1..n]$</code>, we have that <code class="highlighter-rouge">$f'_{\wv,i}$</code> is
a subgradient of the original objective <code class="highlighter-rouge">$f$</code>, meaning that <code class="highlighter-rouge">$\E\left[f'_{\wv,i}\right] \in
\frac{\partial}{\partial \wv} f(\wv)$</code>.</p>
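<p>To make the &#8220;in expectation&#8221; step explicit: averaging over the uniform choice of <code class="highlighter-rouge">$i$</code> gives
<code class="highlighter-rouge">\[
\E\left[f'_{\wv,i}\right] = \frac1n \sum_{i=1}^n \left( L'_{\wv,i} + \lambda\, R'_\wv \right)
= \frac1n \sum_{i=1}^n L'_{\wv,i} + \lambda\, R'_\wv \ ,
\]</code>
which is a subgradient of <code class="highlighter-rouge">$\eqref{eq:regPrimal}$</code>, since the first summand is a subgradient of the averaged loss and the second is a subgradient of <code class="highlighter-rouge">$\lambda\, R(\wv)$</code>.</p>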
<p>Running SGD now simply becomes walking in the direction of the negative stochastic subgradient
<code class="highlighter-rouge">$f'_{\wv,i}$</code>, that is
<code class="highlighter-rouge">\begin{equation}\label{eq:SGDupdate}
\wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv,i} \ .
\end{equation}</code>
<strong>Step-size.</strong>
The parameter <code class="highlighter-rouge">$\gamma$</code> is the step-size, which in the default implementation is chosen
to decrease with the square root of the iteration counter, i.e. <code class="highlighter-rouge">$\gamma := \frac{s}{\sqrt{t}}$</code>
in the <code class="highlighter-rouge">$t$</code>-th iteration, with the input parameter <code class="highlighter-rouge">$s=$ stepSize</code>. Note that selecting the best
step-size for SGD methods can often be delicate in practice and is a topic of active research.</p>
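<p>The update <code class="highlighter-rouge">$\eqref{eq:SGDupdate}$</code> with the default decreasing step-size schedule amounts to just a few lines. The sketch below is plain Scala for illustration, where <code class="highlighter-rouge">subgradient</code> stands in for <code class="highlighter-rouge">$f'_{\wv,i}$</code> at a uniformly sampled datapoint.</p>
<div class="highlight"><pre class="codehilite"><code>// One SGD iteration: w(t+1) = w(t) - gamma_t * f'_{w,i},
// with gamma_t = stepSize / sqrt(t) as in the default implementation.
def sgdStep(w: Array[Double], subgradient: Array[Double],
            stepSize: Double, t: Int): Array[Double] = {
  val gamma = stepSize / math.sqrt(t)
  w.zip(subgradient).map { case (wi, gi) =&gt; wi - gamma * gi }
}
</code></pre></div>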
<p><strong>Gradients.</strong>
A table of (sub)gradients of the machine learning methods implemented in <code class="highlighter-rouge">spark.mllib</code> is available in
the <a href="mllib-classification-regression.html">classification and regression</a> section.</p>
<p><strong>Proximal Updates.</strong>
As an alternative to simply using the subgradient <code class="highlighter-rouge">$R'_\wv$</code> of the regularizer in the step
direction, an improved update can be obtained in some cases by using the proximal operator
instead.
For the L1 regularizer, the proximal operator is given by soft thresholding, as implemented in
<a href="api/scala/org/apache/spark/mllib/optimization/L1Updater.html">L1Updater</a>.</p>
<h3 id="update-schemes-for-distributed-sgd">Update schemes for distributed SGD</h3>
<p>The SGD implementation in
<a href="api/scala/org/apache/spark/mllib/optimization/GradientDescent.html">GradientDescent</a> uses
a simple (distributed) sampling of the data examples.
We recall that the loss part of the optimization problem <code class="highlighter-rouge">$\eqref{eq:regPrimal}$</code> is
<code class="highlighter-rouge">$\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)$</code>, and therefore <code class="highlighter-rouge">$\frac1n \sum_{i=1}^n L'_{\wv,i}$</code> would
be the true (sub)gradient.
Since this would require access to the full data set, the parameter <code class="highlighter-rouge">miniBatchFraction</code> specifies
which fraction of the full data to use instead.
The average of the gradients over this subset, i.e.
<code class="highlighter-rouge">\[
\frac1{|S|} \sum_{i\in S} L'_{\wv,i} \ ,
\]</code>
is a stochastic gradient. Here <code class="highlighter-rouge">$S$</code> is the sampled subset of size <code class="highlighter-rouge">$|S|=$ miniBatchFraction
$\cdot n$</code>.</p>
<p>In each iteration, the sampling over the distributed dataset
(<a href="rdd-programming-guide.html#resilient-distributed-datasets-rdds">RDD</a>), as well as the
computation of the sum of the partial results from each worker machine, is performed by
standard Spark routines.</p>
<p>If the fraction of points <code class="highlighter-rouge">miniBatchFraction</code> is set to 1 (default), then the resulting step in
each iteration is exact (sub)gradient descent. In this case, there is no randomness and no
variance in the step directions used.
On the other extreme, if <code class="highlighter-rouge">miniBatchFraction</code> is chosen so small that only a single point
is sampled, i.e. <code class="highlighter-rouge">$|S|=$ miniBatchFraction $\cdot n = 1$</code>, then the algorithm is equivalent to
standard SGD. In that case, the step direction depends on the uniformly random sampling of the
point.</p>
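<p>Conceptually, one mini-batch (sub)gradient evaluation looks like the following sketch. It assumes a dataset of <code class="highlighter-rouge">(label, features)</code> pairs in an RDD, and uses a least-squares per-point gradient as a stand-in for any of the <code class="highlighter-rouge">Gradient</code> classes; the helper names are hypothetical, not the <code class="highlighter-rouge">GradientDescent</code> internals.</p>
<div class="highlight"><pre class="codehilite"><code>import org.apache.spark.rdd.RDD

// Hypothetical per-point (sub)gradient L'_{w,i}; least-squares, as an example.
def pointGradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
  val err = x.zip(w).map { case (xi, wi) =&gt; xi * wi }.sum - y
  x.map(_ * err)
}

// (1/|S|) * sum_{i in S} L'_{w,i} over a sampled subset S of the RDD.
def miniBatchGradient(
    data: RDD[(Double, Array[Double])],
    w: Array[Double],
    miniBatchFraction: Double,
    seed: Long): Array[Double] = {
  val batch = data.sample(withReplacement = false, miniBatchFraction, seed)
  val (sum, count) = batch
    .map { case (y, x) =&gt; (pointGradient(w, x, y), 1L) }
    .reduce { case ((g1, c1), (g2, c2)) =&gt;
      (g1.zip(g2).map { case (a, b) =&gt; a + b }, c1 + c2)
    }
  sum.map(_ / count)
}
</code></pre></div>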
<h3 id="limited-memory-bfgs-l-bfgs">Limited-memory BFGS (L-BFGS)</h3>
<p><a href="http://en.wikipedia.org/wiki/Limited-memory_BFGS">L-BFGS</a> is an optimization
algorithm in the family of quasi-Newton methods to solve the optimization problems of the form
<code class="highlighter-rouge">$\min_{\wv \in\R^d} \; f(\wv)$</code>. The L-BFGS method approximates the objective function locally as a
quadratic without evaluating the second partial derivatives of the objective function to construct the
Hessian matrix. The Hessian matrix is approximated by previous gradient evaluations, so there is no
vertical scalability issue (the number of training features) when computing the Hessian matrix
explicitly in Newton&#8217;s method. As a result, L-BFGS often achieves more rapid convergence compared with
other first-order optimization.</p>
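<p>For intuition about how previous gradient evaluations stand in for the Hessian, here is a sketch of the classic L-BFGS two-loop recursion. This is plain Scala for illustration only; <code class="highlighter-rouge">spark.mllib</code> itself delegates the update to Breeze&#8217;s LBFGS. It maps a gradient and a short history of iterate/gradient differences to an approximate Newton direction without ever forming a <code class="highlighter-rouge">$d \times d$</code> matrix.</p>
<div class="highlight"><pre class="codehilite"><code>def dot(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) =&gt; x * y }.sum

// history holds (s_k, y_k) pairs, oldest first, where
// s_k = w_{k+1} - w_k and y_k = gradient_{k+1} - gradient_k.
def twoLoop(g: Array[Double], history: Seq[(Array[Double], Array[Double])]): Array[Double] = {
  require(history.nonEmpty, "need at least one correction pair")
  var q = g.clone()
  // First loop: newest to oldest.
  val alphas = history.reverse.map { case (s, y) =&gt;
    val alpha = dot(s, q) / dot(y, s)
    q = q.zip(y).map { case (qi, yi) =&gt; qi - alpha * yi }
    alpha
  }
  // Initial Hessian approximation: a scaled identity built from the newest pair.
  val (sLast, yLast) = history.last
  var r = q.map(_ * (dot(sLast, yLast) / dot(yLast, yLast)))
  // Second loop: oldest to newest.
  history.zip(alphas.reverse).foreach { case ((s, y), alpha) =&gt;
    val beta = dot(y, r) / dot(y, s)
    r = r.zip(s).map { case (ri, si) =&gt; ri + (alpha - beta) * si }
  }
  r // approximates H^{-1} g; the step direction is -r
}
</code></pre></div>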
<h3 id="choosing-an-optimization-method">Choosing an Optimization Method</h3>
<p><a href="mllib-linear-methods.html">Linear methods</a> use optimization internally, and some linear methods in <code class="highlighter-rouge">spark.mllib</code> support both SGD and L-BFGS.
Different optimization methods can have different convergence guarantees depending on the properties of the objective function, and we cannot cover the literature here.
In general, when L-BFGS is available, we recommend using it instead of SGD since L-BFGS tends to converge faster (in fewer iterations).</p>
<h2 id="implementation-in-mllib">Implementation in MLlib</h2>
<h3 id="gradient-descent-and-stochastic-gradient-descent">Gradient descent and stochastic gradient descent</h3>
<p>Gradient descent methods, including stochastic subgradient descent (SGD), are
included as a low-level primitive in <code class="highlighter-rouge">MLlib</code>, upon which various ML algorithms
are developed; see the
<a href="mllib-linear-methods.html">linear methods</a>
section for an example.</p>
<p>The SGD class
<a href="api/scala/org/apache/spark/mllib/optimization/GradientDescent.html">GradientDescent</a>
takes the following parameters; a short usage sketch follows the list:</p>
<ul>
<li><code class="highlighter-rouge">Gradient</code> is a class that computes the stochastic gradient of the function
being optimized, i.e., with respect to a single training example, at the
current parameter value. MLlib includes gradient classes for common loss
functions, e.g., hinge, logistic, least-squares. The gradient class takes as
input a training example, its label, and the current parameter value.</li>
<li><code class="highlighter-rouge">Updater</code> is a class that performs the actual gradient descent step, i.e.
updating the weights in each iteration, for a given gradient of the loss part.
The updater is also responsible to perform the update from the regularization
part. MLlib includes updaters for cases without regularization, as well as
L1 and L2 regularizers.</li>
<li><code class="highlighter-rouge">stepSize</code> is a scalar value denoting the initial step size for gradient
descent. All updaters in MLlib use a step size at the t-th step equal to
<code class="highlighter-rouge">stepSize $/ \sqrt{t}$</code>.</li>
<li><code class="highlighter-rouge">numIterations</code> is the number of iterations to run.</li>
<li><code class="highlighter-rouge">regParam</code> is the regularization parameter when using L1 or L2 regularization.</li>
<li><code class="highlighter-rouge">miniBatchFraction</code> is the fraction of the total data that is sampled in
each iteration, to compute the gradient direction.
<ul>
<li>Sampling still requires a pass over the entire RDD, so decreasing <code class="highlighter-rouge">miniBatchFraction</code> may not speed up optimization much. Users will see the greatest speedup when the gradient is expensive to compute, since only the chosen samples are used to compute the gradient.</li>
</ul>
</li>
</ul>
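<p>Putting these parameters together, here is a minimal usage sketch. It assumes a SparkContext <code class="highlighter-rouge">sc</code>, as in the other examples in this guide, and the <code class="highlighter-rouge">runMiniBatchSGD</code> overload without a convergence tolerance; check the <a href="api/scala/org/apache/spark/mllib/optimization/GradientDescent.html">GradientDescent Scala docs</a> for the exact signature.</p>
<div class="highlight"><pre class="codehilite"><code>import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
import org.apache.spark.mllib.util.MLUtils

// (label, features) pairs, as expected by the optimizer.
val points = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
  .map(p =&gt; (p.label, p.features))
  .cache()
val numFeatures = points.first()._2.size

val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
  points,
  new LogisticGradient(),
  new SquaredL2Updater(),
  1.0,   // stepSize
  100,   // numIterations
  0.1,   // regParam
  1.0,   // miniBatchFraction (1.0 = exact (sub)gradient descent)
  Vectors.dense(new Array[Double](numFeatures)))
</code></pre></div>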
<h3 id="l-bfgs">L-BFGS</h3>
<p>L-BFGS is currently only a low-level optimization primitive in <code class="highlighter-rouge">MLlib</code>. If you want to use L-BFGS in various
ML algorithms such as Linear Regression and Logistic Regression, you have to pass the gradient of the objective
function and the updater into the optimizer yourself, instead of using training APIs like
<a href="api/scala/org/apache/spark/mllib/classification/LogisticRegressionWithSGD.html">LogisticRegressionWithSGD</a>.
See the example below. This will be addressed in the next release.</p>
<p>L1 regularization via
<a href="api/scala/org/apache/spark/mllib/optimization/L1Updater.html">L1Updater</a> will not work, since the
soft-thresholding logic in L1Updater is designed for gradient descent. See the developer&#8217;s notes below.</p>
<p>The L-BFGS method
<a href="api/scala/org/apache/spark/mllib/optimization/LBFGS.html">LBFGS.runLBFGS</a>
has the following parameters:</p>
<ul>
<li><code class="highlighter-rouge">Gradient</code> is a class that computes the gradient of the objective function
being optimized, i.e., with respect to a single training example, at the
current parameter value. MLlib includes gradient classes for common loss
functions, e.g., hinge, logistic, least-squares. The gradient class takes as
input a training example, its label, and the current parameter value.</li>
<li><code class="highlighter-rouge">Updater</code> is a class that computes the gradient and loss of objective function
of the regularization part for L-BFGS. MLlib includes updaters for cases without
regularization, as well as L2 regularizer.</li>
<li><code class="highlighter-rouge">numCorrections</code> is the number of corrections used in the L-BFGS update. 10 is
recommended.</li>
<li><code class="highlighter-rouge">maxNumIterations</code> is the maximal number of iterations that L-BFGS can be run.</li>
<li><code class="highlighter-rouge">regParam</code> is the regularization parameter when using regularization.</li>
<li><code class="highlighter-rouge">convergenceTol</code> controls how much relative change is still allowed when L-BFGS
is considered to converge. This must be nonnegative. Lower values are less tolerant and
therefore generally cause more iterations to be run. This value looks at both average
improvement and the norm of gradient inside <a href="https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala">Breeze LBFGS</a>.</li>
</ul>
<p>The <code class="highlighter-rouge">return</code> is a tuple containing two elements. The first element is a column matrix
containing weights for every feature, and the second element is an array containing
the loss computed for every iteration.</p>
<p>Here is an example that trains binary logistic regression with L2 regularization using
the L-BFGS optimizer.</p>
<div class="codetabs">
<div data-lang="scala">
<p>Refer to the <a href="api/scala/org/apache/spark/mllib/optimization/LBFGS.html"><code class="highlighter-rouge">LBFGS</code> Scala docs</a> and <a href="api/scala/org/apache/spark/mllib/optimization/SquaredL2Updater.html"><code class="highlighter-rouge">SquaredL2Updater</code> Scala docs</a> for details on the API.</p>
<div class="highlight"><pre class="codehilite"><code><span class="k">import</span> <span class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span>
<span class="k">import</span> <span class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span>
<span class="k">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span>
<span class="k">import</span> <span class="nn">org.apache.spark.mllib.optimization.</span><span class="o">{</span><span class="nc">LBFGS</span><span class="o">,</span> <span class="nc">LogisticGradient</span><span class="o">,</span> <span class="nc">SquaredL2Updater</span><span class="o">}</span>
<span class="k">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span>
<span class="k">val</span> <span class="nv">data</span> <span class="k">=</span> <span class="nv">MLUtils</span><span class="o">.</span><span class="py">loadLibSVMFile</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="s">"data/mllib/sample_libsvm_data.txt"</span><span class="o">)</span>
<span class="k">val</span> <span class="nv">numFeatures</span> <span class="k">=</span> <span class="nv">data</span><span class="o">.</span><span class="py">take</span><span class="o">(</span><span class="mi">1</span><span class="o">)(</span><span class="mi">0</span><span class="o">).</span><span class="py">features</span><span class="o">.</span><span class="py">size</span>
<span class="c1">// Split data into training (60%) and test (40%).</span>
<span class="k">val</span> <span class="nv">splits</span> <span class="k">=</span> <span class="nv">data</span><span class="o">.</span><span class="py">randomSplit</span><span class="o">(</span><span class="nc">Array</span><span class="o">(</span><span class="mf">0.6</span><span class="o">,</span> <span class="mf">0.4</span><span class="o">),</span> <span class="n">seed</span> <span class="k">=</span> <span class="mi">11L</span><span class="o">)</span>
<span class="c1">// Append 1 into the training data as intercept.</span>
<span class="k">val</span> <span class="nv">training</span> <span class="k">=</span> <span class="nf">splits</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="py">map</span><span class="o">(</span><span class="n">x</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="nv">x</span><span class="o">.</span><span class="py">label</span><span class="o">,</span> <span class="nv">MLUtils</span><span class="o">.</span><span class="py">appendBias</span><span class="o">(</span><span class="nv">x</span><span class="o">.</span><span class="py">features</span><span class="o">))).</span><span class="py">cache</span><span class="o">()</span>
<span class="k">val</span> <span class="nv">test</span> <span class="k">=</span> <span class="nf">splits</span><span class="o">(</span><span class="mi">1</span><span class="o">)</span>
<span class="c1">// Run training algorithm to build the model</span>
<span class="k">val</span> <span class="nv">numCorrections</span> <span class="k">=</span> <span class="mi">10</span>
<span class="k">val</span> <span class="nv">convergenceTol</span> <span class="k">=</span> <span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span>
<span class="k">val</span> <span class="nv">maxNumIterations</span> <span class="k">=</span> <span class="mi">20</span>
<span class="k">val</span> <span class="nv">regParam</span> <span class="k">=</span> <span class="mf">0.1</span>
<span class="k">val</span> <span class="nv">initialWeightsWithIntercept</span> <span class="k">=</span> <span class="nv">Vectors</span><span class="o">.</span><span class="py">dense</span><span class="o">(</span><span class="k">new</span> <span class="nc">Array</span><span class="o">[</span><span class="kt">Double</span><span class="o">](</span><span class="n">numFeatures</span> <span class="o">+</span> <span class="mi">1</span><span class="o">))</span>
<span class="nf">val</span> <span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">,</span> <span class="n">loss</span><span class="o">)</span> <span class="k">=</span> <span class="nv">LBFGS</span><span class="o">.</span><span class="py">runLBFGS</span><span class="o">(</span>
<span class="n">training</span><span class="o">,</span>
<span class="k">new</span> <span class="nc">LogisticGradient</span><span class="o">(),</span>
<span class="k">new</span> <span class="nc">SquaredL2Updater</span><span class="o">(),</span>
<span class="n">numCorrections</span><span class="o">,</span>
<span class="n">convergenceTol</span><span class="o">,</span>
<span class="n">maxNumIterations</span><span class="o">,</span>
<span class="n">regParam</span><span class="o">,</span>
<span class="n">initialWeightsWithIntercept</span><span class="o">)</span>
<span class="k">val</span> <span class="nv">model</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LogisticRegressionModel</span><span class="o">(</span>
<span class="nv">Vectors</span><span class="o">.</span><span class="py">dense</span><span class="o">(</span><span class="nv">weightsWithIntercept</span><span class="o">.</span><span class="py">toArray</span><span class="o">.</span><span class="py">slice</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="nv">weightsWithIntercept</span><span class="o">.</span><span class="py">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">)),</span>
<span class="nf">weightsWithIntercept</span><span class="o">(</span><span class="nv">weightsWithIntercept</span><span class="o">.</span><span class="py">size</span> <span class="o">-</span> <span class="mi">1</span><span class="o">))</span>
<span class="c1">// Clear the default threshold.</span>
<span class="nv">model</span><span class="o">.</span><span class="py">clearThreshold</span><span class="o">()</span>
<span class="c1">// Compute raw scores on the test set.</span>
<span class="k">val</span> <span class="nv">scoreAndLabels</span> <span class="k">=</span> <span class="nv">test</span><span class="o">.</span><span class="py">map</span> <span class="o">{</span> <span class="n">point</span> <span class="k">=&gt;</span>
<span class="k">val</span> <span class="nv">score</span> <span class="k">=</span> <span class="nv">model</span><span class="o">.</span><span class="py">predict</span><span class="o">(</span><span class="nv">point</span><span class="o">.</span><span class="py">features</span><span class="o">)</span>
<span class="o">(</span><span class="n">score</span><span class="o">,</span> <span class="nv">point</span><span class="o">.</span><span class="py">label</span><span class="o">)</span>
<span class="o">}</span>
<span class="c1">// Get evaluation metrics.</span>
<span class="k">val</span> <span class="nv">metrics</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">BinaryClassificationMetrics</span><span class="o">(</span><span class="n">scoreAndLabels</span><span class="o">)</span>
<span class="k">val</span> <span class="nv">auROC</span> <span class="k">=</span> <span class="nv">metrics</span><span class="o">.</span><span class="py">areaUnderROC</span><span class="o">()</span>
<span class="nf">println</span><span class="o">(</span><span class="s">"Loss of each step in training process"</span><span class="o">)</span>
<span class="nv">loss</span><span class="o">.</span><span class="py">foreach</span><span class="o">(</span><span class="n">println</span><span class="o">)</span>
<span class="nf">println</span><span class="o">(</span><span class="n">s</span><span class="s">"Area under ROC = $auROC"</span><span class="o">)</span></code></pre></div>
<div><small>Find full example code at "examples/src/main/scala/org/apache/spark/examples/mllib/LBFGSExample.scala" in the Spark repo.</small></div>
</div>
<div data-lang="java">
<p>Refer to the <a href="api/java/org/apache/spark/mllib/optimization/LBFGS.html"><code class="highlighter-rouge">LBFGS</code> Java docs</a> and <a href="api/java/org/apache/spark/mllib/optimization/SquaredL2Updater.html"><code class="highlighter-rouge">SquaredL2Updater</code> Java docs</a> for details on the API.</p>
<div class="highlight"><pre class="codehilite"><code><span class="kn">import</span> <span class="nn">java.util.Arrays</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">scala.Tuple2</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.api.java.*</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.classification.LogisticRegressionModel</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.evaluation.BinaryClassificationMetrics</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vector</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.linalg.Vectors</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.optimization.*</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.regression.LabeledPoint</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.mllib.util.MLUtils</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.SparkConf</span><span class="o">;</span>
<span class="kn">import</span> <span class="nn">org.apache.spark.SparkContext</span><span class="o">;</span>
<span class="nc">String</span> <span class="n">path</span> <span class="o">=</span> <span class="s">"data/mllib/sample_libsvm_data.txt"</span><span class="o">;</span>
<span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">LabeledPoint</span><span class="o">&gt;</span> <span class="n">data</span> <span class="o">=</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="na">loadLibSVMFile</span><span class="o">(</span><span class="n">sc</span><span class="o">,</span> <span class="n">path</span><span class="o">).</span><span class="na">toJavaRDD</span><span class="o">();</span>
<span class="kt">int</span> <span class="n">numFeatures</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">take</span><span class="o">(</span><span class="mi">1</span><span class="o">).</span><span class="na">get</span><span class="o">(</span><span class="mi">0</span><span class="o">).</span><span class="na">features</span><span class="o">().</span><span class="na">size</span><span class="o">();</span>
<span class="c1">// Split initial RDD into two... [60% training data, 40% testing data].</span>
<span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">LabeledPoint</span><span class="o">&gt;</span> <span class="n">trainingInit</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">sample</span><span class="o">(</span><span class="kc">false</span><span class="o">,</span> <span class="mf">0.6</span><span class="o">,</span> <span class="mi">11L</span><span class="o">);</span>
<span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">LabeledPoint</span><span class="o">&gt;</span> <span class="n">test</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">subtract</span><span class="o">(</span><span class="n">trainingInit</span><span class="o">);</span>
<span class="c1">// Append 1 into the training data as intercept.</span>
<span class="nc">JavaPairRDD</span><span class="o">&lt;</span><span class="nc">Object</span><span class="o">,</span> <span class="nc">Vector</span><span class="o">&gt;</span> <span class="n">training</span> <span class="o">=</span> <span class="n">data</span><span class="o">.</span><span class="na">mapToPair</span><span class="o">(</span><span class="n">p</span> <span class="o">-&gt;</span>
<span class="k">new</span> <span class="nc">Tuple2</span><span class="o">&lt;&gt;(</span><span class="n">p</span><span class="o">.</span><span class="na">label</span><span class="o">(),</span> <span class="nc">MLUtils</span><span class="o">.</span><span class="na">appendBias</span><span class="o">(</span><span class="n">p</span><span class="o">.</span><span class="na">features</span><span class="o">())));</span>
<span class="n">training</span><span class="o">.</span><span class="na">cache</span><span class="o">();</span>
<span class="c1">// Run training algorithm to build the model.</span>
<span class="kt">int</span> <span class="n">numCorrections</span> <span class="o">=</span> <span class="mi">10</span><span class="o">;</span>
<span class="kt">double</span> <span class="n">convergenceTol</span> <span class="o">=</span> <span class="mi">1</span><span class="n">e</span><span class="o">-</span><span class="mi">4</span><span class="o">;</span>
<span class="kt">int</span> <span class="n">maxNumIterations</span> <span class="o">=</span> <span class="mi">20</span><span class="o">;</span>
<span class="kt">double</span> <span class="n">regParam</span> <span class="o">=</span> <span class="mf">0.1</span><span class="o">;</span>
<span class="nc">Vector</span> <span class="n">initialWeightsWithIntercept</span> <span class="o">=</span> <span class="nc">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="k">new</span> <span class="kt">double</span><span class="o">[</span><span class="n">numFeatures</span> <span class="o">+</span> <span class="mi">1</span><span class="o">]);</span>
<span class="nc">Tuple2</span><span class="o">&lt;</span><span class="nc">Vector</span><span class="o">,</span> <span class="kt">double</span><span class="o">[]&gt;</span> <span class="n">result</span> <span class="o">=</span> <span class="no">LBFGS</span><span class="o">.</span><span class="na">runLBFGS</span><span class="o">(</span>
<span class="n">training</span><span class="o">.</span><span class="na">rdd</span><span class="o">(),</span>
<span class="k">new</span> <span class="nf">LogisticGradient</span><span class="o">(),</span>
<span class="k">new</span> <span class="nf">SquaredL2Updater</span><span class="o">(),</span>
<span class="n">numCorrections</span><span class="o">,</span>
<span class="n">convergenceTol</span><span class="o">,</span>
<span class="n">maxNumIterations</span><span class="o">,</span>
<span class="n">regParam</span><span class="o">,</span>
<span class="n">initialWeightsWithIntercept</span><span class="o">);</span>
<span class="nc">Vector</span> <span class="n">weightsWithIntercept</span> <span class="o">=</span> <span class="n">result</span><span class="o">.</span><span class="na">_1</span><span class="o">();</span>
<span class="kt">double</span><span class="o">[]</span> <span class="n">loss</span> <span class="o">=</span> <span class="n">result</span><span class="o">.</span><span class="na">_2</span><span class="o">();</span>
<span class="nc">LogisticRegressionModel</span> <span class="n">model</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">LogisticRegressionModel</span><span class="o">(</span>
<span class="nc">Vectors</span><span class="o">.</span><span class="na">dense</span><span class="o">(</span><span class="nc">Arrays</span><span class="o">.</span><span class="na">copyOf</span><span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">toArray</span><span class="o">(),</span> <span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">size</span><span class="o">()</span> <span class="o">-</span> <span class="mi">1</span><span class="o">)),</span>
<span class="o">(</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">toArray</span><span class="o">())[</span><span class="n">weightsWithIntercept</span><span class="o">.</span><span class="na">size</span><span class="o">()</span> <span class="o">-</span> <span class="mi">1</span><span class="o">]);</span>
<span class="c1">// Clear the default threshold.</span>
<span class="n">model</span><span class="o">.</span><span class="na">clearThreshold</span><span class="o">();</span>
<span class="c1">// Compute raw scores on the test set.</span>
<span class="nc">JavaPairRDD</span><span class="o">&lt;</span><span class="nc">Object</span><span class="o">,</span> <span class="nc">Object</span><span class="o">&gt;</span> <span class="n">scoreAndLabels</span> <span class="o">=</span> <span class="n">test</span><span class="o">.</span><span class="na">mapToPair</span><span class="o">(</span><span class="n">p</span> <span class="o">-&gt;</span>
<span class="k">new</span> <span class="nc">Tuple2</span><span class="o">&lt;&gt;(</span><span class="n">model</span><span class="o">.</span><span class="na">predict</span><span class="o">(</span><span class="n">p</span><span class="o">.</span><span class="na">features</span><span class="o">()),</span> <span class="n">p</span><span class="o">.</span><span class="na">label</span><span class="o">()));</span>
<span class="c1">// Get evaluation metrics.</span>
<span class="nc">BinaryClassificationMetrics</span> <span class="n">metrics</span> <span class="o">=</span>
<span class="k">new</span> <span class="nf">BinaryClassificationMetrics</span><span class="o">(</span><span class="n">scoreAndLabels</span><span class="o">.</span><span class="na">rdd</span><span class="o">());</span>
<span class="kt">double</span> <span class="n">auROC</span> <span class="o">=</span> <span class="n">metrics</span><span class="o">.</span><span class="na">areaUnderROC</span><span class="o">();</span>
<span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Loss of each step in training process"</span><span class="o">);</span>
<span class="k">for</span> <span class="o">(</span><span class="kt">double</span> <span class="n">l</span> <span class="o">:</span> <span class="n">loss</span><span class="o">)</span> <span class="o">{</span>
<span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">l</span><span class="o">);</span>
<span class="o">}</span>
<span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Area under ROC = "</span> <span class="o">+</span> <span class="n">auROC</span><span class="o">);</span></code></pre></div>
<div><small>Find full example code at "examples/src/main/java/org/apache/spark/examples/mllib/JavaLBFGSExample.java" in the Spark repo.</small></div>
</div>
</div>
<h2 id="developers-notes">Developer&#8217;s notes</h2>
<p>Since the Hessian is constructed approximately from previous gradient evaluations,
the objective function cannot be changed during the optimization process.
As a result, stochastic L-BFGS will not work naively by just using the mini-batch approach;
therefore, we do not provide it until it is better understood.</p>
<p><code class="highlighter-rouge">Updater</code> is a class originally designed for gradient decent which computes
the actual gradient descent step. However, we&#8217;re able to take the gradient and
loss of objective function of regularization for L-BFGS by ignoring the part of logic
only for gradient decent such as adaptive step size stuff. We will refactorize
this into regularizer to replace updater to separate the logic between
regularization and step update later.</p>
</div>
<!-- /container -->
</div>
<script src="js/vendor/jquery-3.4.1.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="js/vendor/anchor.min.js"></script>
<script src="js/main.js"></script>
<!-- MathJax Section -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
</script>
<script>
// Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
// We could use "//cdn.mathjax...", but that won't support "file://".
(function(d, script) {
script = d.createElement('script');
script.type = 'text/javascript';
script.async = true;
script.onload = function(){
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
processEscapes: true,
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
}
});
};
script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
'cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js' +
'?config=TeX-AMS-MML_HTMLorMML';
d.getElementsByTagName('head')[0].appendChild(script);
}(document));
</script>
</body>
</html>