MLlib is Spark's machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. At a high level, it provides tools such as:

* ML algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering
* Featurization: feature extraction, transformation, dimensionality reduction, and selection
* Pipelines: tools for constructing, evaluating, and tuning ML Pipelines
* Persistence: saving and loading algorithms, models, and Pipelines
* Utilities: linear algebra, statistics, data handling, etc.
The MLlib RDD-based API is now in maintenance mode.

As of Spark 2.0, the RDD-based APIs in the `spark.mllib` package have entered maintenance mode. The primary Machine Learning API for Spark is now the DataFrame-based API in the `spark.ml` package.
What are the implications?

* MLlib will still support the RDD-based API in `spark.mllib` with bug fixes.
* MLlib will not add new features to the RDD-based API.

Why is MLlib switching to the DataFrame-based API?

* DataFrames provide a more user-friendly API than RDDs. The many benefits of DataFrames include Spark Datasources, SQL/DataFrame queries, Tungsten and Catalyst optimizations, and uniform APIs across languages.
* The DataFrame-based API for MLlib provides a uniform API across ML algorithms and across multiple languages.
* DataFrames facilitate practical ML Pipelines, particularly feature transformations.
Dependencies

MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If native libraries[^1] are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.
Due to licensing issues with runtime proprietary binaries, we do not include `netlib-java`'s native proxies by default. To configure `netlib-java`/Breeze to use system optimised binaries, include `com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your project and read the netlib-java documentation for your platform's additional installation instructions.
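For example, with an sbt build the dependency might be declared as follows. This is a minimal sketch using the coordinates from the paragraph above; `pomOnly()` is needed because the `all` artifact is packaged as a POM rather than a jar:

{% highlight scala %}
// build.sbt -- pull in netlib-java's native proxies for all platforms.
// The "all" artifact aggregates the per-platform native bindings and is
// POM-packaged; adjust accordingly for Maven or Gradle builds.
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
{% endhighlight %}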
To use MLlib in Python, you will need NumPy version 1.4 or newer.
[^1]: To learn more about the benefits and background of system optimised natives, you may wish to watch Sam Halliday's ScalaX talk on High Performance Linear Algebra in Scala.
MLlib is under active development. The APIs marked `Experimental`/`DeveloperApi` may change in future releases, and the migration guide below will explain all changes between releases.
There were several breaking changes in Spark 2.0, which are outlined below.
Linear algebra classes for DataFrame-based APIs
Spark's linear algebra dependencies were moved to a new project, `mllib-local` (see SPARK-13944). As part of this change, the linear algebra classes were copied to a new package, `spark.ml.linalg`. The DataFrame-based APIs in `spark.ml` now depend on the `spark.ml.linalg` classes, leading to a few breaking changes, predominantly in various model classes (see SPARK-14810 for a full list).

Note: the RDD-based APIs in `spark.mllib` continue to depend on the previous package, `spark.mllib.linalg`.
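Concretely, code written against the DataFrame-based API now imports its vector and matrix types from the new package. A minimal sketch of the import change:

{% highlight scala %}
// Before Spark 2.0 (and still correct for the RDD-based API):
//   import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Spark 2.0+ DataFrame-based API:
import org.apache.spark.ml.linalg.{Vector, Vectors}

val v: Vector = Vectors.dense(1.0, 0.0, 3.0)
{% endhighlight %}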
Converting vectors and matrices
While most pipeline components support backward compatibility for loading, some existing DataFrames and pipelines from Spark versions prior to 2.0 that contain vector or matrix columns may need to be migrated to the new `spark.ml` vector and matrix types. Utilities for converting DataFrame columns from `spark.mllib.linalg` to `spark.ml.linalg` types (and vice versa) can be found in `spark.mllib.util.MLUtils`.
There are also utility methods available for converting single instances of vectors and matrices. Use the `asML` method on an `mllib.linalg.Vector`/`mllib.linalg.Matrix` to convert to `ml.linalg` types, and `mllib.linalg.Vectors.fromML`/`mllib.linalg.Matrices.fromML` to convert to `mllib.linalg` types.
{% highlight scala %}
import org.apache.spark.mllib.util.MLUtils

// convert DataFrame columns
val convertedVecDF = MLUtils.convertVectorColumnsToML(vecDF)
val convertedMatrixDF = MLUtils.convertMatrixColumnsToML(matrixDF)

// convert a single vector or matrix
val mlVec: org.apache.spark.ml.linalg.Vector = mllibVec.asML
val mlMat: org.apache.spark.ml.linalg.Matrix = mllibMat.asML
{% endhighlight %}
Refer to the `MLUtils` Scala docs for further detail.
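For the reverse direction, the `fromML` factory methods mentioned above work the same way. A minimal sketch, assuming `mlVec` and `mlMat` are the `ml.linalg` values produced in the previous example:

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Matrices, Vectors}

// convert ml.linalg types back to their mllib.linalg counterparts
val mllibVec2: org.apache.spark.mllib.linalg.Vector = Vectors.fromML(mlVec)
val mllibMat2: org.apache.spark.mllib.linalg.Matrix = Matrices.fromML(mlMat)
{% endhighlight %}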
{% highlight java %}
import org.apache.spark.mllib.util.MLUtils;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// convert DataFrame columns
Dataset<Row> convertedVecDF = MLUtils.convertVectorColumnsToML(vecDF);
Dataset<Row> convertedMatrixDF = MLUtils.convertMatrixColumnsToML(matrixDF);

// convert a single vector or matrix
org.apache.spark.ml.linalg.Vector mlVec = mllibVec.asML();
org.apache.spark.ml.linalg.Matrix mlMat = mllibMat.asML();
{% endhighlight %}
Refer to the `MLUtils` Java docs for further detail.
{% highlight python %}
from pyspark.mllib.util import MLUtils

# convert DataFrame columns
convertedVecDF = MLUtils.convertVectorColumnsToML(vecDF)
convertedMatrixDF = MLUtils.convertMatrixColumnsToML(matrixDF)

# convert a single vector or matrix
mlVec = mllibVec.asML()
mlMat = mllibMat.asML()
{% endhighlight %}
Refer to the `MLUtils` Python docs for further detail.
Deprecated methods removed
Several deprecated methods were removed in the `spark.mllib` and `spark.ml` packages:
* `setScoreCol` in `ml.evaluation.BinaryClassificationEvaluator`
* `weights` in `LinearRegression` and `LogisticRegression` in `spark.ml`
* `setMaxNumIterations` in `mllib.optimization.LBFGS` (marked as `DeveloperApi`)
* `treeReduce` and `treeAggregate` in `mllib.rdd.RDDFunctions` (these functions are available on `RDD`s directly, and were marked as `DeveloperApi`; see the sketch after this list)
* `defaultStategy` in `mllib.tree.configuration.Strategy`
* `build` in `mllib.tree.Node`
* several deprecated methods in `mllib.util.MLUtils`
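As noted above, `treeReduce` and `treeAggregate` are now called directly on the RDD rather than through `mllib.rdd.RDDFunctions`. A minimal sketch, assuming `sc` is an existing `SparkContext`:

{% highlight scala %}
val rdd = sc.parallelize(1 to 100)

// aggregate in a multi-level tree pattern; depth controls the number
// of aggregation levels (the default is 2)
val sum = rdd.treeAggregate(0)(_ + _, _ + _, depth = 2)
{% endhighlight %}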
A full list of breaking changes can be found at SPARK-14810.
Deprecations
Deprecations in the `spark.mllib` and `spark.ml` packages include:
* In `spark.ml.regression.LinearRegressionSummary`, the `model` field has been deprecated.
* In `spark.ml.regression.RandomForestRegressionModel` and `spark.ml.classification.RandomForestClassificationModel`, the `numTrees` parameter has been deprecated in favor of the `getNumTrees` method (see the sketch after this list).
* In `spark.ml.param.Params`, the `validateParams` method has been deprecated. All functionality in overridden methods has been moved to the corresponding `transformSchema`.
* In the `spark.mllib` package, `LinearRegressionWithSGD`, `LassoWithSGD`, `RidgeRegressionWithSGD` and `LogisticRegressionWithSGD` have been deprecated. We encourage users to use `spark.ml.regression.LinearRegression` and `spark.ml.classification.LogisticRegression`.
* In `spark.mllib.evaluation.MulticlassMetrics`, the parameters `precision`, `recall` and `fMeasure` have been deprecated in favor of `accuracy`.
* In `spark.ml.util.MLReader` and `spark.ml.util.MLWriter`, the `context` method has been deprecated in favor of `session`.
* In `spark.ml.feature.ChiSqSelectorModel`, the `setLabelCol` method has been deprecated since it was not used by `ChiSqSelectorModel`.
Changes of behavior
Changes of behavior in the `spark.mllib` and `spark.ml` packages include:
* `spark.mllib.classification.LogisticRegressionWithLBFGS` directly calls `spark.ml.classification.LogisticRegression` for binary classification now. This will introduce the following behavior changes for `spark.mllib.classification.LogisticRegressionWithLBFGS`:
    * The intercept will not be regularized when training binary classification models with an L1/L2 `Updater`.
    * If trained without regularization, training with or without feature scaling will return the same solution by the same convergence rate.
* In order to provide better and consistent results with `spark.ml.classification.LogisticRegression`, the default value of `convergenceTol` in `spark.mllib.classification.LogisticRegressionWithLBFGS` has been changed from 1E-4 to 1E-6.
* A bug in `PowerIterationClustering` was fixed, which will likely change its result.
* `LDA` using the `EM` optimizer will keep the last checkpoint by default, if checkpointing is being used.
* `Word2Vec` now respects sentence boundaries. Previously, it did not handle them correctly.
* `HashingTF` uses `MurmurHash3` as the default hash algorithm in both `spark.ml` and `spark.mllib` (see the sketch after this list).
* The `expectedType` argument for the PySpark `Param` was removed.
* Some default `Param` values, which were mismatched between pipelines in Scala and Python, have been changed.
* `QuantileDiscretizer` now uses `spark.sql.DataFrameStatFunctions.approxQuantile` to find splits (previously it used custom sampling logic); the output buckets will differ for the same input data and params.
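Where the pre-2.0 hashing must be reproduced (for example, to keep feature indices stable across an upgrade), the RDD-based `HashingTF` lets you select the algorithm. A minimal sketch, assuming the `setHashAlgorithm` setter on `spark.mllib.feature.HashingTF`:

{% highlight scala %}
import org.apache.spark.mllib.feature.HashingTF

// "murmur3" (MurmurHash3) is the new default; "native" restores the
// previous behavior based on Scala's native object hashCode
val hashingTF = new HashingTF().setHashAlgorithm("native")
{% endhighlight %}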
Earlier migration guides are archived on this page.