<!-- HTML header for doxygen 1.8.4-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.4"/>
<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
<title>MADlib: k-Means Clustering</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
$(document).ready(function() { searchBox.OnSelectItem(0); });
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
jax: ["input/TeX","output/HTML-CSS"]
});
</script><script src="../mathjax/MathJax.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
<!-- google analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-45382226-1', 'auto');
ga('send', 'pageview');
</script>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td style="padding-left: 0.5em;">
<div id="projectname">MADlib
&#160;<span id="projectnumber">1.4.1</span>
</div>
<div id="projectbrief">User Documentation</div>
</td>
<td> <div id="MSearchBox" class="MSearchBoxInactive">
<span class="left">
<img id="MSearchSelect" src="search/mag_sel.png"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
alt=""/>
<input type="text" id="MSearchField" value="Search" accesskey="S"
onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)"
onkeyup="searchBox.OnSearchFieldChange(event)"/>
</span><span class="right">
<a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.4 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('group__grp__kmeans.html','');});
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
<a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(0)"><span class="SelectionMark">&#160;</span>All</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(1)"><span class="SelectionMark">&#160;</span>Files</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(2)"><span class="SelectionMark">&#160;</span>Functions</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(3)"><span class="SelectionMark">&#160;</span>Variables</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(4)"><span class="SelectionMark">&#160;</span>Groups</a></div>
<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0"
name="MSearchResults" id="MSearchResults">
</iframe>
</div>
<div class="header">
<div class="headertitle">
<div class="title">k-Means Clustering<div class="ingroups"><a class="el" href="group__grp__clustering.html">Clustering</a></div></div> </div>
</div><!--header-->
<div class="contents">
<div class="toc"><b>Contents</b> </p>
<ul>
<li class="level1">
<a href="#train">Training Function</a> </li>
<li class="level1">
<a href="#output">Output Format</a> </li>
<li class="level1">
<a href="#examples">Examples</a> </li>
<li class="level1">
<a href="#notes">Notes</a> </li>
<li class="level1">
<a href="#background">Technical Background</a> </li>
<li class="level1">
<a href="#literature">Literature</a> </li>
<li class="level1">
<a href="#related">Related Topics</a> </li>
</ul>
</div><p>Clustering refers to the problem of partitioning a set of objects according to some problem-dependent measure of <em>similarity</em>. In the k-means variant, given \( n \) points \( x_1, \dots, x_n \in \mathbb R^d \), the goal is to position \( k \) centroids \( c_1, \dots, c_k \in \mathbb R^d \) so that the sum of <em>distances</em> between each point and its closest centroid is minimized. Each centroid represents a cluster that consists of all points to which this centroid is closest.</p>
<p><a class="anchor" id="train"></a></p>
<dl class="section user"><dt>Training Function</dt><dd></dd></dl>
<p>The k-means algorithm can be invoked in four ways, depending on the source of the initial set of centroids (a brief usage sketch follows the argument descriptions below):</p>
<ul>
<li>Use the random centroid seeding method. <pre class="syntax">
kmeans_random( rel_source,
               expr_point,
               k,
               fn_dist,
               agg_centroid,
               max_num_iterations,
               min_frac_reassigned
             )
</pre></li>
<li>Use the kmeans++ centroid seeding method. <pre class="syntax">
kmeanspp( rel_source,
          expr_point,
          k,
          fn_dist,
          agg_centroid,
          max_num_iterations,
          min_frac_reassigned
        )
</pre></li>
<li>Supply an initial centroid set in a relation identified by the <em>rel_initial_centroids</em> argument. <pre class="syntax">
kmeans( rel_source,
        expr_point,
        rel_initial_centroids,
        expr_centroid,
        fn_dist,
        agg_centroid,
        max_num_iterations,
        min_frac_reassigned
      )
</pre></li>
<li>Provide an initial centroid set as an array expression in the <em>initial_centroids</em> argument. <pre class="syntax">
kmeans( rel_source,
        expr_point,
        initial_centroids,
        fn_dist,
        agg_centroid,
        max_num_iterations,
        min_frac_reassigned
      )
</pre> <b>Arguments</b> <dl class="arglist">
<dt>rel_source </dt>
<dd><p class="startdd">TEXT. The name of the table containing the input data points.</p>
<p>Data points and predefined centroids (if used) are expected to be stored row-wise, in a column of type <code><a class="el" href="group__grp__svec.html">SVEC</a></code> (or any type convertible to <code><a class="el" href="group__grp__svec.html">SVEC</a></code>, like <code>FLOAT[]</code> or <code>INTEGER[]</code>). Data points with non-finite values (NULL, NaN, infinity) in any component are skipped during analysis. </p>
<p class="enddd"></p>
</dd>
<dt>expr_point </dt>
<dd><p class="startdd">TEXT. The name of the column with point coordinates.</p>
<p class="enddd"></p>
</dd>
<dt>k </dt>
<dd><p class="startdd">INTEGER. The number of centroids to calculate.</p>
<p class="enddd"></p>
</dd>
<dt>fn_dist (optional) </dt>
<dd><p class="startdd">TEXT, default: squared_dist_norm2'. The name of the function to use to calculate the distance from a data point to a centroid.</p>
<p>The following distance functions can be used (computation of barycenter/mean in parentheses): </p>
<ul>
<li>
<b><a class="el" href="linalg_8sql__in.html#aad193850e79c4b9d811ca9bc53e13476">dist_norm1</a></b>: 1-norm/Manhattan (element-wise median [Note that MADlib does not provide a median aggregate function for support and performance reasons.]) </li>
<li>
<b><a class="el" href="linalg_8sql__in.html#aa58e51526edea6ea98db30b6f250adb4">dist_norm2</a></b>: 2-norm/Euclidean (element-wise mean) </li>
<li>
<b><a class="el" href="linalg_8sql__in.html#a00a08e69f27524f2096032214e15b668">squared_dist_norm2</a></b>: squared Euclidean distance (element-wise mean) </li>
<li>
<b><a class="el" href="linalg_8sql__in.html#a8c7b9281a72ff22caf06161701b27e84">dist_angle</a></b>: angle (element-wise mean of normalized points) </li>
<li>
<b><a class="el" href="linalg_8sql__in.html#afa13b4c6122b99422d666dedea136c18">dist_tanimoto</a></b>: tanimoto (element-wise mean of normalized points <a href="#kmeans-lit-5">[5]</a>) </li>
<li>
<b>user-defined function</b> with signature <code>DOUBLE PRECISION[] x, DOUBLE PRECISION[] y -&gt; DOUBLE PRECISION</code></li>
</ul>
<p class="enddd"></p>
</dd>
<dt>agg_centroid (optional) </dt>
<dd><p class="startdd">TEXT, default: 'avg'. The name of the aggregate function used to determine centroids.</p>
<p>The following aggregate functions can be used:</p>
<ul>
<li>
<b><a class="el" href="linalg_8sql__in.html#a1aa37f73fb1cd8d7d106aa518dd8c0b4">avg</a></b>: average (Default) </li>
<li>
<b><a class="el" href="linalg_8sql__in.html#a0b04663ca206f03e66aed5ea2b4cc461">normalized_avg</a></b>: normalized average</li>
</ul>
<p class="enddd"></p>
</dd>
<dt>max_num_iterations (optional) </dt>
<dd><p class="startdd">INTEGER, default: 20. The maximum number of iterations to perform.</p>
<p class="enddd"></p>
</dd>
<dt>min_frac_reassigned (optional) </dt>
<dd><p class="startdd">DOUBLE PRECISION, default: 0.001. The minimum fraction of centroids reassigned to continue iterating. When fewer than this fraction of centroids are reassigned in an iteration, the calculation completes.</p>
<p class="enddd"></p>
</dd>
<dt>rel_initial_centroids </dt>
<dd><p class="startdd">TEXT. The set of initial centroids. The centroid relation is expected to be of the following form: </p>
<pre>
{TABLE|VIEW} rel_initial_centroids (
    ...
    expr_centroid DOUBLE PRECISION[],
    ...
)
</pre><p> where <em>expr_centroid</em> is the name of a column with coordinates. </p>
<p class="enddd"></p>
</dd>
<dt>expr_centroid </dt>
<dd><p class="startdd">TEXT. The name of the column in the <em>rel_initial_centroids</em> relation that contains the centroid coordinates.</p>
<p class="enddd"></p>
</dd>
<dt>initial_centroids </dt>
<dd>TEXT. A string containing a DOUBLE PRECISION array expression with the initial centroid coordinates. </dd>
</dl>
</li>
</ul>
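<p>A minimal invocation sketch is shown below. The table and column names (<code>my_points</code>, <code>coords</code>, <code>my_seeds</code>, <code>seed_coords</code>) are hypothetical placeholders; because the trailing arguments are optional, they can be omitted to accept the defaults listed above.</p>
<pre class="example">
-- Random seeding, accepting the default distance function, aggregate, and stopping criteria
SELECT * FROM madlib.kmeans_random( 'my_points', 'coords', 3 );

-- Seeding from a user-supplied centroid relation
SELECT * FROM madlib.kmeans( 'my_points', 'coords', 'my_seeds', 'seed_coords' );
</pre>
<p>A custom distance measure can also be supplied, provided it matches the documented signature. The following sketch defines a hypothetical Chebyshev (maximum-coordinate) distance function and passes its name as <em>fn_dist</em>:</p>
<pre class="example">
-- Hypothetical user-defined distance function; schema-qualify the name if needed
CREATE OR REPLACE FUNCTION dist_chebyshev(DOUBLE PRECISION[], DOUBLE PRECISION[])
RETURNS DOUBLE PRECISION AS $$
    SELECT max(abs($1[i] - $2[i]))
    FROM generate_series(array_lower($1, 1), array_upper($1, 1)) AS i
$$ LANGUAGE sql IMMUTABLE;

SELECT * FROM madlib.kmeans_random( 'my_points', 'coords', 3,
                                    'dist_chebyshev', 'madlib.avg', 20, 0.001 );
</pre>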
<p><a class="anchor" id="output"></a></p>
<dl class="section user"><dt>Output Format</dt><dd></dd></dl>
<p>The output of the k-means module is a composite type with the following columns: </p>
<table class="output">
<tr>
<th>centroids </th><td>DOUBLE PRECISION[][]. The final centroid positions. </td></tr>
<tr>
<th>objective_fn </th><td>DOUBLE PRECISION. The final value of the objective function. </td></tr>
<tr>
<th>frac_reassigned </th><td>DOUBLE PRECISION. The fraction of points reassigned in the last iteration. </td></tr>
<tr>
<th>num_iterations </th><td>INTEGER. The total number of iterations executed. </td></tr>
</table>
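<p>Because the result is a single composite value, individual output fields can be selected directly. A minimal sketch, reusing the <code>km_sample</code> table from the Examples section below:</p>
<pre class="example">
SELECT objective_fn, frac_reassigned, num_iterations
FROM madlib.kmeans_random( 'km_sample', 'points', 2 );
</pre>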
<p><a class="anchor" id="examples"></a></p>
<dl class="section user"><dt>Examples</dt><dd><ol type="1">
<li>View a sample of the input data in the <code>km_sample</code> table. <pre class="example">
SELECT * FROM public.km_sample LIMIT 5;
</pre> Result: <pre class="result">
points
&#160;------------------------------------------
{1,1}:{15.8822241332382,105.945462542586}
{1,1}:{34.5065216883086,72.3126099305227}
{1,1}:{22.5074400822632,95.3209559689276}
{1,1}:{70.2589857042767,68.7395178806037}
{1,1}:{30.9844257542863,25.3213323024102}
(5 rows)
</pre> Note: the <em>points</em> column in this example is of type <code><a class="el" href="group__grp__svec.html">SVEC</a></code>.</li>
<li>Run k-means clustering using kmeans++ for centroid seeding: <pre class="example">
SELECT * FROM madlib.kmeanspp( 'km_sample',
                               'points',
                               2,
                               'madlib.squared_dist_norm2',
                               'madlib.avg',
                               20,
                               0.001
                             );
</pre> Result: <pre class="result">
centroids | objective_fn | frac_reassigned | num_iterations
&#160;------------------------------------------------------------------------+------------------+-----------------+----------------
{{68.01668579784,48.9667382972952},{28.1452167573446,84.5992507653263}} | 586729.010675982 | 0.001 | 5
</pre></li>
<li>Calculate the simplified silhouette coefficient: <pre class="example">
SELECT * FROM madlib.simple_silhouette( 'km_sample',
                                        'points',
                                        (SELECT centroids FROM
                                            madlib.kmeanspp( 'km_sample',
                                                             'points',
                                                             2,
                                                             'madlib.squared_dist_norm2',
                                                             'madlib.avg',
                                                             20,
                                                             0.001)),
                                        'madlib.dist_norm2'
                                      );
</pre> Result: <pre class="result">
simple_silhouette
&#160;------------------
0.611022970398174
</pre></li>
</ol>
</dd></dl>
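<p>To label each input point with the cluster it falls in, the returned centroid matrix can be combined with the <code>madlib.closest_column()</code> helper from the linear-algebra module. The sketch below assumes that this helper is available in your installation and that the <code>SVEC</code> points can be cast to <code>FLOAT8[]</code>:</p>
<pre class="example">
-- Assumption: madlib.closest_column(matrix, vector) exists (linalg module) and
-- returns a composite value with the fields column_id and distance.
SELECT data.points,
       (madlib.closest_column(km.centroids, data.points::FLOAT8[])).column_id AS cluster_id
FROM public.km_sample AS data,
     (SELECT centroids
      FROM madlib.kmeanspp( 'km_sample', 'points', 2,
                            'madlib.squared_dist_norm2',
                            'madlib.avg', 20, 0.001 )
     ) AS km;
</pre>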
<p><a class="anchor" id="notes"></a></p>
<dl class="section user"><dt>Notes</dt><dd></dd></dl>
<p>The algorithm stops when one of the following conditions is met:</p>
<ul>
<li>The fraction of points reassigned in an iteration drops below the convergence threshold (<em>min_frac_reassigned</em> argument, default 0.001).</li>
<li>The algorithm reaches the maximum number of allowed iterations (<em>max_num_iterations</em> argument, default 20).</li>
</ul>
<p>A popular method to assess the quality of the clustering is the <em>silhouette coefficient</em>, a simplified version of which is provided as part of the k-means module. Note that for large data sets, this computation is expensive.</p>
<p>The silhouette function has the following syntax: </p>
<pre class="syntax">
simple_silhouette( rel_source,
                   expr_point,
                   centroids,
                   fn_dist
                 )
</pre><p> <b>Arguments</b> </p>
<dl class="arglist">
<dt>rel_source </dt>
<dd>TEXT. The name of the relation containing the input points. </dd>
<dt>expr_point </dt>
<dd>TEXT. An expression evaluating to point coordinates for each row in the relation. </dd>
<dt>centroids </dt>
<dd>DOUBLE PRECISION[][]. An expression evaluating to an array of centroids, such as the <em>centroids</em> field returned by a k-means training function. </dd>
<dt>fn_dist (optional) </dt>
<dd>TEXT, default: 'dist_norm2'. The name of the function used to calculate the distance from a point to a centroid. See the <em>fn_dist</em> argument of the k-means training function. </dd>
</dl>
<p><a class="anchor" id="background"></a></p>
<dl class="section user"><dt>Technical Background</dt><dd></dd></dl>
<p>Formally, we wish to minimize the following objective function: </p>
<p class="formulaDsp">
\[ (c_1, \dots, c_k) \mapsto \sum_{i=1}^n \min_{j=1}^k \operatorname{dist}(x_i, c_j) \]
</p>
<p> In the most common case, \( \operatorname{dist} \) is the square of the Euclidean distance.</p>
<p>This problem is computationally difficult (NP-hard), yet the local-search heuristic proposed by Lloyd <a href="#kmeans-lit-4">[4]</a> performs reasonably well in practice. In fact, it is so ubiquitous today that it is often referred to as the <em>standard algorithm</em> or even just the <em>k-means algorithm</em> <a href="#kmeans-lit-1">[1]</a>. It works as follows:</p>
<ol type="1">
<li>Seed the \( k \) centroids, using one of the seeding methods described above</li>
<li>Repeat until convergence:<ol type="a">
<li>Assign each point to its closest centroid</li>
<li>Move each centroid to a position that minimizes the sum of distances in this cluster</li>
</ol>
</li>
</ol>
<p>Convergence is reached when no point changes its assigned centroid during step 2a. Since the objective function decreases in every step, this algorithm is guaranteed to converge to a local optimum.</p>
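<p>For the default squared Euclidean distance, the centroid update in step 2b has a closed form: the minimizer is the arithmetic mean of the points currently assigned to the cluster, which is why <code>avg</code> is the natural <em>agg_centroid</em> for <code>squared_dist_norm2</code>: </p>
<p class="formulaDsp">
\[ c_j \leftarrow \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i \]
</p>
<p> where \( C_j \) denotes the set of points assigned to centroid \( c_j \) in step 2a. For the angle and Tanimoto distances, which operate on normalized points, the corresponding update is the element-wise mean of the normalized points, provided by <code>normalized_avg</code>.</p>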
<p><a class="anchor" id="literature"></a></p>
<dl class="section user"><dt>Literature</dt><dd></dd></dl>
<p><a class="anchor" id="kmeans-lit-1"></a>[1] Wikipedia, K-means Clustering, <a href="http://en.wikipedia.org/wiki/K-means_clustering">http://en.wikipedia.org/wiki/K-means_clustering</a></p>
<p><a class="anchor" id="kmeans-lit-2"></a>[2] David Arthur, Sergei Vassilvitskii: k-means++: the advantages of careful seeding, Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'07), pp. 1027-1035, <a href="http://www.stanford.edu/~darthur/kMeansPlusPlus.pdf">http://www.stanford.edu/~darthur/kMeansPlusPlus.pdf</a></p>
<p><a class="anchor" id="kmeans-lit-3"></a>[3] E. R. Hruschka, L. N. C. Silva, R. J. G. B. Campello: Clustering Gene-Expression Data: A Hybrid Approach that Iterates Between k-Means and Evolutionary Search. In: Studies in Computational Intelligence - Hybrid Evolutionary Algorithms. pp. 313-335. Springer. 2007.</p>
<p><a class="anchor" id="kmeans-lit-4"></a>[4] Lloyd, Stuart: Least squares quantization in PCM. Technical Note, Bell Laboratories. Published much later in: IEEE Transactions on Information Theory 28(2), pp. 128-137. 1982.</p>
<p><a class="anchor" id="kmeans-lit-5"></a>[5] Leisch, Friedrich: A Toolbox for K-Centroids Cluster Analysis. In: Computational Statistics and Data Analysis, 51(2). pp. 526-544. 2006.</p>
<p><a class="anchor" id="related"></a></p>
<dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
<p>File <a class="el" href="kmeans_8sql__in.html" title="Set of functions for k-means clustering. ">kmeans.sql_in</a> documenting the k-Means SQL functions</p>
<p><a class="el" href="group__grp__svec.html">Sparse Vectors</a></p>
<p><a class="el" href="kmeans_8sql__in.html#a71e7675758c99acbe7785819b6a85a8f">simple_silhouette()</a> </p>
</div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="footer">Generated on Thu Jan 9 2014 20:25:07 for MADlib by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.4 </li>
</ul>
</div>
</body>
</html>