<!-- HTML header for doxygen 1.8.4-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.13"/>
<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
<title>MADlib: Sparse Vectors</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
$(document).ready(function() { init_search(); });
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"],
jax: ["input/TeX","output/HTML-CSS"],
});
</script><script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js"></script>
<!-- hack in the navigation tree -->
<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectlogo"><a href="http://madlib.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
<td style="padding-left: 0.5em;">
<div id="projectname">
<span id="projectnumber">1.18.0</span>
</div>
<div id="projectbrief">User Documentation for Apache MADlib</div>
</td>
<td> <div id="MSearchBox" class="MSearchBoxInactive">
<span class="left">
<img id="MSearchSelect" src="search/mag_sel.png"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
alt=""/>
<input type="text" id="MSearchField" value="Search" accesskey="S"
onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)"
onkeyup="searchBox.OnSearchFieldChange(event)"/>
</span><span class="right">
<a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('group__grp__svec.html','');});
</script>
<div id="doc-content">
<div class="header">
<div class="headertitle">
<div class="title">Sparse Vectors<div class="ingroups"><a class="el" href="group__grp__datatrans.html">Data Types and Transformations</a> &raquo; <a class="el" href="group__grp__arraysmatrix.html">Arrays and Matrices</a></div></div> </div>
</div><!--header-->
<div class="contents">
<div class="toc"><b>Contents</b> <ul>
<li>
<a href="#usage">Using Sparse Vectors</a> </li>
<li>
<a href="#vectorization">Document Vectorization into Sparse Vectors</a> </li>
<li>
<a href="#examples">Examples</a> </li>
<li>
<a href="#related">Related Topics</a> </li>
</ul>
</div><p>This module implements a sparse vector data type, named "svec", which provides compressed storage of vectors that have many duplicate elements.</p>
<p>Arrays of floating point numbers for various calculations sometimes have long runs of zeros (or some other default value). This is common in applications like scientific computing, retail optimization, and text processing. Each floating point number takes 8 bytes of storage in memory and/or disk, so saving those zeros is often worthwhile. There are also many computations that can benefit from skipping over the zeros.</p>
<p>Consider, for example, the following array of doubles stored as a Postgres/Greenplum "float8[]" data type:</p>
<pre class="example">
'{0, 33,...40,000 zeros..., 12, 22 }'::float8[]
</pre><p>This array would occupy slightly more than 320KB of memory or disk, most of it zeros. Even if we were to exploit the null bitmap and store the zeros as nulls, we would still end up with a 5KB null bitmap, which is not nearly as memory efficient as we'd like. Also, as we perform various operations on the array, we do work on 40,000 fields that turn out to be unimportant.</p>
<p>To solve the problems associated with the processing of vectors discussed above, the svec type employs a simple Run Length Encoding (RLE) scheme to represent sparse vectors as pairs of count-value arrays. For example, the array above would be represented as</p>
<pre class="example">
'{1,1,40000,1,1}:{0,33,0,12,22}'::madlib.svec
</pre><p>which says there is 1 occurrence of 0, followed by 1 occurrence of 33, followed by 40,000 occurrences of 0, etc. This uses just 5 integers and 5 floating point numbers to store the array. Further, it is easy to implement vector operations that can take advantage of the RLE representation to make computations faster. The SVEC module provides a library of such functions.</p>
<p>The current version only supports sparse vectors of float8 values. Future versions will support other base types.</p>
<p><a class="anchor" id="usage"></a></p><dl class="section user"><dt>Using Sparse Vectors</dt><dd></dd></dl>
<p>An SVEC can be constructed directly with a constant expression, as follows: </p><pre class="example">
SELECT '{n1,n2,...,nk}:{v1,v2,...vk}'::madlib.svec;
</pre><p> where <code>n1,n2,...,nk</code> specifies the counts for the values <code>v1,v2,...,vk</code>.</p>
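<p>As an illustrative check (a minimal sketch, not one of the module's own examples), a small run-length literal can be expanded back into a dense array by casting to float8[]: </p><pre class="example">
SELECT ('{2,3,1}:{0,7,5}'::madlib.svec)::float8[];
-- expected: {0,0,7,7,7,5}  (two 0s, three 7s, one 5)
</pre>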
<p>A float array can be cast to an SVEC: </p><pre class="example">
SELECT ('{v1,v2,...vk}'::float[])::madlib.svec;
</pre><p>An SVEC can be created with an aggregation: </p><pre class="example">
SELECT madlib.svec_agg(v1) FROM generate_series(1,k) v1;
</pre><p>An SVEC can be created using the <code>madlib.svec_cast_positions_float8arr()</code> function by supplying an array of positions and an array of values at those positions: </p><pre class="example">
SELECT madlib.svec_cast_positions_float8arr(
array[n1,n2,...nk], -- positions of values in vector
array[v1,v2,...vk], -- values at each position
length, -- length of vector
base) -- value at unspecified positions
</pre><p> For example, the following expression: </p><pre class="example">
SELECT madlib.svec_cast_positions_float8arr(
array[1,3,5],
array[2,4,6],
10,
0.0)
</pre><p> produces this SVEC: </p><pre class="result">
svec_cast_positions_float8arr
&#160;------------------------------
{1,1,1,1,1,5}:{2,0,4,0,6,0}
</pre><p>Add madlib to the search_path to use the svec operators defined in the module.</p>
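<p>For example, a possible session setup (assuming MADlib is installed in a schema named <code>madlib</code>) might look like the following sketch: </p><pre class="example">
SET search_path TO madlib, public;
-- svec literals and operators can now be used without schema qualification, e.g.:
SELECT '{1,1}:{3,4}'::svec + '{2}:{1}'::svec;
-- expected: {1,1}:{4,5}
</pre>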
<p><a class="anchor" id="vectorization"></a></p><dl class="section user"><dt>Document Vectorization into Sparse Vectors</dt><dd>This module implements an efficient way for document vectorization, converting text documents into sparse vector representation (MADlib.svec), required by various machine learning algorithms in MADlib.</dd></dl>
<p>The function accepts two tables as input, dictionary table and documents table, and produces the specified output table containing sparse vectors for the represented documents (in documents table).</p>
<pre class="syntax">
madlib.gen_doc_svecs(output_tbl,
dictionary_tbl,
dict_id_col,
dict_term_col,
documents_tbl,
doc_id_col,
doc_term_col,
doc_term_info_col
)
</pre><p> <b>Arguments</b> </p><dl class="arglist">
<dt>output_tbl </dt>
<dd><p class="startdd">TEXT. Name of the output table to be created containing the sparse vector representation of the documents. It has the following columns: </p><table class="output">
<tr>
<th>doc_id </th><td>__TYPE_DOC__. Document id. <br />
__TYPE_DOC__: Column type depends on the type of <code>doc_id_col</code> in <code>documents_tbl</code>. </td></tr>
<tr>
<th>sparse_vector </th><td>MADlib.svec. Corresponding sparse vector representation. </td></tr>
</table>
<p class="enddd"></p>
</dd>
<dt>dictionary_tbl </dt>
<dd><p class="startdd">TEXT. Name of the dictionary table containing features. </p><table class="output">
<tr>
<th>dict_id_col </th><td>TEXT. Name of the id column in the <code>dictionary_tbl</code>. <br />
Expected Type: INTEGER or BIGINT. <br />
NOTE: Values must be consecutive integers ranging from 0 to (total number of elements in the dictionary - 1). </td></tr>
<tr>
<th>dict_term_col </th><td>TEXT. Name of the column containing term (features) in <code>dictionary_tbl</code>. </td></tr>
</table>
<p class="enddd"></p>
</dd>
<dt>documents_tbl </dt>
<dd>TEXT. Name of the documents table representing documents. <table class="output">
<tr>
<th>doc_id_col </th><td>TEXT. Name of the id column in the <code>documents_tbl</code>. </td></tr>
<tr>
<th>doc_term_col </th><td>TEXT. Name of the term column in the <code>documents_tbl</code>. </td></tr>
<tr>
<th>doc_term_info_col </th><td>TEXT. Name of the term info column in <code>documents_tbl</code>. The expected type of this column is one of: <br />
- INTEGER, BIGINT or DOUBLE PRECISION: the value is used directly to populate the vector. <br />
- ARRAY: the length of the array is used to populate the vector. <br />
For an example of using these column types, refer to the example below. </td></tr>
</table>
</dd>
</dl>
<p><b>Example:</b> <br />
Consider a corpus consisting of a set of documents, each made up of terms (features) along with a document id: </p><pre class="example">
1, {this,is,one,document,in,the,corpus}
2, {i,am,the,second,document,in,the,corpus}
3, {being,third,never,really,bothered,me,until,now}
4, {the,document,before,me,is,the,third,document}
</pre><ol type="1">
<li>Prepare the documents table in an appropriate format. <br />
The corpus specified above can be represented by either of the following <code>documents_table</code> layouts, using either a count column or a positions column (a sketch of the corresponding CREATE and INSERT statements appears after this list): <pre class="example">
SELECT * FROM documents_table ORDER BY id;
</pre> Result: <pre class="result">
id | term | count id | term | positions
&#160;----+----------+------- ----+----------+-----------
1 | is | 1 1 | is | {1}
1 | in | 1 1 | in | {4}
1 | one | 1 1 | one | {2}
1 | this | 1 1 | this | {0}
1 | the | 1 1 | the | {5}
1 | document | 1 1 | document | {3}
1 | corpus | 1 1 | corpus | {6}
2 | second | 1 2 | second | {3}
2 | document | 1 2 | document | {4}
2 | corpus | 1 2 | corpus | {7}
. | ... | .. . | ... | ...
4 | document | 2 4 | document | {1,7}
...
</pre></li>
<li>Prepare dictionary table in appropriate format. <pre class="example">
SELECT * FROM dictionary_table ORDER BY id;
</pre> Result: <pre class="result">
id | term
&#160;----+----------
0 | am
1 | before
2 | being
3 | bothered
4 | corpus
5 | document
6 | i
7 | in
8 | is
9 | me
...
</pre></li>
<li>Generate sparse vectors for the documents using dictionary_table and documents_table. <br />
<code>doc_term_info_col</code> (count) of type INTEGER: <pre class="example">
SELECT * FROM madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term',
'documents_table', 'id', 'term', 'count');
</pre> <code>doc_term_info_col</code> (positions) of type ARRAY: <pre class="example">
SELECT * FROM madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term',
'documents_table', 'id', 'term', 'positions');
</pre> Result: <pre class="result">
gen_doc_svecs
&#160;--------------------------------------------------------------------------------------
Created table svec_output (doc_id, sparse_vector) containing sparse vectors
(1 row)
</pre></li>
<li>Analyze the sparse vectors created. <pre class="example">
SELECT * FROM svec_output ORDER by doc_id;
</pre> Result: <pre class="result">
doc_id | sparse_vector
&#160;--------+-------------------------------------------------
1 | {4,2,1,2,3,1,2,1,1,1,1}:{0,1,0,1,0,1,0,1,0,1,0}
2 | {1,3,4,6,1,1,3}:{1,0,1,0,1,2,0}
3 | {2,2,5,3,1,1,2,1,1,1}:{0,1,0,1,0,1,0,1,0,1}
4 | {1,1,3,1,2,2,5,1,1,2}:{0,1,0,2,0,1,0,2,1,0}
(4 rows)
</pre></li>
</ol>
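<p>For reference, one possible way to create the <code>documents_table</code> (count form) and <code>dictionary_table</code> used above could look like the following; the table and column names match the example, but the statements themselves are an illustrative sketch rather than part of the module: </p><pre class="example">
-- Illustrative setup only; the example above shows just the resulting SELECT output.
CREATE TABLE documents_table (id INTEGER, term TEXT, count INTEGER);
INSERT INTO documents_table VALUES
 (1,'this',1), (1,'is',1), (1,'one',1), (1,'document',1), (1,'in',1),
 (1,'the',1), (1,'corpus',1),
 (2,'i',1), (2,'am',1), (2,'the',2), (2,'second',1), (2,'document',1),
 (2,'in',1), (2,'corpus',1),
 (3,'being',1), (3,'third',1), (3,'never',1), (3,'really',1),
 (3,'bothered',1), (3,'me',1), (3,'until',1), (3,'now',1),
 (4,'the',2), (4,'document',2), (4,'before',1), (4,'me',1),
 (4,'is',1), (4,'third',1);

CREATE TABLE dictionary_table (id INTEGER, term TEXT);
INSERT INTO dictionary_table VALUES
 (0,'am'), (1,'before'), (2,'being'), (3,'bothered'), (4,'corpus'),
 (5,'document'), (6,'i'), (7,'in'), (8,'is'), (9,'me'), (10,'never'),
 (11,'now'), (12,'one'), (13,'really'), (14,'second'), (15,'the'),
 (16,'third'), (17,'this'), (18,'until');
</pre>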
<p>See the file <a class="el" href="svec_8sql__in.html" title="SQL type definitions and functions for sparse vector data type svec ">svec.sql_in</a> for complete syntax.</p>
<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl>
<p>Operations on the svec type, such as &lt;, &gt;, *, **, /, =, +, SUM, etc., have the meanings associated with typical vector operations. For example, the plus (+) operator adds the corresponding elements of two vectors of the same dimension. </p><pre class="example">
SELECT ('{0,1,5}'::float8[]::madlib.svec + '{4,3,2}'::float8[]::madlib.svec)::float8[];
</pre><p> Result: </p><pre class="result">
float8
&#160;--------
{4,4,7}
</pre><p>Without the casting into float8[] at the end, we get: </p><pre class="example">
SELECT '{0,1,5}'::float8[]::madlib.svec + '{4,3,2}'::float8[]::madlib.svec;
</pre><p> Result: </p><pre class="result">
?column?
&#160;---------
{2,1}:{4,7}
</pre><p>A dot product (%*%) between the two vectors will result in a scalar result of type float8. The dot product should be (0*4 + 1*3 + 5*2) = 13, like this: </p><pre class="example">
SELECT '{0,1,5}'::float8[]::madlib.svec %*% '{4,3,2}'::float8[]::madlib.svec;
</pre> <pre class="result">
?column?
&#160;---------
13
</pre><p>Special vector aggregate functions are also available. SUM is self-explanatory. SVEC_COUNT_NONZERO evaluates the count of non-zero terms in each column of a set of n-dimensional svecs and returns an svec with the counts. For instance, if we have the vectors {0,1,5}, {10,0,3}, {0,0,3}, {0,1,0}, then executing the SVEC_COUNT_NONZERO() aggregate function would result in {1,2,3}:</p>
<pre class="example">
CREATE TABLE list (a madlib.svec);
INSERT INTO list VALUES ('{0,1,5}'::float8[]), ('{10,0,3}'::float8[]), ('{0,0,3}'::float8[]),('{0,1,0}'::float8[]);
SELECT madlib.svec_count_nonzero(a)::float8[] FROM list;
</pre><p> Result: </p><pre class="result">
svec_count_nonzero
&#160;----------------
{1,2,3}
</pre><p>We do not use null bitmaps in the svec data type. A null value in an svec is represented explicitly as an NVP (No Value Present) value. For example, we have: </p><pre class="example">
SELECT '{1,2,3}:{4,null,5}'::madlib.svec;
</pre><p> Result: </p><pre class="result">
svec
&#160;------------------
{1,2,3}:{4,NVP,5}
</pre><p>Adding svecs with null values results in NVPs in the sum: </p><pre class="example">
SELECT '{1,2,3}:{4,null,5}'::madlib.svec + '{2,2,2}:{8,9,10}'::madlib.svec;
</pre><p> Result: </p><pre class="result">
?column?
&#160;-------------------------
{1,2,1,2}:{12,NVP,14,15}
</pre><p>An element of an svec can be accessed using the <a class="el" href="svec__util_8sql__in.html#a8787222aec691f94d9808d1369aa401c">svec_proj()</a> function, which takes an svec and the index of the element desired. </p><pre class="example">
SELECT madlib.svec_proj('{1,2,3}:{4,5,6}'::madlib.svec, 1) + madlib.svec_proj('{4,5,6}:{1,2,3}'::madlib.svec, 15);
</pre><p> Result: </p><pre class="result"> ?column?
&#160;---------
7
</pre><p>A subvector of an svec can be accessed using the <a class="el" href="svec__util_8sql__in.html#a5cb3446de5fc117befe88ccb1ebb0e4e">svec_subvec()</a> function, which takes an svec and the start and end index of the subvector desired. </p><pre class="example">
SELECT madlib.svec_subvec('{2,4,6}:{1,3,5}'::madlib.svec, 2, 11);
</pre><p> Result: </p><pre class="result"> svec_subvec
&#160;----------------
{1,4,5}:{1,3,5}
</pre><p>The elements/subvector of an svec can be changed using the function <a class="el" href="svec__util_8sql__in.html#a59407764a1cbf1937da39cf39a2f447c">svec_change()</a>. It takes three arguments: an m-dimensional svec sv1, a start index j, and an n-dimensional svec sv2 such that j + n - 1 &lt;= m, and returns an svec like sv1 but with the subvector sv1[j:j+n-1] replaced by sv2. An example follows: </p><pre class="example">
SELECT madlib.svec_change('{1,2,3}:{4,5,6}'::madlib.svec,3,'{2}:{3}'::madlib.svec);
</pre><p> Result: </p><pre class="result"> svec_change
&#160;--------------------
{1,1,2,2}:{4,5,3,6}
</pre><p>There are also higher-order functions for processing svecs. For example, the following is the corresponding function for lapply() in R. </p><pre class="example">
SELECT madlib.svec_lapply('sqrt', '{1,2,3}:{4,5,6}'::madlib.svec);
</pre><p> Result: </p><pre class="result">
svec_lapply
&#160;----------------------------------------------
{1,2,3}:{2,2.23606797749979,2.44948974278318}
</pre><p>The full list of functions available for operating on svecs can be found in svec.sql_in.</p>
<p><b> A More Extensive Example</b></p>
<p>For a text classification example, let's assume we have a dictionary composed of words in a sorted text array: </p><pre class="example">
CREATE TABLE features (a text[]);
INSERT INTO features VALUES
('{am,before,being,bothered,corpus,document,i,in,is,me,
never,now,one,really,second,the,third,this,until}');
</pre><p> We have a set of documents, each represented as an array of words: </p><pre class="example">
CREATE TABLE documents(a int,b text[]);
INSERT INTO documents VALUES
(1,'{this,is,one,document,in,the,corpus}'),
(2,'{i,am,the,second,document,in,the,corpus}'),
(3,'{being,third,never,really,bothered,me,until,now}'),
(4,'{the,document,before,me,is,the,third,document}');
</pre><p>Now that we have a dictionary and some documents, we would like to do some document categorization using vector arithmetic on word counts and proportions of dictionary words in each document.</p>
<p>To start this process, we'll need to find the dictionary words in each document. We'll prepare what is called a Sparse Feature Vector or SFV for each document. An SFV is a vector of dimension N, where N is the number of dictionary words, and in each cell of an SFV is a count of each dictionary word in the document.</p>
<p>The sparse vector library provides a function that creates an SFV from a document, so we can simply do this (for a more efficient way to convert documents into sparse vectors, especially for larger datasets, refer to <a href="#vectorization">Document Vectorization into Sparse Vectors</a>):</p>
<pre class="example">
SELECT madlib.svec_sfv((SELECT a FROM features LIMIT 1),b)::float8[]
FROM documents;
</pre><p> Result: </p><pre class="result">
svec_sfv
&#160;----------------------------------------
{0,0,0,0,1,1,0,1,1,0,0,0,1,0,0,1,0,1,0}
{0,0,1,1,0,0,0,0,0,1,1,1,0,1,0,0,1,0,1}
{1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,2,0,0,0}
{0,1,0,0,0,2,0,0,1,1,0,0,0,0,0,2,1,0,0}
</pre><p>Note that the output of madlib.svec_sfv() is an svec for each document containing the count of each of the dictionary words in the ordinal positions of the dictionary. This can more easily be understood by lining up the feature vector and text like this:</p>
<pre class="example">
SELECT madlib.svec_sfv((SELECT a FROM features LIMIT 1),b)::float8[]
, b
FROM documents;
</pre><p> Result: </p><pre class="result">
svec_sfv | b
&#160;----------------------------------------+--------------------------------------------------
{1,0,0,0,1,1,1,1,0,0,0,0,0,0,1,2,0,0,0} | {i,am,the,second,document,in,the,corpus}
{0,1,0,0,0,2,0,0,1,1,0,0,0,0,0,2,1,0,0} | {the,document,before,me,is,the,third,document}
{0,0,0,0,1,1,0,1,1,0,0,0,1,0,0,1,0,1,0} | {this,is,one,document,in,the,corpus}
{0,0,1,1,0,0,0,0,0,1,1,1,0,1,0,0,1,0,1} | {being,third,never,really,bothered,me,until,now}
</pre> <pre class="example">
SELECT * FROM features;
</pre> <pre class="result">
a
&#160;-------------------------------------------------------------------------------------------------------
{am,before,being,bothered,corpus,document,i,in,is,me,never,now,one,really,second,the,third,this,until}
</pre><p>Now when we look at the document "i am the second document in the corpus", its SFV is {1,3*0,1,1,1,1,6*0,1,2,3*0}, where n*0 denotes a run of n zeros. The word "am" is the first ordinate in the dictionary and there is 1 instance of it in the SFV. The word "before" has no instances in the document, so its value is "0", and so on.</p>
<p>The function madlib.svec_sfv() can process large numbers of documents into their SFVs in parallel at high speed.</p>
<p>The rest of the categorization process is all vector math. The actual count is hardly ever used. Instead, it's turned into a weight. The most common weight is called tf/idf for Term Frequency / Inverse Document Frequency. The calculation for a given term in a given document is</p>
<pre class="example">
{#Times in document} * log {#Documents / #Documents the term appears in}.
</pre><p>For instance, the term "document" appears in 3 of the 4 documents, so in document 1 it would have weight 1 * log (4/3), and in document 4 it would have weight 2 * log (4/3). Terms that appear in every document would have tf/idf weight 0, since log (4/4) = log(1) = 0. (Our example has no term like that.) That usually sends a lot of values to 0.</p>
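<p>As a quick arithmetic check (a sketch assuming the natural logarithm, which is consistent with the &asymp;0.28 and &asymp;1.38 entries in the weights table shown further below): </p><pre class="example">
SELECT ln(4/3.0) AS idf_document,      -- "document" appears in 3 of the 4 documents
       1 * ln(4/3.0) AS weight_doc1,   -- one occurrence in document 1
       2 * ln(4/3.0) AS weight_doc4;   -- two occurrences in document 4
-- expected: roughly 0.288, 0.288 and 0.575
</pre>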
<p>For this part of the processing, we'll need to have a sparse vector of the dictionary dimension (19) with the values </p><pre class="example">
log(#documents/#Documents each term appears in).
</pre><p> There will be one such vector for the whole list of documents (aka the "corpus"). The #documents is just a count of all of the documents, in this case 4, but there is one divisor for each dictionary word, and its value is the number of documents in which that word appears. This single vector for the whole corpus can then be multiplied element-wise with each document SFV to produce the Term Frequency/Inverse Document Frequency weights.</p>
<p>This can be done as follows: </p><pre class="example">
CREATE TABLE corpus AS
(SELECT a, madlib.svec_sfv((SELECT a FROM features LIMIT 1),b) sfv
FROM documents);
CREATE TABLE weights AS
(SELECT a docnum, madlib.svec_mult(sfv, logidf) tf_idf
FROM (SELECT madlib.svec_log(madlib.svec_div(count(sfv)::madlib.svec,madlib.svec_count_nonzero(sfv))) logidf
FROM corpus) foo, corpus ORDER BY docnum);
SELECT * FROM weights;
</pre><p> Result: </p><pre class="result">
docnum | tf_idf
&#160;------+----------------------------------------------------------------------
1 | {4,1,1,1,2,3,1,2,1,1,1,1}:{0,0.69,0.28,0,0.69,0,1.38,0,0.28,0,1.38,0}
2 | {1,3,1,1,1,1,6,1,1,3}:{1.38,0,0.69,0.28,1.38,0.69,0,1.38,0.57,0}
3 | {2,2,5,1,2,1,1,2,1,1,1}:{0,1.38,0,0.69,1.38,0,1.38,0,0.69,0,1.38}
4 | {1,1,3,1,2,2,5,1,1,2}:{0,1.38,0,0.57,0,0.69,0,0.57,0.69,0}
</pre><p>We can now get the "angular distance" between one document and the rest of the documents using the ACOS of the normalized dot product (cosine similarity) of the document vectors. The following calculates the angular distance between the first document and each of the other documents: </p><pre class="example">
SELECT docnum,
180. * ( ACOS( madlib.svec_dmin( 1., madlib.svec_dot(tf_idf, testdoc)
/ (madlib.svec_l2norm(tf_idf)*madlib.svec_l2norm(testdoc))))/3.141592654) angular_distance
FROM weights,(SELECT tf_idf testdoc FROM weights WHERE docnum = 1 LIMIT 1) foo
ORDER BY 1;
</pre><p> Result: </p><pre class="result">
docnum | angular_distance
&#160;-------+------------------
1 | 0
2 | 78.8235846096986
3 | 89.9999999882484
4 | 80.0232034288617
</pre><p>We can see that the angular distance between document 1 and itself is 0 degrees and between document 1 and 3 is 90 degrees because they share no features at all. The angular distance can now be plugged into machine learning algorithms that rely on a distance measure between data points.</p>
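<p>For instance, a hypothetical follow-up query (a sketch reusing the <code>weights</code> table created above) that finds the document closest to document 1 by angular distance might look like: </p><pre class="example">
SELECT docnum,
       180. * ( ACOS( madlib.svec_dmin( 1., madlib.svec_dot(tf_idf, testdoc)
             / (madlib.svec_l2norm(tf_idf)*madlib.svec_l2norm(testdoc))))/3.141592654) angular_distance
FROM weights,
     (SELECT tf_idf testdoc FROM weights WHERE docnum = 1 LIMIT 1) foo
WHERE docnum &lt;&gt; 1
ORDER BY angular_distance
LIMIT 1;
-- based on the result above, this should return document 2 at roughly 78.8 degrees
</pre>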
<p>SVEC also provides functionality for declaring a vector given an array of positions and an array of values; positions in between those are set to a base value that the user provides in the same function call. In the example below, the first argument (an integer array) gives the positions for the second argument (an array of floats). Positions do not need to be sorted. The third argument is the desired size of the vector; this ensures the vector has that size even if the last position is smaller. If the size is &lt; 1, that value is ignored and the vector ends at the largest position in the position array. The final argument is a float giving the base value to use between the declared positions (0 is a common choice):</p>
<pre class="example">
SELECT madlib.svec_cast_positions_float8arr(ARRAY[1,2,7,5,87],ARRAY[.1,.2,.7,.5,.87],90,0.0);
</pre><p> Result: </p><pre class="result">
svec_cast_positions_float8arr
&#160;----------------------------------------------------
{1,1,2,1,1,1,79,1,3}:{0.1,0.2,0,0.5,0,0.7,0,0.87,0}
(1 row)
</pre><p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
<p>Other examples of svecs usage can be found in the k-means module, <a class="el" href="group__grp__kmeans.html">k-Means Clustering</a>.</p>
<p>File <a class="el" href="svec_8sql__in.html" title="SQL type definitions and functions for sparse vector data type svec ">svec.sql_in</a> documenting the SQL functions.</p>
</div><!-- contents -->
</div><!-- doc-content -->
</body>
</html>