| <!-- HTML header for doxygen 1.8.4--> |
| <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> |
| <meta http-equiv="X-UA-Compatible" content="IE=9"/> |
| <meta name="generator" content="Doxygen 1.8.13"/> |
| <meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/> |
| <title>MADlib: Latent Dirichlet Allocation</title> |
| <link href="tabs.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="jquery.js"></script> |
| <script type="text/javascript" src="dynsections.js"></script> |
| <link href="navtree.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="resize.js"></script> |
| <script type="text/javascript" src="navtreedata.js"></script> |
| <script type="text/javascript" src="navtree.js"></script> |
| <script type="text/javascript"> |
| $(document).ready(initResizable); |
| </script> |
| <link href="search/search.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="search/searchdata.js"></script> |
| <script type="text/javascript" src="search/search.js"></script> |
| <script type="text/javascript"> |
| $(document).ready(function() { init_search(); }); |
| </script> |
| <script type="text/x-mathjax-config"> |
| MathJax.Hub.Config({ |
| extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"], |
| jax: ["input/TeX","output/HTML-CSS"], |
| }); |
| </script><script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js"></script> |
| <!-- hack in the navigation tree --> |
| <script type="text/javascript" src="eigen_navtree_hacks.js"></script> |
| <link href="doxygen.css" rel="stylesheet" type="text/css" /> |
| <link href="madlib_extra.css" rel="stylesheet" type="text/css"/> |
| <!-- google analytics --> |
| <script> |
| (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ |
| (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), |
| m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) |
| })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); |
| ga('create', 'UA-45382226-1', 'madlib.apache.org'); |
| ga('send', 'pageview'); |
| </script> |
| </head> |
| <body> |
| <div id="top"><!-- do not remove this div, it is closed by doxygen! --> |
| <div id="titlearea"> |
| <table cellspacing="0" cellpadding="0"> |
| <tbody> |
| <tr style="height: 56px;"> |
| <td id="projectlogo"><a href="http://madlib.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td> |
| <td style="padding-left: 0.5em;"> |
| <div id="projectname"> |
| <span id="projectnumber">1.16</span> |
| </div> |
| <div id="projectbrief">User Documentation for Apache MADlib</div> |
| </td> |
| <td> <div id="MSearchBox" class="MSearchBoxInactive"> |
| <span class="left"> |
| <img id="MSearchSelect" src="search/mag_sel.png" |
| onmouseover="return searchBox.OnSearchSelectShow()" |
| onmouseout="return searchBox.OnSearchSelectHide()" |
| alt=""/> |
| <input type="text" id="MSearchField" value="Search" accesskey="S" |
| onfocus="searchBox.OnSearchFieldFocus(true)" |
| onblur="searchBox.OnSearchFieldFocus(false)" |
| onkeyup="searchBox.OnSearchFieldChange(event)"/> |
| </span><span class="right"> |
| <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a> |
| </span> |
| </div> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <!-- end header part --> |
| <!-- Generated by Doxygen 1.8.13 --> |
| <script type="text/javascript"> |
| var searchBox = new SearchBox("searchBox", "search",false,'Search'); |
| </script> |
| </div><!-- top --> |
| <div id="side-nav" class="ui-resizable side-nav-resizable"> |
| <div id="nav-tree"> |
| <div id="nav-tree-contents"> |
| <div id="nav-sync" class="sync"></div> |
| </div> |
| </div> |
| <div id="splitbar" style="-moz-user-select:none;" |
| class="ui-resizable-handle"> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| $(document).ready(function(){initNavTree('group__grp__lda.html','');}); |
| </script> |
| <div id="doc-content"> |
| <!-- window showing the filter options --> |
| <div id="MSearchSelectWindow" |
| onmouseover="return searchBox.OnSearchSelectShow()" |
| onmouseout="return searchBox.OnSearchSelectHide()" |
| onkeydown="return searchBox.OnSearchSelectKey(event)"> |
| </div> |
| |
| <!-- iframe showing the search results (closed by default) --> |
| <div id="MSearchResultsWindow"> |
| <iframe src="javascript:void(0)" frameborder="0" |
| name="MSearchResults" id="MSearchResults"> |
| </iframe> |
| </div> |
| |
| <div class="header"> |
| <div class="headertitle"> |
| <div class="title">Latent Dirichlet Allocation<div class="ingroups"><a class="el" href="group__grp__unsupervised.html">Unsupervised Learning</a> » <a class="el" href="group__grp__topic__modelling.html">Topic Modelling</a></div></div> </div> |
| </div><!--header--> |
| <div class="contents"> |
| <div class="toc"><b>Contents</b> <ul> |
| <li> |
| <a href="#background">Background</a> </li> |
| <li> |
| <a href="#train">Training Function</a> </li> |
| <li> |
| <a href="#predict">Prediction Function</a> </li> |
| <li> |
| <a href="#perplexity">Perplexity</a> </li> |
| <li> |
| <a href="#helper">Helper Functions</a> </li> |
| <li> |
| <a href="#examples">Examples</a> </li> |
| <li> |
| <a href="#literature">Literature</a> </li> |
| <li> |
| <a href="#related">Related Topics</a></li> |
| </ul> |
| </div><p>Latent Dirichlet Allocation (LDA) is a generative probabilistic model for natural texts. It is used in problems such as automated topic discovery, collaborative filtering, and document classification.</p> |
| <p>In addition to an implementation of LDA, this MADlib module also provides a number of additional helper functions to interpret results of the LDA output.</p> |
| <dl class="section note"><dt>Note</dt><dd>Topic modeling is often used as part of a larger text processing pipeline, which may include operations such as term frequency, stemming and stop word removal. You can use the function <a href="group__grp__text__utilities.html">Term Frequency</a> to generate the required vocabulary format from raw documents for the LDA training function. See the examples later on this page for more details.</dd></dl> |
| <p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Background</dt><dd></dd></dl> |
| <p>The LDA model posits that each document is associated with a mixture of various topics (e.g., a document is related to Topic 1 with probability 0.7, and Topic 2 with probability 0.3), and that each word in the document is attributable to one of the document's topics. There is a (symmetric) Dirichlet prior with parameter \( \alpha \) on each document's topic mixture. In addition, there is another (symmetric) Dirichlet prior with parameter \( \beta \) on the distribution of words for each topic.</p> |
| <p>The following generative process then defines a distribution over a corpus of documents:</p> |
| <ul> |
| <li>Sample for each topic \( i \), a per-topic word distribution \( \phi_i \) from the Dirichlet( \(\beta\)) prior.</li> |
| <li>For each document:<ul> |
| <li>Sample a document length N from a suitable distribution, say, Poisson.</li> |
| <li>Sample a topic mixture \( \theta \) for the document from the Dirichlet( \(\alpha\)) distribution.</li> |
| <li>For each of the N words:<ul> |
| <li>Sample a topic \( z_n \) from the multinomial topic distribution \( \theta \).</li> |
| <li>Sample a word \( w_n \) from the multinomial word distribution \( \phi_{z_n} \) associated with topic \( z_n \).</li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
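<p>Writing this process out, the joint distribution over the observed words \( w \), the latent topic assignments \( z \), the per-document topic mixtures \( \theta \), and the per-topic word distributions \( \phi \) factorizes in the standard LDA form \[ P(w, z, \theta, \phi \mid \alpha, \beta) \;=\; \prod_{i=1}^{T} P(\phi_i \mid \beta) \; \prod_{d=1}^{D} \left( P(\theta_d \mid \alpha) \prod_{n=1}^{N_d} P(z_{d,n} \mid \theta_d)\, P(w_{d,n} \mid \phi_{z_{d,n}}) \right), \] where \( T \) is the number of topics, \( D \) the number of documents, and \( N_d \) the length of document \( d \).</p>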
| <p>In practice, only the words in each document are observable. The topic mixture of each document and the topic for each word in each document are latent unobservable variables that need to be inferred from the observables, and this is referred to as the inference problem for LDA. Exact inference is intractable, but several approximate inference algorithms for LDA have been developed. The simple and effective Gibbs sampling algorithm described in Griffiths and Steyvers [2] appears to be the current algorithm of choice.</p> |
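<p>Concretely, the collapsed Gibbs sampler of [2] integrates out \( \theta \) and \( \phi \) and repeatedly resamples the topic of each word occurrence \( i \) from its conditional distribution given all other assignments, \[ P(z_i = j \mid z_{-i}, w) \;\propto\; \frac{n^{(w_i)}_{-i,j} + \beta}{n^{(\cdot)}_{-i,j} + W\beta} \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i,\cdot} + T\alpha}, \] where \( n^{(w_i)}_{-i,j} \) is the number of times word \( w_i \) is assigned to topic \( j \), \( n^{(d_i)}_{-i,j} \) is the number of words in document \( d_i \) assigned to topic \( j \) (both counts excluding the current position \( i \)), \( W \) is the vocabulary size, and \( T \) is the number of topics.</p>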
<p>This implementation provides a parallel and scalable in-database solution for LDA based on Gibbs sampling. It takes advantage of the shared-nothing MPP architecture and differs from implementations designed for MPI or MapReduce frameworks.</p>
| <p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The LDA training function has the following syntax: <pre class="syntax"> |
| lda_train( data_table, |
| model_table, |
| output_data_table, |
| voc_size, |
| topic_num, |
| iter_num, |
| alpha, |
| beta |
| ) |
| </pre> <b>Arguments</b> <dl class="arglist"> |
| <dt>data_table </dt> |
| <dd><p class="startdd">TEXT. Name of the table storing the training dataset. Each row is in the form <code><docid, wordid, count></code> where <code>docid</code>, <code>wordid</code>, and <code>count</code> are non-negative integers. The <code>docid</code> column refers to the document ID, the <code>wordid</code> column is the word ID (the index of a word in the vocabulary), and <code>count</code> is the number of occurrences of the word in the document. Please note:</p> |
| <ul> |
<li><code>wordid</code> values must be contiguous integers going from 0 to <code>voc_size</code> − <code>1</code>.</li>
| <li>column names for <code>docid</code>, <code>wordid</code>, and <code>count</code> are currently fixed, so you must use these exact names in the data_table.</li> |
| </ul> |
| <p>The function <a href="group__grp__text__utilities.html">Term Frequency</a> can be used to generate vocabulary in the required format from raw documents. </p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>model_table </dt> |
| <dd>TEXT. This is an output table generated by LDA which contains the learned model. It has one row with the following columns: <table class="output"> |
| <tr> |
| <th>voc_size </th><td>INTEGER. Size of the vocabulary. As mentioned above for the input table, <code>wordid</code> consists of contiguous integers going from 0 to <code>voc_size</code> − <code>1</code>. </td></tr> |
| <tr> |
| <th>topic_num </th><td>INTEGER. Number of topics. </td></tr> |
| <tr> |
| <th>alpha </th><td>DOUBLE PRECISION. Dirichlet prior for the per-document topic multinomial. </td></tr> |
| <tr> |
| <th>beta </th><td>DOUBLE PRECISION. Dirichlet prior for the per-topic word multinomial. </td></tr> |
| <tr> |
| <th>model </th><td>BIGINT[]. The encoded model description (not human readable). </td></tr> |
| </table> |
| </dd> |
| <dt>output_data_table </dt> |
| <dd>TEXT. The name of the table generated by LDA that stores the output data. It has the following columns: <table class="output"> |
| <tr> |
| <th>docid </th><td>INTEGER. Document id from input 'data_table'. </td></tr> |
| <tr> |
<th>wordcount </th><td>INTEGER. Total number of words in the document, including repeats. For example, if a word appears 3 times in the document, it is counted 3 times. </td></tr>
| <tr> |
<th>words </th><td>INTEGER[]. Array of the distinct <code>wordid</code> values appearing in the document, not including repeats. For example, if a word appears 3 times in the document, it appears only once in the <code>words</code> array. </td></tr>
| <tr> |
<th>counts </th><td>INTEGER[]. Frequency of occurrence of each word in the document, indexed the same as the <code>words</code> array above. For example, if the 2nd element of the <code>counts</code> array is 4, it means that the word in the 2nd element of the <code>words</code> array occurs 4 times in the document. </td></tr>
| <tr> |
<th>topic_count </th><td>INTEGER[]. Array of the count of words in the document that correspond to each topic. This array is of length <code>topic_num</code>. Topic ids are contiguous integers going from 0 to <code>topic_num</code> − <code>1</code>. </td></tr>
| <tr> |
| <th>topic_assignment </th><td>INTEGER[]. Array indicating which topic each word in the document corresponds to. This array is of length <code>wordcount</code>. Words that are repeated <code>n</code> times in the document will show up consecutively <code>n</code> times in this array. </td></tr> |
| </table> |
| </dd> |
| <dt>voc_size </dt> |
<dd>INTEGER. Size of the vocabulary. As mentioned above for the input 'data_table', <code>wordid</code> consists of contiguous integers going from 0 to <code>voc_size</code> − <code>1</code>. </dd>
| <dt>topic_num </dt> |
| <dd>INTEGER. Desired number of topics. </dd> |
| <dt>iter_num </dt> |
| <dd>INTEGER. Desired number of iterations. </dd> |
| <dt>alpha </dt> |
| <dd>DOUBLE PRECISION. Dirichlet prior for the per-document topic multinomial (e.g., 50/topic_num is a reasonable value to start with as per Griffiths and Steyvers [2] ). </dd> |
| <dt>beta </dt> |
| <dd>DOUBLE PRECISION. Dirichlet prior for the per-topic word multinomial (e.g., 0.01 is a reasonable value to start with). </dd> |
| </dl> |
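<p>To make the argument list concrete, here is a minimal sketch of a training call. The table names <code>my_corpus_tf</code>, <code>my_lda_model</code>, and <code>my_lda_output</code> are hypothetical placeholders; a complete worked example appears in the Examples section below: </p><pre class="example">
-- Minimal sketch (hypothetical table names); assumes 'my_corpus_tf' holds
-- (docid, wordid, count) rows over a vocabulary of 1000 words.
SELECT madlib.lda_train( 'my_corpus_tf',   -- input term-frequency table
                         'my_lda_model',   -- output model table
                         'my_lda_output',  -- output data table
                         1000,             -- voc_size
                         10,               -- topic_num
                         20,               -- iter_num
                         5,                -- alpha (about 50/topic_num)
                         0.01              -- beta
                       );
</pre>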
| </dd></dl> |
| <p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd></dd></dl> |
| <p>Prediction involves labelling test documents using a learned LDA model: </p><pre class="syntax"> |
| lda_predict( data_table, |
| model_table, |
| output_predict_table |
| ); |
| </pre><p> <b>Arguments</b> </p><dl class="arglist"> |
| <dt>data_table </dt> |
| <dd>TEXT. Name of the table storing the test dataset (new document to be labeled). </dd> |
| <dt>model_table </dt> |
| <dd>TEXT. The model table generated by the training process. </dd> |
| <dt>output_predict_table </dt> |
| <dd>TEXT. The prediction output table. Each row in the table stores the topic distribution and the topic assignments for a document in the dataset. This table has the exact same columns and interpretation as the 'output_data_table' from the training function above. </dd> |
| </dl> |
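<p>As a minimal sketch (with the same hypothetical table names as above, plus a hypothetical <code>my_new_docs_tf</code> test table in the same <code>&lt;docid, wordid, count&gt;</code> format as the training data), a prediction call looks like this; see the Examples section for a complete run: </p><pre class="example">
-- Minimal sketch (hypothetical table names); 'my_new_docs_tf' must use the
-- same vocabulary (wordid values) as the table used to train 'my_lda_model'.
SELECT madlib.lda_predict( 'my_new_docs_tf',    -- new documents in term-frequency form
                           'my_lda_model',      -- model table produced by lda_train
                           'my_predict_output'  -- output table created by this call
                         );
</pre>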
| <p><a class="anchor" id="perplexity"></a></p><dl class="section user"><dt>Perplexity</dt><dd>Perplexity describes how well the model fits the data by computing word likelihoods averaged over the test documents. This function returns a single perplexity value. <pre class="syntax"> |
| lda_get_perplexity( model_table, |
| output_predict_table |
| ); |
| </pre> <b>Arguments</b> <dl class="arglist"> |
| <dt>model_table </dt> |
| <dd>TEXT. The model table generated by the training process. </dd> |
| <dt>output_predict_table </dt> |
| <dd>TEXT. The prediction output table generated by the predict function above. </dd> |
| </dl> |
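<p>For reference, perplexity is conventionally defined as the exponentiated negative average log-likelihood of the observed words, \[ \mbox{perplexity} \;=\; \exp\left\{ -\, \frac{\sum_{d} \log p(w_d)}{\sum_{d} N_d} \right\}, \] where \( N_d \) is the number of words in document \( d \); lower values indicate a better fit to the data.</p>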
| </dd></dl> |
| <p><a class="anchor" id="helper"></a></p><dl class="section user"><dt>Helper Functions</dt><dd></dd></dl> |
<p>The following helper functions can be used to interpret the output from LDA training and LDA prediction.</p>
| <p><b>Topic description by top-k words with highest probability</b></p> |
| <p>Applies to LDA training only.</p> |
| <pre class="syntax"> |
| lda_get_topic_desc( model_table, |
| vocab_table, |
| output_table, |
| top_k |
| ) |
| </pre><p> <b>Arguments</b> </p><dl class="arglist"> |
| <dt>model_table </dt> |
| <dd>TEXT. The model table generated by the training process. </dd> |
| <dt>vocab_table </dt> |
<dd>TEXT. The vocabulary table in the form <code>&lt;wordid, word&gt;</code>. Reminder that this table can be created using the <code>term_frequency</code> function (<a class="el" href="group__grp__text__utilities.html">Term Frequency</a>) with the parameter <code>compute_vocab</code> set to TRUE. </dd>
| <dt>output_table </dt> |
| <dd>TEXT. The output table with per-topic description generated by this helper function. It has the following columns: <table class="output"> |
| <tr> |
| <th>topicid </th><td>INTEGER. Topic id. </td></tr> |
| <tr> |
| <th>wordid </th><td>INTEGER. Word id. </td></tr> |
| <tr> |
| <th>prob </th><td>DOUBLE PRECISION. Probability that this topic will generate the word. </td></tr> |
| <tr> |
| <th>word </th><td>TEXT. Word in text form. </td></tr> |
| </table> |
| </dd> |
| <dt>top_k </dt> |
<dd>INTEGER. The desired number of top words to show for each topic. </dd>
| </dl> |
| <p><b>Per-word topic counts</b></p> |
| <p>Applies to LDA training only.</p> |
| <pre class="syntax"> |
| lda_get_word_topic_count( model_table, |
| output_table |
| ) |
| </pre><p> <b>Arguments</b> </p><dl class="arglist"> |
| <dt>model_table </dt> |
| <dd>TEXT. The model table generated by the training process. </dd> |
| <dt>output_table </dt> |
| <dd>TEXT. The output table with per-word topic counts generated by this helper function. It has the following columns: <table class="output"> |
| <tr> |
| <th>wordid </th><td>INTEGER. Word id. </td></tr> |
| <tr> |
<th>topic_count </th><td>INTEGER[]. Count of the word's association with each topic, i.e., how many times the word is assigned to each topic. The array has one entry per topic (length <code>topic_num</code>). </td></tr>
| </table> |
| </dd> |
| </dl> |
| <p><b>Per-topic word counts</b></p> |
| <p>Applies to LDA training only.</p> |
| <pre class="syntax"> |
| lda_get_topic_word_count( model_table, |
| output_table |
| ) |
| </pre><p> <b>Arguments</b> </p><dl class="arglist"> |
| <dt>model_table </dt> |
| <dd>TEXT. The model table generated by the training process. </dd> |
| <dt>output_table </dt> |
| <dd>TEXT. The output table with per-topic word counts generated by this helper function. It has the following columns: <table class="output"> |
| <tr> |
| <th>topicid </th><td>INTEGER. Topic id. </td></tr> |
| <tr> |
<th>word_count </th><td>INTEGER[]. Array of per-word counts for the topic, i.e., how many times each word is assigned to this topic, indexed by <code>wordid</code>. The array has one entry per word in the vocabulary (length <code>voc_size</code>). </td></tr>
| </table> |
| </dd> |
| </dl> |
| <p><b>Per-document word to topic mapping</b></p> |
| <p>Applies to both LDA training and LDA prediction.</p> |
| <pre class="syntax"> |
| lda_get_word_topic_mapping( output_data_table, -- From training or prediction |
| output_table |
| ) |
| </pre><p> <b>Arguments</b> </p><dl class="arglist"> |
| <dt>output_data_table </dt> |
| <dd>TEXT. The output data table generated by either LDA training or LDA prediction. </dd> |
| <dt>output_table </dt> |
| <dd>TEXT. The output table with word to topic mappings generated by this helper function. It has the following columns: <table class="output"> |
| <tr> |
| <th>docid </th><td>INTEGER. Document id. </td></tr> |
| <tr> |
| <th>wordid </th><td>INTEGER. Word id. </td></tr> |
| <tr> |
| <th>topicid </th><td>INTEGER. Topic id. </td></tr> |
| </table> |
| </dd> |
| </dl> |
| <p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl> |
| <ol type="1"> |
| <li>Prepare a training dataset for LDA. The examples below are small strings extracted from various Wikipedia documents: <pre class="example"> |
| DROP TABLE IF EXISTS documents; |
| CREATE TABLE documents(docid INT4, contents TEXT); |
| INSERT INTO documents VALUES |
| (0, 'Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora.'), |
| (1, 'By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history.'), |
| (2, 'Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field.'), |
| (3, 'California''s diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area.'); |
</pre> You can apply stemming, stop-word removal and tokenization at this point to prepare the documents for text processing. Depending on your database version, various tools are available; for example, databases based on more recent versions of PostgreSQL can use the built-in text search functions: <pre class="example">
| SELECT tsvector_to_array(to_tsvector('english',contents)) from documents; |
| </pre> <pre class="result"> |
| tsvector_to_array |
| +----------------------------------------------------------------------- |
| {analyz,bayesian,class,content,corpora,develop,document,larg,...} |
| {1960s,1968,301,american,averag,balanc,bat,carl,favor,histori,...} |
| {also,applic,close,comput,deliv,disciplin,domain,field,learn,...} |
| {agricultur,area,california,center,central,coast,desert,divers,...} |
| (4 rows) |
| </pre> In this example, we assume a database based on an older version of PostgreSQL and just perform basic punctuation removal and tokenization. The array of words is added as a new column to the documents table: <pre class="example"> |
| ALTER TABLE documents ADD COLUMN words TEXT[]; |
| UPDATE documents SET words = |
| regexp_split_to_array(lower( |
| regexp_replace(contents, E'[,.;\']','', 'g') |
| ), E'[\\s+]'); |
| SELECT * FROM documents ORDER BY docid; |
| </pre> <pre class="result"> |
| -[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| docid | 0 |
| contents | Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora. |
| words | {statistical,topic,models,are,a,class,of,bayesian,latent,variable,models,originally,developed,for,analyzing,the,semantic,content,of,large,document,corpora} |
| -[ RECORD 2 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| docid | 1 |
| contents | By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history. |
| words | {by,the,late,1960s,the,balance,between,pitching,and,hitting,had,swung,in,favor,of,the,pitchers,in,1968,carl,yastrzemski,won,the,american,league,batting,title,with,an,average,of,just,301,the,lowest,in,history} |
| -[ RECORD 3 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| docid | 2 |
| contents | Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field. |
| words | {machine,learning,is,closely,related,to,and,often,overlaps,with,computational,statistics,a,discipline,that,also,specializes,in,prediction-making,it,has,strong,ties,to,mathematical,optimization,which,deliver,methods,theory,and,application,domains,to,the,field} |
| -[ RECORD 4 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| docid | 3 |
| contents | California's diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area. |
| words | {californias,diverse,geography,ranges,from,the,sierra,nevada,in,the,east,to,the,pacific,coast,in,the,west,from,the,redwood–douglas,fir,forests,of,the,northwest,to,the,mojave,desert,areas,in,the,southeast,the,center,of,the,state,is,dominated,by,the,central,valley,a,major,agricultural,area} |
| </pre></li> |
| <li>Build a word count table by extracting the words and building a histogram for each document using the <code>term_frequency</code> function (<a class="el" href="group__grp__text__utilities.html">Term Frequency</a>). <pre class="example"> |
| DROP TABLE IF EXISTS documents_tf, documents_tf_vocabulary; |
| SELECT madlib.term_frequency('documents', -- input table |
| 'docid', -- document id column |
| 'words', -- vector of words in document |
| 'documents_tf', -- output documents table with term frequency |
                         TRUE);          -- TRUE to create the vocabulary table
| SELECT * FROM documents_tf ORDER BY docid LIMIT 20; |
| </pre> <pre class="result"> |
| docid | wordid | count |
| -------+--------+------- |
| 0 | 71 | 1 |
| 0 | 90 | 1 |
| 0 | 56 | 1 |
| 0 | 68 | 2 |
| 0 | 85 | 1 |
| 0 | 28 | 1 |
| 0 | 35 | 1 |
| 0 | 54 | 1 |
| 0 | 64 | 2 |
| 0 | 8 | 1 |
| 0 | 29 | 1 |
| 0 | 80 | 1 |
| 0 | 24 | 1 |
| 0 | 11 | 1 |
| 0 | 17 | 1 |
| 0 | 32 | 1 |
| 0 | 3 | 1 |
| 0 | 42 | 1 |
| 0 | 97 | 1 |
| 0 | 95 | 1 |
| (20 rows) |
| </pre> Here is the associated vocabulary table. Note that wordid starts at 0: <pre class="example"> |
| SELECT * FROM documents_tf_vocabulary ORDER BY wordid LIMIT 20; |
| </pre> <pre class="result"> |
| wordid | word |
| --------+-------------- |
| 0 | 1960s |
| 1 | 1968 |
| 2 | 301 |
| 3 | a |
| 4 | agricultural |
| 5 | also |
| 6 | american |
| 7 | an |
| 8 | analyzing |
| 9 | and |
| 10 | application |
| 11 | are |
| 12 | area |
| 13 | areas |
| 14 | average |
| 15 | balance |
| 16 | batting |
| 17 | bayesian |
| 18 | between |
| 19 | by |
| (20 rows) |
</pre> The vocabulary size, i.e., the number of distinct words across all documents, is: <pre class="example">
| SELECT COUNT(*) FROM documents_tf_vocabulary; |
| </pre> <pre class="result"> |
| count |
| +------ |
| 103 |
| (1 row) |
| </pre></li> |
<li>Train the LDA model. For the Dirichlet priors we use rule-of-thumb starting values of 50/(number of topics) for alpha and 0.01 for beta. Reminder that column names for docid, wordid, and count are currently fixed, so you must use these exact names in the input table. After a successful run of the LDA training function, two tables are generated, one storing the learned model and the other storing the output data. <pre class="example">
| DROP TABLE IF EXISTS lda_model, lda_output_data; |
| SELECT madlib.lda_train( 'documents_tf', -- documents table in the form of term frequency |
| 'lda_model', -- model table created by LDA training (not human readable) |
| 'lda_output_data', -- readable output data table |
| 103, -- vocabulary size |
| 5, -- number of topics |
| 10, -- number of iterations |
| 5, -- Dirichlet prior for the per-doc topic multinomial (alpha) |
| 0.01 -- Dirichlet prior for the per-topic word multinomial (beta) |
| ); |
| SELECT * FROM lda_output_data ORDER BY docid; |
| </pre> <pre class="result"> |
| -[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 0 |
| wordcount | 22 |
| words | {24,17,11,95,90,85,68,54,42,35,28,8,3,97,80,71,64,56,32,29} |
| counts | {1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,1,1,1} |
| topic_count | {4,2,4,3,9} |
| topic_assignment | {4,2,4,1,2,1,2,2,0,3,4,4,3,0,0,4,0,4,4,4,3,4} |
| -[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 1 |
| wordcount | 37 |
| words | {1,50,49,46,19,16,14,9,7,0,90,68,57,102,101,100,93,88,75,74,59,55,53,48,39,21,18,15,6,2} |
| counts | {1,3,1,1,1,1,1,1,1,1,5,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1} |
| topic_count | {2,5,14,9,7} |
| topic_assignment | {0,3,3,3,1,4,2,2,2,1,3,1,2,2,2,2,2,2,2,1,4,3,2,0,4,2,4,2,3,4,3,1,3,4,3,2,4} |
| -[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 2 |
| wordcount | 36 |
| words | {10,27,33,40,47,51,58,62,63,69,72,83,100,99,94,92,91,90,89,87,86,79,76,70,60,52,50,36,30,25,9,5,3} |
| counts | {1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1} |
| topic_count | {15,10,1,7,3} |
| topic_assignment | {0,3,1,3,0,0,3,3,1,0,1,0,0,0,0,1,1,0,4,2,0,4,1,0,1,0,0,4,3,3,3,0,1,1,1,0} |
| -[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 3 |
| wordcount | 49 |
| words | {77,78,81,82,67,65,51,45,44,43,34,26,13,98,96,94,90,84,73,68,66,61,50,41,38,37,31,23,22,20,19,12,4,3} |
| counts | {1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2,11,1,1,2,1,1,3,1,1,1,1,1,1,1,1,1,1,1} |
| topic_count | {5,5,26,5,8} |
| topic_assignment | {4,4,4,0,2,0,0,2,4,4,2,2,2,1,2,4,1,0,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,4,3,3,3,2,3,2,3,2,1,4,2,2,1,0} |
| </pre></li> |
<li>Review the learned model using helper functions. First, we get the topic description by top-k words, i.e., the k words with the highest probability for each topic. Note that if there are ties in probability, more than k words may actually be reported for a topic. Also note that topicid starts at 0: <pre class="example">
| DROP TABLE IF EXISTS helper_output_table; |
| SELECT madlib.lda_get_topic_desc( 'lda_model', -- LDA model generated in training |
| 'documents_tf_vocabulary', -- vocabulary table that maps wordid to word |
| 'helper_output_table', -- output table for per-topic descriptions |
| 5); -- k: number of top words for each topic |
| SELECT * FROM helper_output_table ORDER BY topicid, prob DESC LIMIT 40; |
| </pre> <pre class="result"> |
| topicid | wordid | prob | word |
| ---------+--------+--------------------+------------------- |
| 0 | 3 | 0.111357750647429 | a |
| 0 | 51 | 0.074361820199778 | is |
| 0 | 94 | 0.074361820199778 | to |
| 0 | 70 | 0.0373658897521273 | optimization |
| 0 | 82 | 0.0373658897521273 | southeast |
| 0 | 60 | 0.0373658897521273 | machine |
| 0 | 71 | 0.0373658897521273 | originally |
| 0 | 69 | 0.0373658897521273 | often |
| 0 | 99 | 0.0373658897521273 | which |
| 0 | 83 | 0.0373658897521273 | specializes |
| 0 | 1 | 0.0373658897521273 | 1968 |
| 0 | 97 | 0.0373658897521273 | variable |
| 0 | 25 | 0.0373658897521273 | closely |
| 0 | 93 | 0.0373658897521273 | title |
| 0 | 47 | 0.0373658897521273 | has |
| 0 | 65 | 0.0373658897521273 | mojave |
| 0 | 79 | 0.0373658897521273 | related |
| 0 | 89 | 0.0373658897521273 | that |
| 0 | 10 | 0.0373658897521273 | application |
| 0 | 100 | 0.0373658897521273 | with |
| 0 | 92 | 0.0373658897521273 | ties |
| 0 | 54 | 0.0373658897521273 | large |
| 1 | 94 | 0.130699088145897 | to |
| 1 | 9 | 0.130699088145897 | and |
| 1 | 5 | 0.0438558402084238 | also |
| 1 | 57 | 0.0438558402084238 | league |
| 1 | 49 | 0.0438558402084238 | hitting |
| 1 | 13 | 0.0438558402084238 | areas |
| 1 | 39 | 0.0438558402084238 | favor |
| 1 | 85 | 0.0438558402084238 | statistical |
| 1 | 95 | 0.0438558402084238 | topic |
| 1 | 0 | 0.0438558402084238 | 1960s |
| 1 | 76 | 0.0438558402084238 | prediction-making |
| 1 | 86 | 0.0438558402084238 | statistics |
| 1 | 84 | 0.0438558402084238 | state |
| 1 | 72 | 0.0438558402084238 | overlaps |
| 1 | 22 | 0.0438558402084238 | center |
| 1 | 4 | 0.0438558402084238 | agricultural |
| 1 | 63 | 0.0438558402084238 | methods |
| 1 | 33 | 0.0438558402084238 | discipline |
| (40 rows) |
| </pre> Get the per-word topic counts. This mapping shows how many times a given word is assigned to a topic. E.g., wordid 3 is assigned to topicid 0 three times: <pre class="example"> |
| DROP TABLE IF EXISTS helper_output_table; |
| SELECT madlib.lda_get_word_topic_count( 'lda_model', -- LDA model generated in training |
| 'helper_output_table'); -- output table for per-word topic counts |
| SELECT * FROM helper_output_table ORDER BY wordid LIMIT 20; |
| </pre> <pre class="result"> |
| wordid | topic_count |
| --------+------------- |
| 0 | {0,1,0,0,0} |
| 1 | {1,0,0,0,0} |
| 2 | {1,0,0,0,0} |
| 3 | {3,0,0,0,0} |
| 4 | {0,0,0,0,1} |
| 5 | {0,1,0,0,0} |
| 6 | {1,0,0,0,0} |
| 7 | {0,0,0,1,0} |
| 8 | {0,1,0,0,0} |
| 9 | {0,0,0,3,0} |
| 10 | {1,0,0,0,0} |
| 11 | {1,0,0,0,0} |
| 12 | {0,0,1,0,0} |
| 13 | {0,0,0,0,1} |
| 14 | {0,1,0,0,0} |
| 15 | {0,0,0,0,1} |
| 16 | {0,1,0,0,0} |
| 17 | {0,0,1,0,0} |
| 18 | {1,0,0,0,0} |
| 19 | {2,0,0,0,0} |
| (20 rows) |
| </pre> Get the per-topic word counts. This mapping shows which words are associated with each topic by frequency: <pre class="example"> |
| DROP TABLE IF EXISTS topic_word_count; |
| SELECT madlib.lda_get_topic_word_count( 'lda_model', |
| 'topic_word_count'); |
| SELECT * FROM topic_word_count ORDER BY topicid; |
| </pre> <pre class="result"> |
| -[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| topicid | 1 |
| word_count | {1,1,0,0,0,0,0,1,1,0,1,0,0,0,0,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0} |
| -[ RECORD 2 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| topicid | 2 |
| word_count | {0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,1,2,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0,4,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,5,0,1,0,0,1,0,0,0} |
| -[ RECORD 3 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| topicid | 3 |
| word_count | {0,0,0,0,0,0,0,0,0,3,0,1,0,1,1,0,0,0,0,2,0,0,0,0,1,0,0,1,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,2,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0} |
| -[ RECORD 4 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| topicid | 4 |
| word_count | {0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,1,1,1,0,0,0,1,0,0,0,0,0,0,1,0,7,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,1,0,0,0,0,1,0,0,0,0,1,1,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,1} |
| -[ RECORD 5 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| topicid | 5 |
| word_count | {0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,18,0,0,0,0,0,0,0,1,0,2,0,0} |
| </pre> Get the per-document word to topic mapping: <pre class="example"> |
| DROP TABLE IF EXISTS helper_output_table; |
| SELECT madlib.lda_get_word_topic_mapping('lda_output_data', -- Output table from training |
| 'helper_output_table'); |
| SELECT * FROM helper_output_table ORDER BY docid LIMIT 40; |
| </pre> <pre class="result"> |
| docid | wordid | topicid |
| -------+--------+--------- |
| 0 | 56 | 1 |
| 0 | 54 | 1 |
| 0 | 42 | 2 |
| 0 | 35 | 1 |
| 0 | 32 | 1 |
| 0 | 29 | 3 |
| 0 | 28 | 4 |
| 0 | 24 | 3 |
| 0 | 17 | 2 |
| 0 | 11 | 0 |
| 0 | 8 | 1 |
| 0 | 3 | 0 |
| 0 | 97 | 0 |
| 0 | 95 | 3 |
| 0 | 90 | 0 |
| 0 | 85 | 0 |
| 0 | 80 | 2 |
| 0 | 71 | 2 |
| 0 | 68 | 0 |
| 0 | 64 | 1 |
| 1 | 2 | 0 |
| 1 | 1 | 0 |
| 1 | 0 | 1 |
| 1 | 102 | 4 |
| 1 | 101 | 2 |
| 1 | 100 | 1 |
| 1 | 93 | 3 |
| 1 | 90 | 2 |
| 1 | 90 | 0 |
| 1 | 88 | 1 |
| 1 | 75 | 1 |
| 1 | 74 | 3 |
| 1 | 68 | 0 |
| 1 | 59 | 2 |
| 1 | 57 | 4 |
| 1 | 55 | 3 |
| 1 | 53 | 3 |
| 1 | 50 | 0 |
| 1 | 49 | 1 |
| 1 | 48 | 0 |
| (40 rows) |
| </pre></li> |
<li>Use a learned LDA model for prediction (that is, to label new documents). In this example, we use the same input table that we used for training, just for demonstration purposes. Normally, prediction is run on new, unseen documents. <pre class="example">
| DROP TABLE IF EXISTS outdata_predict; |
| SELECT madlib.lda_predict( 'documents_tf', -- Document to predict |
| 'lda_model', -- LDA model from training |
| 'outdata_predict' -- Output table for predict results |
| ); |
| SELECT * FROM outdata_predict; |
| </pre> <pre class="result"> |
| -[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 0 |
| wordcount | 22 |
| words | {17,11,28,29,95,3,32,97,85,35,54,80,64,90,8,24,42,71,56,68} |
| counts | {1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2} |
| topic_count | {1,3,16,1,1} |
| topic_assignment | {2,2,1,0,2,2,2,3,2,2,2,2,2,2,4,2,2,2,2,2,1,1} |
| -[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 1 |
| wordcount | 37 |
| words | {90,101,2,88,6,7,75,46,74,68,39,9,48,49,102,50,59,53,55,57,100,14,15,16,18,19,93,21,0,1} |
| counts | {5,1,1,1,1,1,1,1,1,2,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1} |
| topic_count | {0,1,11,6,19} |
| topic_assignment | {4,4,4,4,4,4,4,4,4,2,4,2,2,1,3,2,2,4,4,4,3,3,3,4,3,3,2,4,4,2,2,4,2,4,2,4,2} |
| -[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 2 |
| wordcount | 36 |
| words | {90,3,5,9,10,25,27,30,33,36,40,47,50,51,52,58,60,62,63,69,70,72,76,79,83,86,87,89,91,92,94,99,100} |
| counts | {1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1} |
| topic_count | {26,3,5,1,1} |
| topic_assignment | {4,0,0,2,2,0,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0,2,0,2,0,0,0,0,0,1,1,1,0,0} |
| -[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------ |
| docid | 3 |
| wordcount | 49 |
| words | {41,38,3,77,78,94,37,81,82,19,84,34,96,13,31,98,90,51,26,61,23,22,50,65,66,67,45,44,68,4,12,43,20,73} |
| counts | {1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,11,1,1,1,1,1,3,1,1,1,1,2,2,1,1,1,1,1} |
| topic_count | {0,28,0,4,17} |
| topic_assignment | {1,1,4,1,1,1,1,1,1,4,1,1,1,3,1,1,1,4,4,4,4,4,4,4,4,4,4,4,4,1,1,1,4,3,3,3,1,1,4,4,1,1,1,1,1,1,1,1,1} |
| </pre> The test table is expected to be in the same form as the training table and can be created with the same process. The LDA prediction results have the same format as the output table generated by the LDA training function.</li> |
<li>Review the prediction using a helper function. (This is the same per-document word-to-topic mapping helper that we used on the learned model.) <pre class="example">
| DROP TABLE IF EXISTS helper_output_table; |
| SELECT madlib.lda_get_word_topic_mapping('outdata_predict', -- Output table from prediction |
| 'helper_output_table'); |
| SELECT * FROM helper_output_table ORDER BY docid LIMIT 40; |
| </pre> <pre class="result"> |
| docid | wordid | topicid |
| -------+--------+--------- |
| 0 | 54 | 4 |
| 0 | 42 | 1 |
| 0 | 35 | 4 |
| 0 | 32 | 4 |
| 0 | 29 | 4 |
| 0 | 28 | 1 |
| 0 | 24 | 4 |
| 0 | 17 | 1 |
| 0 | 11 | 4 |
| 0 | 8 | 4 |
| 0 | 3 | 0 |
| 0 | 97 | 4 |
| 0 | 95 | 1 |
| 0 | 90 | 2 |
| 0 | 85 | 4 |
| 0 | 80 | 0 |
| 0 | 71 | 0 |
| 0 | 68 | 0 |
| 0 | 64 | 4 |
| 0 | 64 | 1 |
| 0 | 56 | 4 |
| 1 | 2 | 4 |
| 1 | 1 | 4 |
| 1 | 0 | 2 |
| 1 | 102 | 4 |
| 1 | 101 | 4 |
| 1 | 100 | 4 |
| 1 | 93 | 4 |
| 1 | 90 | 2 |
| 1 | 90 | 0 |
| 1 | 88 | 2 |
| 1 | 75 | 2 |
| 1 | 74 | 0 |
| 1 | 68 | 0 |
| 1 | 59 | 4 |
| 1 | 57 | 2 |
| 1 | 55 | 2 |
| 1 | 53 | 1 |
| 1 | 50 | 0 |
| 1 | 49 | 2 |
| (40 rows) |
| </pre></li> |
| <li>Call the perplexity function to see how well the model fits the data. Perplexity computes word likelihoods averaged over the test documents. <pre class="example"> |
| SELECT madlib.lda_get_perplexity( 'lda_model', -- LDA model from training |
| 'outdata_predict' -- Prediction output |
| ); |
| </pre> <pre class="result"> |
| lda_get_perplexity |
| +-------------------- |
| 79.481894411824 |
| (1 row) |
| </pre></li> |
| </ol> |
| <p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl> |
| <p>[1] D.M. Blei, A.Y. Ng, M.I. Jordan, <em>Latent Dirichlet Allocation</em>, Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.</p> |
| <p>[2] T. Griffiths and M. Steyvers, <em>Finding scientific topics</em>, PNAS, vol. 101, pp. 5228-5235, 2004.</p> |
| <p>[3] Y. Wang, H. Bai, M. Stanton, W-Y. Chen, and E.Y. Chang, <em>lda: Parallel Dirichlet Allocation for Large-scale Applications</em>, AAIM, 2009.</p> |
| <p>[4] <a href="http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation">http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation</a></p> |
| <p>[5] J. Chang, Collapsed Gibbs sampling methods for topic models, R manual, 2010.</p> |
| <p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd>File <a class="el" href="lda_8sql__in.html" title="SQL functions for Latent Dirichlet Allocation. ">lda.sql_in</a> documenting the SQL functions. </dd></dl> |
| </div><!-- contents --> |
| </div><!-- doc-content --> |
| <!-- start footer part --> |
| <div id="nav-path" class="navpath"><!-- id is needed for treeview function! --> |
| <ul> |
| <li class="footer">Generated on Tue Jul 2 2019 22:35:52 for MADlib by |
| <a href="http://www.doxygen.org/index.html"> |
| <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li> |
| </ul> |
| </div> |
| </body> |
| </html> |