| <!-- HTML header for doxygen 1.8.4--> |
| <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> |
| <meta http-equiv="X-UA-Compatible" content="IE=9"/> |
| <meta name="generator" content="Doxygen 1.8.14"/> |
| <meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/> |
| <title>MADlib: Multinomial Regression</title> |
| <link href="tabs.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="jquery.js"></script> |
| <script type="text/javascript" src="dynsections.js"></script> |
| <link href="navtree.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="resize.js"></script> |
| <script type="text/javascript" src="navtreedata.js"></script> |
| <script type="text/javascript" src="navtree.js"></script> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(document).ready(initResizable); |
| /* @license-end */</script> |
| <link href="search/search.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="search/searchdata.js"></script> |
| <script type="text/javascript" src="search/search.js"></script> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(document).ready(function() { init_search(); }); |
| /* @license-end */ |
| </script> |
| <script type="text/x-mathjax-config"> |
| MathJax.Hub.Config({ |
| extensions: ["tex2jax.js", "TeX/AMSmath.js", "TeX/AMSsymbols.js"], |
| jax: ["input/TeX","output/HTML-CSS"], |
| }); |
| </script><script type="text/javascript" async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js"></script> |
| <!-- hack in the navigation tree --> |
| <script type="text/javascript" src="eigen_navtree_hacks.js"></script> |
| <link href="doxygen.css" rel="stylesheet" type="text/css" /> |
| <link href="madlib_extra.css" rel="stylesheet" type="text/css"/> |
| <!-- google analytics --> |
| <script> |
| (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ |
| (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), |
| m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) |
| })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); |
| ga('create', 'UA-45382226-1', 'madlib.apache.org'); |
| ga('send', 'pageview'); |
| </script> |
| </head> |
| <body> |
| <div id="top"><!-- do not remove this div, it is closed by doxygen! --> |
| <div id="titlearea"> |
| <table cellspacing="0" cellpadding="0"> |
| <tbody> |
| <tr style="height: 56px;"> |
| <td id="projectlogo"><a href="http://madlib.apache.org"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td> |
| <td style="padding-left: 0.5em;"> |
| <div id="projectname"> |
| <span id="projectnumber">1.15.1</span> |
| </div> |
| <div id="projectbrief">User Documentation for Apache MADlib</div> |
| </td> |
| <td> <div id="MSearchBox" class="MSearchBoxInactive"> |
| <span class="left"> |
| <img id="MSearchSelect" src="search/mag_sel.png" |
| onmouseover="return searchBox.OnSearchSelectShow()" |
| onmouseout="return searchBox.OnSearchSelectHide()" |
| alt=""/> |
| <input type="text" id="MSearchField" value="Search" accesskey="S" |
| onfocus="searchBox.OnSearchFieldFocus(true)" |
| onblur="searchBox.OnSearchFieldFocus(false)" |
| onkeyup="searchBox.OnSearchFieldChange(event)"/> |
| </span><span class="right"> |
| <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a> |
| </span> |
| </div> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <!-- end header part --> |
| <!-- Generated by Doxygen 1.8.14 --> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| var searchBox = new SearchBox("searchBox", "search",false,'Search'); |
| /* @license-end */ |
| </script> |
| </div><!-- top --> |
| <div id="side-nav" class="ui-resizable side-nav-resizable"> |
| <div id="nav-tree"> |
| <div id="nav-tree-contents"> |
| <div id="nav-sync" class="sync"></div> |
| </div> |
| </div> |
| <div id="splitbar" style="-moz-user-select:none;" |
| class="ui-resizable-handle"> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(document).ready(function(){initNavTree('group__grp__multinom.html','');}); |
| /* @license-end */ |
| </script> |
| <div id="doc-content"> |
| <!-- window showing the filter options --> |
| <div id="MSearchSelectWindow" |
| onmouseover="return searchBox.OnSearchSelectShow()" |
| onmouseout="return searchBox.OnSearchSelectHide()" |
| onkeydown="return searchBox.OnSearchSelectKey(event)"> |
| </div> |
| |
| <!-- iframe showing the search results (closed by default) --> |
| <div id="MSearchResultsWindow"> |
| <iframe src="javascript:void(0)" frameborder="0" |
| name="MSearchResults" id="MSearchResults"> |
| </iframe> |
| </div> |
| |
| <div class="header"> |
| <div class="headertitle"> |
| <div class="title">Multinomial Regression<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> » <a class="el" href="group__grp__regml.html">Regression Models</a></div></div> </div> |
| </div><!--header--> |
| <div class="contents"> |
| <div class="toc"><b>Contents</b> <ul> |
| <li class="level1"> |
| <a href="#train">Training Function</a> </li> |
| <li class="level1"> |
| <a href="#predict">Prediction Function</a> </li> |
| <li class="level1"> |
| <a href="#examples">Examples</a> </li> |
| <li class="level1"> |
| <a href="#background">Technical Background</a> </li> |
| <li class="level1"> |
| <a href="#literature">Literature</a> </li> |
| <li class="level1"> |
| <a href="#related">Related Topics</a> </li> |
| </ul> |
| </div><p>In statistics, multinomial regression is a classification method that generalizes binomial regression to multiclass problems, i.e. with more than two possible discrete outcomes. That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).</p> |
| <p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>The multinomial regression training function has the following syntax: <pre class="syntax"> |
| multinom(source_table, |
| model_table, |
| dependent_varname, |
| independent_varname, |
| ref_category, |
| link_func, |
| grouping_col, |
| optim_params, |
| verbose |
| ) |
| </pre></dd></dl> |
| <p><b>Arguments</b> </p><dl class="arglist"> |
| <dt>source_table </dt> |
| <dd><p class="startdd">VARCHAR. Name of the table containing the training data.</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>model_table </dt> |
| <dd><p class="startdd">VARCHAR. Name of the generated table containing the model.</p> |
| <p>The model table produced by multinom() contains the following columns:</p> |
| <table class="output"> |
| <tr> |
| <th><...> </th><td><p class="starttd">Grouping columns, if provided in input. This could be multiple columns depending on the <code>grouping_col</code> input. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>category </th><td><p class="starttd">VARCHAR. String representation of category value. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>coef </th><td><p class="starttd">FLOAT8[]. Vector of the coefficients in linear predictor. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>log_likelihood </th><td><p class="starttd">FLOAT8. The log-likelihood \( l(\boldsymbol \beta) \). The value will be the same across categories within the same group. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>std_err </th><td><p class="starttd">FLOAT8[]. Vector of the standard errors of the coefficients. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>z_stats </th><td><p class="starttd">FLOAT8[]. Vector of the z-statistics of the coefficients. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>p_values </th><td><p class="starttd">FLOAT8[]. Vector of the p-values of the coefficients. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>num_rows_processed </th><td><p class="starttd">BIGINT. Number of rows processed. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>num_rows_skipped </th><td><p class="starttd">BIGINT. Number of rows skipped due to missing values or failures. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>num_iterations </th><td>INTEGER. Number of iterations actually completed. This may be less than the <code>max_iter</code> value specified in <code>optim_params</code> if the <code>tolerance</code> criterion is met and the algorithm converges before all iterations are completed. </td></tr>
| </table> |
| <p>A summary table named <model_table>_summary is also created at the same time, which has the following columns: </p><table class="output"> |
| <tr> |
<th>method </th><td><p class="starttd">VARCHAR. String describing the model: 'multinom'. </p>
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>source_table </th><td><p class="starttd">VARCHAR. Data source table name. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>model_table </th><td><p class="starttd">VARCHAR. Model table name. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>dependent_varname </th><td><p class="starttd">VARCHAR. Expression for dependent variable. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>independent_varname </th><td><p class="starttd">VARCHAR. Expression for independent variables. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>ref_category </th><td><p class="starttd">VARCHAR. String representation of reference category. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>link_func </th><td><p class="starttd">VARCHAR. The link function used; currently only 'logit' is implemented. </p>
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>grouping_col </th><td><p class="starttd">VARCHAR. String representation of grouping columns. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>optimizer_params </th><td><p class="starttd">VARCHAR. String that contains optimizer parameters, and has the form of 'optimizer=..., max_iter=..., tolerance=...'. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>num_all_groups </th><td><p class="starttd">INTEGER. Number of groups in the training; one model is fit per group when <code>grouping_col</code> is specified. </p>
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>num_failed_groups </th><td><p class="starttd">INTEGER. Number of groups for which model fitting failed. </p>
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>total_rows_processed </th><td><p class="starttd">BIGINT. Total number of rows processed in all groups. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
| <th>total_rows_skipped </th><td><p class="starttd">BIGINT. Total number of rows skipped in all groups due to missing values or failures. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| </table> |
| <p class="enddd"></p> |
| </dd> |
| <dt>dependent_varname </dt> |
| <dd><p class="startdd">VARCHAR. Name of the dependent variable column.</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>independent_varname </dt> |
| <dd><p class="startdd">VARCHAR. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant <code>1</code> term in the independent variable list.</p> |
| <p class="enddd"></p> |
| </dd> |
<dt>ref_category (optional) </dt>
<dd><p class="startdd">VARCHAR, default: '0'. The reference category of the dependent variable. </p>
<p class="enddd"></p>
</dd>
<dt>link_func (optional) </dt>
<dd><p class="startdd">VARCHAR, default: 'logit'. The link function to use. Currently, only 'logit' is supported. </p>
<p class="enddd"></p>
</dd>
| <dt>grouping_col (optional) </dt> |
| <dd><p class="startdd">VARCHAR, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single model is generated.</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>optim_params (optional) </dt> |
| <dd><p class="startdd">VARCHAR, default: 'max_iter=100,optimizer=irls,tolerance=1e-6'. Parameters for optimizer. Currently, we support tolerance=[tolerance for relative error between log-likelihoods], max_iter=[maximum iterations to run], optimizer=irls.</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>verbose (optional) </dt> |
| <dd>BOOLEAN, default: FALSE. Provides verbose output of the results of training. </dd> |
| </dl> |
| <dl class="section note"><dt>Note</dt><dd>For p-values, we just return the computation result directly. Other statistical packages, like 'R', produce the same result, but on printing the result to screen, another format function is used and any p-value that is smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') will be printed on screen as "< xxx" (xxx is the value of the machine epsilon). Although the result may look different, they are in fact the same. </dd></dl> |
| <p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>Multinomial regression prediction function has the following format: <pre class="syntax"> |
| multinom_predict(model_table, |
| predict_table_input, |
| output_table, |
| predict_type, |
| verbose, |
| id_column |
| ) |
| </pre> <b>Arguments</b> <dl class="arglist"> |
| <dt>model_table </dt> |
| <dd><p class="startdd">TEXT. Name of the generated table containing the model, which is the output table from multinom().</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>predict_table_input </dt> |
| <dd><p class="startdd">TEXT. The name of the table containing the data to predict on. The table must contain id column as the primary key.</p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>output_table </dt> |
| <dd><p class="startdd">TEXT. Name of the generated table containing the predicted values.</p> |
<p>The output table produced by multinom_predict contains the following columns:</p>
| <table class="output"> |
| <tr> |
| <th>id </th><td><p class="starttd">SERIAL. Column to identify the predicted value. </p> |
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>category </th><td><p class="starttd">TEXT. Available if predict_type = 'response'. Contains the predicted category for each row. </p>
| <p class="endtd"></p> |
| </td></tr> |
| <tr> |
<th>category_value </th><td>FLOAT8. Available if predict_type = 'probability'. The predicted probability for the corresponding category value. </td></tr>
| </table> |
| <p class="enddd"></p> |
| </dd> |
| <dt>predict_type </dt> |
<dd>TEXT. Either 'response' or 'probability'. Using 'response' gives the predicted category with the largest probability. Using 'probability' gives the predicted probabilities for all categories. </dd>
| <dt>verbose </dt> |
| <dd><p class="startdd">BOOLEAN. Control whether verbose is displayed. The default is FALSE. </p> |
| <p class="enddd"></p> |
| </dd> |
| <dt>id_column </dt> |
<dd>TEXT. The name of the id column in the input table, used to identify each row in the output. </dd>
| </dl> |
| </dd></dl> |
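<p>For illustration, a prediction call that specifies all arguments, including the id column, might look like the following. This is a hedged sketch with placeholder table names; only the argument order follows the syntax above. </p><pre class="example">
SELECT madlib.multinom_predict(
    'patients_model',   -- model table produced by multinom() (placeholder name)
    'new_patients',     -- table to predict on; must contain the id column named below
    'patients_pred',    -- output table to create
    'response',         -- return the most likely category per row
    FALSE,              -- verbose
    'id'                -- name of the id column in new_patients
);
</pre>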
| <p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd></dd></dl> |
| <ol type="1"> |
| <li>Create the training data table. <pre class="example"> |
| DROP TABLE IF EXISTS test3; |
| CREATE TABLE test3 ( |
| feat1 INTEGER, |
| feat2 INTEGER, |
| cat INTEGER |
| ); |
| INSERT INTO test3(feat1, feat2, cat) VALUES |
| (1,35,1), |
| (2,33,0), |
| (3,39,1), |
| (1,37,1), |
| (2,31,1), |
| (3,36,0), |
| (2,36,1), |
| (2,31,1), |
| (2,41,1), |
| (2,37,1), |
| (1,44,1), |
| (3,33,2), |
| (1,31,1), |
| (2,44,1), |
| (1,35,1), |
| (1,44,0), |
| (1,46,0), |
| (2,46,1), |
| (2,46,2), |
| (3,49,1), |
| (2,39,0), |
| (2,44,1), |
| (1,47,1), |
| (1,44,1), |
| (1,37,2), |
| (3,38,2), |
| (1,49,0), |
| (2,44,0), |
| (3,61,2), |
| (1,65,2), |
| (3,67,1), |
| (3,65,2), |
| (1,65,2), |
| (2,67,2), |
| (1,65,2), |
| (1,62,2), |
| (3,52,2), |
| (3,63,2), |
| (2,59,2), |
| (3,65,2), |
| (2,59,0), |
| (3,67,2), |
| (3,67,2), |
| (3,60,2), |
| (3,67,2), |
| (3,62,2), |
| (2,54,2), |
| (3,65,2), |
| (3,62,2), |
| (2,59,2), |
| (3,60,2), |
| (3,63,2), |
| (3,65,2), |
| (2,63,1), |
| (2,67,2), |
| (2,65,2), |
| (2,62,2); |
| </pre></li> |
<li>Run the multinomial regression training function. <pre class="example">
| DROP TABLE IF EXISTS test3_output; |
| DROP TABLE IF EXISTS test3_output_summary; |
| SELECT madlib.multinom('test3', |
| 'test3_output', |
| 'cat', |
| 'ARRAY[1, feat1, feat2]', |
| '0', |
| 'logit' |
| ); |
| </pre></li> |
| <li>View the regression results. <pre class="example"> |
| -- Set extended display on for easier reading of output |
| \x on |
| SELECT * FROM test3_output; |
| </pre></li> |
| </ol> |
| <p>Result: </p><pre class="result"> |
| -[ RECORD 1 ]------+------------------------------------------------------------ |
| category | 1 |
| coef | {1.45474045165731,0.084995618282504,-0.0172383499512136} |
| log_likelihood | -39.1475993094045 |
| std_err | {2.13085878785549,0.585023211942952,0.0431489262260687} |
| z_stats | {0.682701481650677,0.145285890452484,-0.399508202380224} |
| p_values | {0.494795493298706,0.884485154314181,0.689518781152604} |
| num_rows_processed | 57 |
| num_rows_skipped | 0 |
| iteration | 6 |
| -[ RECORD 2 ]------+------------------------------------------------------------ |
| category | 2 |
| coef | {-7.1290816775109,0.876487877074751,0.127886153038661} |
| log_likelihood | -39.1475993094045 |
| std_err | {2.52105418324135,0.639578886139654,0.0445760103748678} |
| z_stats | {-2.82781771407425,1.37041402721253,2.86894569440347} |
| p_values | {0.00468664844488755,0.170557695812408,0.00411842502754068} |
| num_rows_processed | 57 |
| num_rows_skipped | 0 |
| iteration | 6 |
| </pre><ol type="1"> |
| <li>Predicting dependent variable using multinomial model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.)</li> |
| </ol> |
| <pre class="example"> |
| \x off |
| -- Add the id column for prediction function |
| ALTER TABLE test3 ADD COLUMN id SERIAL; |
| -- Predict probabilities for all categories using the original data |
SELECT madlib.multinom_predict('test3_output', 'test3', 'test3_prd_prob', 'probability');
| -- Display the predicted value |
| SELECT * FROM test3_prd_prob; |
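-- (Illustrative addition, not part of the original example.) A 'response'-type
-- prediction returns the single most likely category per row; it can be joined
-- back to the training table to count correct classifications. The output table
-- name 'test3_prd_resp' is an assumed placeholder; the id and category column
-- names follow the output table description above.
SELECT madlib.multinom_predict('test3_output', 'test3', 'test3_prd_resp', 'response');
SELECT count(*) AS correctly_classified
FROM test3 t
JOIN test3_prd_resp p ON t.id = p.id
WHERE t.cat::TEXT = p.category;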
| </pre><p><a class="anchor" id="background"></a></p><dl class="section user"><dt>Technical Background</dt><dd>When link = 'logit', multinomial logistic regression models the outcomes of categorical dependent random variables (denoted \( Y \in \{ 0,1,2 \ldots k \} \)). The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of independent variables (usually denoted \( \boldsymbol x \)). That is, <p class="formulaDsp"> |
| \[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \] |
| </p> |
| for some unknown vector of coefficients \( \boldsymbol c \) and where \( \sigma(x) = \frac{1}{1 + \exp(-x)} \) is the logistic function. Multinomial logistic regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.</dd></dl> |
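<p>For orientation, a standard formulation of the multinomial logit model is included here; it is not the notation used in the derivation below, which is written for the binomial case. With reference category \( 0 \), \( J \) categories, and a separate coefficient vector \( \boldsymbol c_j \) for each non-reference category \( j \), </p><p class="formulaDsp">
\[ \Pr(Y = j \mid \boldsymbol x) = \frac{\exp(\boldsymbol c_j^T \boldsymbol x)}{1 + \sum_{m=1}^{J-1} \exp(\boldsymbol c_m^T \boldsymbol x)}, \quad j = 1, \dots, J-1, \qquad \Pr(Y = 0 \mid \boldsymbol x) = \frac{1}{1 + \sum_{m=1}^{J-1} \exp(\boldsymbol c_m^T \boldsymbol x)} \,. \]
</p>
<p>This corresponds to the model table described above, which contains one <code>coef</code> row per non-reference category.</p>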
| <p>Let</p><ul> |
<li>\( \boldsymbol y \in \{ 0,1 \}^{n \times k} \) denote the matrix of observed dependent variables, with \( n \) rows and \( k \) columns,</li>
| <li>\( X \in \mathbf R^{n \times k} \) denote the design matrix with \( k \) columns and \( n \) rows, containing all observed vectors of independent variables \( \boldsymbol x_i \) as rows.</li> |
| </ul> |
| <p>By definition, </p><p class="formulaDsp"> |
\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{1 - y_i} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]
| </p> |
| <p> Maximizing the likelihood \( \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) \) is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) \), which simplifies to </p><p class="formulaDsp"> |
| \[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \] |
| </p> |
<p> The Hessian of this objective is \( H = -X^T A X \) where \( A = \text{diag}(a_1, \dots, a_n) \) is the diagonal matrix with \( a_i = \sigma(\boldsymbol c^T \boldsymbol x_i) \cdot \sigma(-\boldsymbol c^T \boldsymbol x_i) \,. \) Since \( H \) is non-positive definite, \( l(\boldsymbol c) \) is concave, so maximizing it is a convex optimization problem. There are many techniques for solving such problems. Currently, multinomial regression in MADlib can use:</p><ul>
| <li>Iteratively Reweighted Least Squares</li> |
| </ul> |
| <p>We estimate the standard error for coefficient \( i \) as </p><p class="formulaDsp"> |
\[ \mathit{se}(c_i) = \sqrt{\left( (X^T A X)^{-1} \right)_{ii}} \,. \]
| </p> |
| <p> The Wald z-statistic is </p><p class="formulaDsp"> |
| \[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \] |
| </p> |
<p>The Wald \( p \)-value for coefficient \( i \) gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( \( c_i = 0 \)) is true. Letting \( F \) denote the cumulative distribution function of a standard normal distribution, the Wald \( p \)-value for coefficient \( i \) is therefore </p><p class="formulaDsp">
| \[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \] |
| </p> |
| <p> where \( Z \) is a standard normally distributed random variable.</p> |
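<p>As a worked check against the example output above: the third z-statistic reported for category 2 is \( z \approx 2.869 \), so </p><p class="formulaDsp">
\[ p = 2 \cdot (1 - F(2.869)) \approx 2 \cdot 0.00206 \approx 0.0041, \]
</p>
<p> which agrees with the reported p-value of 0.00412 for that coefficient.</p>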
| <p>The odds ratio for coefficient \( i \) is estimated as \( \exp(c_i) \).</p> |
| <p>The condition number is computed as \( \kappa(X^T A X) \) during the iteration immediately <em>preceding</em> convergence (i.e., \( A \) is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.</p> |
<p>The multinomial logistic regression uses a default reference category of zero, and the regression coefficients in the output are in the order described below. For a problem with \( K \) independent variables \( (1, ..., K) \) and \( J \) categories \( (0, ..., J-1) \), let \( {m_{k,j}} \) denote the coefficient for independent variable \( k \) and category \( j \). The output is \( {m_{k_1, j_0}, m_{k_1, j_1} \ldots m_{k_1, j_{J-1}}, m_{k_2, j_0}, m_{k_2, j_1}, \ldots m_{k_2, j_{J-1}} \ldots m_{k_K, j_{J-1}}} \). This order is NOT CONSISTENT with the ordering used by the multinomial regression marginal effects function <em>marginal_mlogregr</em>. This is deliberate, because the interfaces of all multinomial regression variants (robust, clustered, ...) will be moved to match the interface used in the marginal effects functions.</p>
| <p><a class="anchor" id="literature"></a></p><dl class="section user"><dt>Literature</dt><dd></dd></dl> |
| <p>A collection of nice write-ups, with valuable pointers into further literature:</p> |
| <p>[1] Annette J. Dobson: An Introduction to Generalized Linear Models, Second Edition. Nov 2001</p> |
| <p>[2] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, <a href="http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf">http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf</a></p> |
| <p>[3] Scott A. Czepiel: Maximum Likelihood Estimation of Logistic Regression Models: Theory and Implementation, Retrieved Jul 12 2012, <a href="http://czep.net/stat/mlelr.pdf">http://czep.net/stat/mlelr.pdf</a></p> |
| <p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl> |
| <p>File <a class="el" href="multiresponseglm_8sql__in.html" title="SQL functions for multinomial regression. ">multiresponseglm.sql_in</a> documenting the multinomial regression functions</p> |
| <p><a class="el" href="group__grp__logreg.html">Logistic Regression</a></p> |
| <p><a class="el" href="group__grp__ordinal.html">Ordinal Regression</a></p> |
| </div><!-- contents --> |
| </div><!-- doc-content --> |
| <!-- start footer part --> |
| <div id="nav-path" class="navpath"><!-- id is needed for treeview function! --> |
| <ul> |
| <li class="footer">Generated on Mon Oct 15 2018 11:24:30 for MADlib by |
| <a href="http://www.doxygen.org/index.html"> |
| <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.14 </li> |
| </ul> |
| </div> |
| </body> |
| </html> |