<!-- HTML header for doxygen 1.8.4-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.10"/>
<meta name="keywords" content="madlib,postgres,greenplum,machine learning,data mining,deep learning,ensemble methods,data science,market basket analysis,affinity analysis,pca,lda,regression,elastic net,huber white,proportional hazards,k-means,latent dirichlet allocation,bayes,support vector machines,svm"/>
<title>MADlib: Random Forest</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
$(document).ready(function() { init_search(); });
</script>
<!-- hack in the navigation tree -->
<script type="text/javascript" src="eigen_navtree_hacks.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="madlib_extra.css" rel="stylesheet" type="text/css"/>
<!-- google analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-45382226-1', 'madlib.net');
ga('send', 'pageview');
</script>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectlogo"><a href="http://madlib.net"><img alt="Logo" src="madlib.png" height="50" style="padding-left:0.5em;" border="0"/ ></a></td>
<td style="padding-left: 0.5em;">
<div id="projectname">
<span id="projectnumber">1.9.1</span>
</div>
<div id="projectbrief">User Documentation for MADlib</div>
</td>
<td> <div id="MSearchBox" class="MSearchBoxInactive">
<span class="left">
<img id="MSearchSelect" src="search/mag_sel.png"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
alt=""/>
<input type="text" id="MSearchField" value="Search" accesskey="S"
onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)"
onkeyup="searchBox.OnSearchFieldChange(event)"/>
</span><span class="right">
<a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.10 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('group__grp__random__forest.html','');});
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>
<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0"
name="MSearchResults" id="MSearchResults">
</iframe>
</div>
<div class="header">
<div class="headertitle">
<div class="title">Random Forest<div class="ingroups"><a class="el" href="group__grp__super.html">Supervised Learning</a> &raquo; <a class="el" href="group__grp__tree.html">Tree Methods</a></div></div> </div>
</div><!--header-->
<div class="contents">
<div class="toc"><b>Contents</b></p><ul>
<li class="level1">
<a href="#train">Training Function</a> </li>
<li class="level1">
<a href="#predict">Prediction Function</a> </li>
<li class="level1">
<a href="#get_tree">Display Function</a> </li>
<li class="level1">
<a href="#examples">Examples</a> </li>
<li class="level1">
<a href="#related">Related Topics</a> </li>
</ul>
</div><p>Random forests build an ensemble of classifiers, each of which is a tree model constructed using bootstrapped samples from the input data. The results of these models are then combined to yield a single prediction, which, although at the expense of some loss in interpretability, has been found to be highly accurate. Such methods of combining multiple decision trees to make predictions are called random forest methods.</p>
<p><a class="anchor" id="train"></a></p><dl class="section user"><dt>Training Function</dt><dd>Random Forest training function has the following format: <pre class="syntax">
forest_train(training_table_name,
output_table_name,
id_col_name,
dependent_variable,
list_of_features,
list_of_features_to_exclude,
grouping_cols,
num_trees,
num_random_features,
importance,
num_permutations,
max_tree_depth,
min_split,
min_bucket,
num_splits,
surrogate_params,
verbose,
sample_ratio
)
</pre></dd></dl>
<p><b>Arguments</b> </p><dl class="arglist">
<dt>training_table_name </dt>
<dd><p class="startdd">text. the name of the table containing the training data.</p>
<p class="enddd"></p>
</dd>
<dt>output_table_name </dt>
<dd><p class="startdd">text. the name of the generated table containing the model.</p>
<p>The model table produced by the train function contains the following columns:</p>
<table class="output">
<tr>
<th>gid </th><td>integer. Group id that uniquely identifies a set of grouping column values. </td></tr>
<tr>
<th>sample_id </th><td>integer. Id of the bootstrap sample that this tree is a part of. </td></tr>
<tr>
<th>tree </th><td>bytea8. Trained tree model stored in binary format. </td></tr>
</table>
<p>A summary table named <em>&lt;model_table&gt;_summary</em> is also created at the same time, which has the following columns: </p><table class="output">
<tr>
<th>method </th><td><p class="starttd">'forest_train' </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>is_classification </th><td><p class="starttd">boolean. True if it is a classification model. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>source_table </th><td><p class="starttd">text. The data source table name. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>model_table </th><td><p class="starttd">text. The model table name. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>id_col_name </th><td><p class="starttd">text. The ID column name. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>dependent_varname </th><td><p class="starttd">text. The dependent variable. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>independent_varname </th><td><p class="starttd">text. The independent variables. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>cat_features </th><td><p class="starttd">text. Categorical feature names. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>con_features </th><td><p class="starttd">text. Continuous feature names. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>grouping_col </th><td><p class="starttd">text. Names of grouping columns. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_trees </th><td><p class="starttd">int. Number of trees grown by the model. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_random_features </th><td><p class="starttd">int. Number of features randomly selected for each split. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>max_tree_depth </th><td><p class="starttd">int. Maximum depth of any tree in the random forest model. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>min_split </th><td><p class="starttd">int. Minimum number of observations in a node for it to be split. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>min_bucket </th><td><p class="starttd">int. Minimum number of observations in any terminal node. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_splits </th><td><p class="starttd">int. Number of buckets for continuous variables. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>verbose </th><td><p class="starttd">boolean. Whether or not to display debug info. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>importance </th><td><p class="starttd">boolean. Whether or not to calculate variable importance. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_permutations </th><td><p class="starttd">int. Number of times feature values are permuted while calculating variable importance. The default value is 1. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_all_groups </th><td><p class="starttd">int. Number of groups during forest training. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>num_failed_groups </th><td><p class="starttd">int. Number of failed groups during forest training. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>total_rows_processed </th><td><p class="starttd">bigint. Total number of rows processed in all groups. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>total_rows_skipped </th><td><p class="starttd">bigint. Total number of rows skipped in all groups due to missing values or failures. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>dependent_var_levels </th><td><p class="starttd">text. For classification, the distinct levels of the dependent variable. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>dependent_var_type </th><td>text. The type of dependent variable. </td></tr>
</table>
<p>A group table named <em> &lt;model_table&gt;_group</em> is created, which has the following columns: </p><table class="output">
<tr>
<th>gid </th><td><p class="starttd">integer. Group id that uniquely identifies a set of grouping column values. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>&lt;...&gt; </th><td><p class="starttd">Same type as in the training data table. Grouping columns, if provided in input. This could be multiple columns depending on the <code>grouping_cols</code> input. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>success </th><td><p class="starttd">boolean. Indicator of the success of the group. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>cat_levels_in_text </th><td><p class="starttd">text[]. Ordered levels of categorical variables. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>cat_n_levels </th><td><p class="starttd">integer[]. Number of levels for each categorical variable. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>oob_error </th><td><p class="starttd">double precision. Out-of-bag error for the random forest model. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>cat_var_importance </th><td><p class="starttd">double precision[]. Variable importance for categorical features. The order corresponds to the order of the variables as found in cat_features in <em> &lt;model_table&gt;_summary</em>. </p>
<p class="endtd"></p>
</td></tr>
<tr>
<th>con_var_importance </th><td><p class="starttd">double precision[]. Variable importance for continuous features. The order corresponds to the order of the variables as found in con_features in <em> &lt;model_table&gt;_summary</em>. </p>
<p class="endtd"></p>
</td></tr>
</table>
<p class="enddd"></p>
</dd>
<dt>id_col_name </dt>
<dd><p class="startdd">text. Name of the column containing id information in the training data.</p>
<p class="enddd"></p>
</dd>
<dt>dependent_variable </dt>
<dd><p class="startdd">text. Name of the column that contains the output for training. Boolean, integer and text are considered classification outputs, while float values are considered regression outputs.</p>
<p class="enddd"></p>
</dd>
<dt>list_of_features </dt>
<dd><p class="startdd">text. Comma-separated string of column names to use as predictors. Can also be a '*' implying all columns are to be used as predictors (except the ones included in the next argument). Boolean, integer and text columns are considered categorical columns.</p>
<p class="enddd"></p>
</dd>
<dt>list_of_features_to_exclude </dt>
<dd><p class="startdd">text. Comma-separated string of column names to exclude from the predictors list. If the <em>dependent_variable</em> argument is an expression (including cast of a column name), then this list should include the columns that are included in the <em>dependent_variable</em> expression, otherwise those columns will be included in the features (resulting in meaningless trees).</p>
<p class="enddd"></p>
</dd>
<dt>grouping_cols (optional) </dt>
<dd><p class="startdd">text, default: NULL. Comma-separated list of column names to group the data by. This will lead to creating multiple random forests, one for each group.</p>
<p class="enddd"></p>
</dd>
<dt>num_trees (optional) </dt>
<dd><p class="startdd">integer, default: 100. Maximum number of trees to grow in the Random Forest model. Actual number of trees grown may be slighlty different.</p>
<p class="enddd"></p>
</dd>
<dt>num_random_features (optional) </dt>
<dd><p class="startdd">integer, default: sqrt(n) if classification tree, otherwise n/3. Number of features to randomly select at each split.</p>
<p class="enddd"></p>
</dd>
<dt>importance (optional) </dt>
<dd><p class="startdd">boolean, default: true. Whether or not to calculate variable importance.</p>
<p class="enddd"></p>
</dd>
<dt>num_permutations (optional) </dt>
<dd><p class="startdd">integer, default: 1. Number of times to permute each feature value while calculating variable importance.</p>
<p>Variable importance for a feature is computed by permuting the feature with random values and measuring the drop in predictive accuracy (using OOB samples). Setting this greater than 1 averages over multiple importance calculations, which increases the total run time; in most cases the default value of 1 is sufficient. </p>
<p class="enddd"></p>
</dd>
<dt>max_tree_depth (optional) </dt>
<dd><p class="startdd">integer, default: 10. Maximum depth of any node of a tree, with the root node counted as depth 0.</p>
<p class="enddd"></p>
</dd>
<dt>min_split (optional) </dt>
<dd><p class="startdd">integer, default: 20. Minimum number of observations that must exist in a node for a split to be attempted.</p>
<p class="enddd"></p>
</dd>
<dt>min_bucket (optional) </dt>
<dd><p class="startdd">integer, default: min_split/3. Minimum number of observations in any terminal node. If only one of min_bucket or min_split is specified, min_split is set to min_bucket*3 or min_bucket to min_split/3, as appropriate.</p>
<p class="enddd"></p>
</dd>
<dt>num_splits (optional) </dt>
<dd><p class="startdd">integer, default: 100. Continuous-valued features are binned into discrete quantiles to compute split boundaries. This global parameter is used to compute the resolution of splits for continuous features. Higher number of bins will lead to better prediction, but will also result in higher processing time.</p>
<p class="enddd"></p>
</dd>
<dt>surrogate_params (optional) </dt>
<dd><p class="startdd">text, Comma-separated string of key-value pairs controlling the behavior of surrogate splits for each node in a tree. </p><table class="output">
<tr>
<th>max_surrogates </th><td>Default: 0. Number of surrogates to store for each node. </td></tr>
</table>
<p class="enddd"></p>
</dd>
<dt>verbose (optional) </dt>
<dd><p class="startdd">boolean, default: FALSE. Provides verbose output of the results of training.</p>
<p class="enddd"></p>
</dd>
<dt>sample_ratio (optional) </dt>
<dd><p class="startdd">double precision, in the range of (0, 1], default: 1. If sample_ratio is less than 1, a bootstrap sample size smaller than the data table is expected to be used for training each tree in the forest. A ratio that is close to 0 may result in trees with only the root node. This allows users to experiment with the function in a speedy fashion.</p>
<dl class="section note"><dt>Note</dt><dd>The main parameters that affect memory usage are: depth of tree, number of features, and number of values per feature. If you are hitting VMEM limits, consider reducing one or more of these parameters.</dd></dl>
</dd>
</dl>
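<p>As a minimal sketch, assuming (as with other MADlib functions) that trailing optional arguments can be omitted to accept their defaults, a training call can be as short as: </p><pre class="example">
SELECT madlib.forest_train('dt_golf',      -- source table (defined in the Examples section below)
'train_output',                            -- output model table
'id',                                      -- id column
'class',                                   -- response
'"OUTLOOK", temperature, humidity, windy', -- features
NULL,                                      -- exclude columns
NULL,                                      -- grouping columns
20                                         -- number of trees; remaining arguments take defaults
);
</pre>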
<p><a class="anchor" id="predict"></a></p><dl class="section user"><dt>Prediction Function</dt><dd>The prediction function is provided to estimate the conditional mean given a new predictor. It has the following syntax: <pre class="syntax">
forest_predict(forest_model,
new_data_table,
output_table,
type)
</pre></dd></dl>
<p><b>Arguments</b> </p><dl class="arglist">
<dt>forest_model </dt>
<dd><p class="startdd">text. Name of the table containing the Random Forest model.</p>
<p class="enddd"></p>
</dd>
<dt>new_data_table </dt>
<dd><p class="startdd">text. Name of the table containing prediction data.</p>
<p class="enddd"></p>
</dd>
<dt>output_table </dt>
<dd><p class="startdd">text. Name of the table to output prediction results to.</p>
<p class="enddd"></p>
</dd>
<dt>type </dt>
<dd>text, optional, default: 'response'. For regression models, the output is always the predicted value of the dependent variable. For classification models, the <em>type</em> variable can be 'response', giving the classification prediction as output, or 'prob', giving the class probabilities as output. For each value of the dependent variable, a column with the probabilities is added to the output table. </dd>
</dl>
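<p>As an illustrative sketch (the table names follow the classification example below), predicting with <em>type</em> = 'response' writes a column named <em>estimated_&lt;dependent variable&gt;</em> to the output table: </p><pre class="example">
SELECT madlib.forest_predict('train_output',  -- model table produced by forest_train
'dt_golf',                                    -- table of new data
'prediction_results',                         -- output table to be created
'response');
-- For this model the dependent variable is 'class', so the
-- prediction column is named "estimated_class":
SELECT id, estimated_class FROM prediction_results ORDER BY id;
</pre>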
<p><a class="anchor" id="get_tree"></a></p><dl class="section user"><dt>Display Function</dt><dd>The get_tree function is provided to output a graph representation of a single tree of the Random Forest. The output can either be in the popular 'dot' format that can be visualized using various programs including those in the GraphViz package, or in a simple text format. The details of the text format is outputted with the tree. <pre class="syntax">
get_tree(forest_model_table,
gid,
sample_id,
dot_format)
</pre></dd></dl>
<p>An additional display function is provided to output the surrogate splits chosen for each internal node. </p><pre class="syntax">
get_tree_surr(forest_model_table,
gid,
sample_id)
</pre><p>The output contains the list of surrogate splits for each internal node of a tree. The nodes are sorted in ascending order by id, which is equivalent to viewing the tree in a breadth-first manner. For each surrogate, the output gives the surrogate split (variable and threshold) and the number of rows that were common between the primary split and the surrogate split. Finally, the number of rows present in the majority branch of the primary split is also given. Only surrogates that perform better than this majority branch are used. When the primary variable has a NULL value, the surrogate variables are tried in order to compute the split for that node. If all surrogate variables are NULL, the majority branch is used to compute the split for a tuple.</p>
<p><b>Arguments</b> </p><dl class="arglist">
<dt>forest_model_table </dt>
<dd><p class="startdd">text. Name of the table containing the Random Forest model.</p>
<p class="enddd"></p>
</dd>
<dt>gid </dt>
<dd><p class="startdd">integer. Id of the group that this tree is a part of.</p>
<p class="enddd"></p>
</dd>
<dt>sample_id </dt>
<dd><p class="startdd">integer. Id of the bootstrap sample that this tree if a part of.</p>
<p class="enddd"></p>
</dd>
<dt>dot_format </dt>
<dd>boolean, default: TRUE. If TRUE, the output is in dot format; otherwise it is in text format. </dd>
</dl>
<p>The output is always returned as text. For the dot format, the output can be redirected to a file on the client side and then rendered using visualization programs.</p>
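<p>For example, the following psql session writes the dot output of a single tree to a client-side file and renders it with GraphViz (a sketch assuming psql and GraphViz are installed; the file names are illustrative). <code>\t on</code> suppresses headers and footers so the file contains only the dot text, <code>\o</code> redirects query output, and <code>\!</code> runs a shell command: </p><pre class="example">
\t on
\o tree.dot
SELECT madlib.get_tree('train_output',1,2);
\o
\t off
\! dot -Tpng tree.dot -o tree.png
</pre>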
<p><a class="anchor" id="examples"></a></p><dl class="section user"><dt>Examples</dt><dd><b>Note:</b> The output results may vary due the random nature of random forests.</dd></dl>
<p><b>Random Forest Classification Example</b></p>
<ol type="1">
<li>Prepare input data. <pre class="example">
DROP TABLE IF EXISTS dt_golf;
CREATE TABLE dt_golf (
id integer NOT NULL,
"OUTLOOK" text,
temperature double precision,
humidity double precision,
windy text,
class text
) ;
</pre> <pre class="example">
INSERT INTO dt_golf (id,"OUTLOOK",temperature,humidity,windy,class) VALUES
(1, 'sunny', 85, 85, 'false', 'Don''t Play'),
(2, 'sunny', 80, 90, 'true', 'Don''t Play'),
(3, 'overcast', 83, 78, 'false', 'Play'),
(4, 'rain', 70, 96, 'false', 'Play'),
(5, 'rain', 68, 80, 'false', 'Play'),
(6, 'rain', 65, 70, 'true', 'Don''t Play'),
(7, 'overcast', 64, 65, 'true', 'Play'),
(8, 'sunny', 72, 95, 'false', 'Don''t Play'),
(9, 'sunny', 69, 70, 'false', 'Play'),
(10, 'rain', 75, 80, 'false', 'Play'),
(11, 'sunny', 75, 70, 'true', 'Play'),
(12, 'overcast', 72, 90, 'true', 'Play'),
(13, 'overcast', 81, 75, 'false', 'Play'),
(14, 'rain', 71, 80, 'true', 'Don''t Play');
</pre></li>
<li>Run Random Forest train function. <pre class="example">
DROP TABLE IF EXISTS train_output, train_output_group, train_output_summary;
SELECT madlib.forest_train('dt_golf', -- source table
'train_output', -- output model table
'id', -- id column
'class', -- response
'"OUTLOOK", temperature, humidity, windy', -- features
NULL, -- exclude columns
NULL, -- grouping columns
20::integer, -- number of trees
2::integer, -- number of random features
TRUE::boolean, -- variable importance
1::integer, -- num_permutations
8::integer, -- max depth
3::integer, -- min split
1::integer, -- min bucket
10::integer -- number of splits per continuous variable
);
\x on
SELECT * FROM train_output_summary;
SELECT * FROM train_output_group;
\x off
</pre></li>
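<li>View the variable importance values computed during training (a quick sketch; per the output table descriptions above, the importance arrays in <em>train_output_group</em> are ordered to match <em>cat_features</em> and <em>con_features</em> in <em>train_output_summary</em>). <pre class="example">
SELECT cat_features, con_features FROM train_output_summary;
SELECT cat_var_importance, con_var_importance FROM train_output_group;
</pre></li>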
<li>Obtain a dot format display of a single tree within the forest. <pre class="example">
SELECT madlib.get_tree('train_output',1,2);
</pre> Result: <pre class="result">
digraph "Classification tree for dt_golf" {
"0" [label="temperature&lt;=70", shape=ellipse];
"0" -&gt; "1"[label="yes"];
"1" [label="\"'Play'"",shape=box];
"0" -&gt; "2"[label="no"];
"2" [label=""OUTLOOK"&lt;={overcast}", shape=ellipse];
"2" -&gt; "5"[label="yes"];
"5" [label=""'Play'"",shape=box];
"2" -&gt; "6"[label="no"];
"6" [label="humidity&lt;=70", shape=ellipse];
"6" -&gt; "13"[label="yes"];
"13" [label=""'Play'"",shape=box];
"6" -&gt; "14"[label="no"];
"14" [label=""'Don''t Play'"",shape=box];
} //---end of digraph---------
</pre></li>
<li>Obtain a text display of the tree <pre class="example">
SELECT madlib.get_tree('train_output',1,2,FALSE);
</pre> Result: <pre class="result">
&#160;-------------------------------------
&#160;- Each node represented by 'id' inside ().
&#160;- Leaf nodes have a * while internal nodes have the split condition at the end.
&#160;- For each internal node (i), its children will be at (2i+1) and (2i+2).
&#160;- For each split the first indented child (2i+1) is the 'True' node and
second indented child (2i+2) is the 'False' node.
&#160;- Number of (weighted) rows for each response variable inside [].
&#160;- Order of values = ['"Don\'t Play"', '"Play"']
&#160;-------------------------------------
(0)[ 3 11] temperature&lt;=70
(1)[ 0 7] * --&gt; "'Play'"
(2)[ 3 4] "OUTLOOK"&lt;={overcast}
(5)[ 0 3] * --&gt; "'Play'"
(6)[ 3 1] humidity&lt;=70
(13)[ 0 1] * --&gt; "'Play'"
(14)[ 3 0] * --&gt; "'Don''t Play'"
&#160;-------------------------------------
</pre></li>
<li>Predict output categories for the same data as was used for input. <pre class="example">
DROP TABLE IF EXISTS prediction_results;
SELECT madlib.forest_predict('train_output',
'dt_golf',
'prediction_results',
'response');
\x off
SELECT id, estimated_class, class
FROM prediction_results JOIN dt_golf USING (id)
ORDER BY id;
</pre> Result: <pre class="result">
id | estimated_class | class
----+-----------------+------------
1 | Don't Play | Don't Play
2 | Don't Play | Don't Play
3 | Play | Play
4 | Play | Play
5 | Play | Play
6 | Don't Play | Don't Play
7 | Play | Play
8 | Don't Play | Don't Play
9 | Play | Play
10 | Play | Play
11 | Play | Play
12 | Play | Play
13 | Play | Play
14 | Don't Play | Don't Play
(14 rows)
</pre></li>
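<li>Count the misclassifications on the training data (an illustrative sanity check, not part of the module's output). <pre class="example">
SELECT count(*) AS misclassified
FROM prediction_results JOIN dt_golf USING (id)
WHERE estimated_class != class;
</pre></li>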
<li>Predict probabilities of output categories for the same data. <pre class="example">
DROP TABLE IF EXISTS prediction_prob;
SELECT madlib.forest_predict('train_output',
'dt_golf',
'prediction_prob',
'prob');
\x off
SELECT id, "estimated_prob_Play", class
FROM prediction_prob JOIN dt_golf USING (id)
ORDER BY id;
</pre> Result: <pre class="result">
id | estimated_prob_Play | class
----+---------------------+------------
1 | 0.15 | Don't Play
2 | 0.1 | Don't Play
3 | 0.95 | Play
4 | 0.7 | Play
5 | 0.85 | Play
6 | 0.25 | Don't Play
7 | 0.75 | Play
8 | 0.1 | Don't Play
9 | 0.85 | Play
10 | 0.7 | Play
11 | 0.35 | Play
12 | 0.75 | Play
13 | 0.95 | Play
14 | 0.15 | Don't Play
(14 rows)
</pre></li>
</ol>
<p><b>Random Forest Regression Example</b></p>
<ol type="1">
<li>Prepare input data. <pre class="example">
DROP TABLE IF EXISTS mt_cars;
CREATE TABLE mt_cars (
id integer NOT NULL,
mpg double precision,
cyl integer,
disp double precision,
hp integer,
drat double precision,
wt double precision,
qsec double precision,
vs integer,
am integer,
gear integer,
carb integer
) ;
</pre> <pre class="example">
INSERT INTO mt_cars (id,mpg,cyl,disp,hp,drat,wt,qsec,vs,am,gear,carb) VALUES
(1,18.7,8,360,175,3.15,3.44,17.02,0,0,3,2),
(2,21,6,160,110,3.9,2.62,16.46,0,1,4,4),
(3,24.4,4,146.7,62,3.69,3.19,20,1,0,4,2),
(4,21,6,160,110,3.9,2.875,17.02,0,1,4,4),
(5,17.8,6,167.6,123,3.92,3.44,18.9,1,0,4,4),
(6,16.4,8,275.8,180,3.078,4.07,17.4,0,0,3,3),
(7,22.8,4,108,93,3.85,2.32,18.61,1,1,4,1),
(8,17.3,8,275.8,180,3.078,3.73,17.6,0,0,3,3),
(9,21.4,6,258,110,3.08,3.215,19.44,1,0,3,1),
(10,15.2,8,275.8,180,3.078,3.78,18,0,0,3,3),
(11,18.1,6,225,105,2.768,3.46,20.22,1,0,3,1),
(12,32.4,4,78.7,66,4.08,2.20,19.47,1,1,4,1),
(13,14.3,8,360,245,3.21,3.578,15.84,0,0,3,4),
(14,22.8,4,140.8,95,3.92,3.15,22.9,1,0,4,2),
(15,30.4,4,75.7,52,4.93,1.615,18.52,1,1,4,2),
(16,19.2,6,167.6,123,3.92,3.44,18.3,1,0,4,4),
(17,33.9,4,71.14,65,4.22,1.835,19.9,1,1,4,1),
(18,15.2,8,304,150,3.15,3.435,17.3,0,0,3,2),
(19,10.4,8,472,205,2.93,5.25,17.98,0,0,3,4),
(20,27.3,4,79,66,4.08,1.935,18.9,1,1,4,1),
(21,10.4,8,460,215,3,5.424,17.82,0,0,3,4),
(22,26,4,120.3,91,4.43,2.14,16.7,0,1,5,2),
(23,14.7,8,440,230,3.23,5.345,17.42,0,0,3,4),
(24,30.4,4,95.14,113,3.77,1.513,16.9,1,1,5,2),
(25,21.5,4,120.1,97,3.70,2.465,20.01,1,0,3,1),
(26,15.8,8,351,264,4.22,3.17,14.5,0,1,5,4),
(27,15.5,8,318,150,2.768,3.52,16.87,0,0,3,2),
(28,15,8,301,335,3.54,3.578,14.6,0,1,5,8),
(29,13.3,8,350,245,3.73,3.84,15.41,0,0,3,4),
(30,19.2,8,400,175,3.08,3.845,17.05,0,0,3,2),
(31,19.7,6,145,175,3.62,2.77,15.5,0,1,5,6),
(32,21.4,4,121,109,4.11,2.78,18.6,1,1,4,2);
</pre></li>
<li>Run Random Forest train function. <pre class="example">
DROP TABLE IF EXISTS mt_cars_output, mt_cars_output_group, mt_cars_output_summary;
SELECT madlib.forest_train('mt_cars',  -- source table
'mt_cars_output',                 -- output model table
'id',                             -- id column
'mpg',                            -- response
'*',                              -- features
'id, hp, drat, am, gear, carb',   -- exclude columns
'am',                             -- grouping columns
10::integer,                      -- number of trees
2::integer,                       -- number of random features
TRUE::boolean,                    -- variable importance
1,                                -- num_permutations
10,                               -- max depth
8,                                -- min split
3,                                -- min bucket
10                                -- number of splits per continuous variable
);
\x on
SELECT * FROM mt_cars_output_summary;
SELECT * FROM mt_cars_output_group;
\x off
</pre></li>
<li>Display a single tree of the Random Forest in dot format. <pre class="example">
SELECT madlib.get_tree('mt_cars_output',1,1);
</pre> Result: <pre class="result">
digraph "Regression tree for mt_cars" {
"0" [label="28.8444",shape=box];
} //---end of digraph---------
</pre></li>
<li>Predict regression output for the same data and compare with original. <pre class="example">
DROP TABLE IF EXISTS prediction_results;
SELECT madlib.forest_predict('mt_cars_output',
'mt_cars',
'prediction_results',
'response');
SELECT am, id, estimated_mpg, mpg
FROM prediction_results JOIN mt_cars USING (id)
ORDER BY am, id;
</pre> Result: <pre class="result">
am | id | estimated_mpg | mpg
----+----+------------------+------
0 | 1 | 15.893525974026 | 18.7
0 | 3 | 21.5238492063492 | 24.4
0 | 5 | 20.0175396825397 | 17.8
0 | 6 | 14.8406818181818 | 16.4
0 | 8 | 14.8406818181818 | 17.3
0 | 9 | 20.0496825396825 | 21.4
0 | 10 | 14.4012272727273 | 15.2
0 | 11 | 20.0175396825397 | 18.1
0 | 13 | 15.0162878787879 | 14.3
0 | 14 | 21.5238492063492 | 22.8
0 | 16 | 20.0175396825397 | 19.2
0 | 18 | 15.4787532467532 | 15.2
0 | 19 | 14.4272987012987 | 10.4
0 | 21 | 14.4272987012987 | 10.4
0 | 23 | 14.8667532467532 | 14.7
0 | 25 | 21.5238492063492 | 21.5
0 | 27 | 15.281525974026 | 15.5
0 | 29 | 15.0162878787879 | 13.3
0 | 30 | 15.281525974026 | 19.2
1 | 2 | 20.6527393162393 | 21
1 | 4 | 20.6527393162393 | 21
1 | 7 | 22.7707393162393 | 22.8
1 | 12 | 27.0888266178266 | 32.4
1 | 15 | 28.2478650793651 | 30.4
1 | 17 | 28.2478650793651 | 33.9
1 | 20 | 28.2478650793651 | 27.3
1 | 22 | 23.8401984126984 | 26
1 | 24 | 26.9748650793651 | 30.4
1 | 26 | 20.6527393162393 | 15.8
1 | 28 | 20.6527393162393 | 15
1 | 31 | 20.6527393162393 | 19.7
1 | 32 | 22.7707393162393 | 21.4
</pre></li>
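<li>Compute the root-mean-square error of the in-sample predictions for each group (an illustrative check, not part of the module's output). <pre class="example">
SELECT am, sqrt(avg((estimated_mpg - mpg)^2)) AS rmse
FROM prediction_results JOIN mt_cars USING (id)
GROUP BY am
ORDER BY am;
</pre></li>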
</ol>
<p><a class="anchor" id="related"></a></p><dl class="section user"><dt>Related Topics</dt><dd></dd></dl>
<p>File <a class="el" href="random__forest_8sql__in.html">random_forest.sql_in</a> documenting the training function</p>
<p><a class="el" href="group__grp__decision__tree.html">Decision Tree</a></p>
</div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="footer">Generated on Tue Sep 20 2016 11:27:01 for MADlib by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.10 </li>
</ul>
</div>
</body>
</html>