Random forest
Random forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random forests correct for decision trees' habit of overfitting to their training set.
The algorithm for inducing a random forest was developed by Leo Breiman[1] and Adele Cutler,[2] and "Random Forests" is their trademark. The method combines Breiman's "bagging" idea and the random selection of features, introduced independently by Ho[3][4] and Amit and Geman[5] in order to construct a collection of decision trees with controlled variance.
The selection of a random subset of features is an example of the random subspace method, which, in Ho's formulation, is a way to implement classification proposed by Eugene Kleinberg.[6]
History
The early development of random forests was influenced by the work of Amit and Geman,[5] who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a single tree. The idea of random subspace selection from Ho[4] was also influential in the design of random forests. In this method a forest of trees is grown, and variation among the trees is introduced by projecting the training data into a randomly chosen subspace before fitting each tree. Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure rather than a deterministic optimization, was first introduced by Dietterich.[7]
The introduction of random forests proper was first made in a paper by Leo Breiman.[1] This paper describes a method of building a forest of uncorrelated trees using a CART-like procedure, combined with randomized node optimization and bagging. In addition, the paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular:
- Using out-of-bag error as an estimate of the generalization error.
- Measuring variable importance through permutation.
The report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation.
Algorithm
Preliminaries: decision tree learning
Decision trees are a popular method for various machine learning tasks. Tree learning "come[s] closest to meeting the requirements for serving as an off-the-shelf procedure for data mining", say Hastie et al., because it is invariant under scaling and various other transformations of feature values, is robust to inclusion of irrelevant features, and produces inspectable models. However, single trees are seldom accurate.[8]:352
In particular, trees that are grown very deep tend to learn highly irregular patterns: they overfit their training sets, because they have low bias, but very high variance. Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance.[8]:587–588 This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance of the final model.
Tree bagging
The training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners. Given a training set X = x_1, …, x_n with responses Y = y_1, …, y_n, bagging repeatedly selects a random sample with replacement of the training set and fits trees to these samples:

- For b = 1, …, B:
  - Sample, with replacement, n training examples from X, Y; call these X_b, Y_b.
  - Train a decision or regression tree f_b on X_b, Y_b.

After training, predictions for unseen samples x' can be made by averaging the predictions from all the individual regression trees on x':

$$\hat{f} = \frac{1}{B} \sum_{b=1}^{B} f_b(x'),$$

or by taking the majority vote in the case of decision trees.
This bootstrapping procedure leads to better model performance because it decreases the variance of the model, without increasing the bias. This means that while the predictions of a single tree are highly sensitive to noise in its training set, the average of many trees is not, as long as the trees are not correlated. Simply training many trees on a single training set would give strongly correlated trees (or even the same tree many times, if the training algorithm is deterministic); bootstrap sampling is a way of de-correlating the trees by showing them different training sets.
The number of samples/trees, B, is a free parameter. Typically, a few hundred to several thousand trees are used, depending on the size and nature of the training set. An optimal number of trees B can be found using cross-validation, or by observing the out-of-bag error: the mean prediction error on each training sample x_i, using only the trees that did not have x_i in their bootstrap sample.[9] The training and test error tend to level off after some number of trees have been fit.
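To make the procedure concrete, a minimal sketch in Python is given below, assuming NumPy and scikit-learn are available; the function names fit_bagged_trees and predict_bagged are illustrative, not part of any library.

```python
# A minimal sketch of tree bagging for regression (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_bagged_trees(X, y, B=100, random_state=0):
    """Fit B regression trees, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(random_state)
    n = len(X)
    trees = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # sample n rows with replacement
        tree = DecisionTreeRegressor()     # fully grown: low bias, high variance
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def predict_bagged(trees, X_new):
    """Average the individual trees' predictions: f_hat = (1/B) * sum_b f_b(x')."""
    return np.mean([t.predict(X_new) for t in trees], axis=0)

# Example usage: trees = fit_bagged_trees(X_train, y_train, B=200)
#                y_hat = predict_bagged(trees, X_test)
```

Because each tree sees a different bootstrap sample, their errors are partly independent, so the average returned by predict_bagged has lower variance than any single tree.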
From bagging to random forests
The above procedure describes the original bagging algorithm for trees. Random forests differ in only one way from this general scheme: they use a modified tree learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. This process is sometimes called "feature bagging". The reason for doing this is the correlation of the trees in an ordinary bootstrap sample: if one or a few features are very strong predictors for the response variable (target output), these features will be selected in many of the B trees, causing them to become correlated.
Typically, for a dataset with p features, √p features are used in each split.
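As an illustration, the sketch below assumes scikit-learn is installed and uses its built-in breast-cancer dataset; max_features="sqrt" asks the tree learner to consider only √p randomly chosen features at each split, which is the "feature bagging" step that distinguishes a random forest from plain tree bagging.

```python
# A short sketch of a random forest with per-split feature subsampling (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(
    n_estimators=500,      # B: number of bootstrapped trees
    max_features="sqrt",   # features considered at each split (sqrt(p) here)
    oob_score=True,        # estimate generalization error from out-of-bag samples
    random_state=0,
)
forest.fit(X, y)
print("Out-of-bag accuracy:", forest.oob_score_)
```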
Extensions
Adding one further step of randomization yields extremely randomized trees, or ExtraTrees. These are trained using bagging and the random subspace method, as in an ordinary random forest, but additionally the top-down splitting in the tree learner is randomized. Instead of computing the locally optimal feature/split combination (based on, e.g., information gain or the Gini impurity), for each feature under consideration a random cut-point is selected in the feature's empirical range (in the tree's training set, i.e., the bootstrap sample). The best of these randomly generated splits is then chosen to split the node.[10]
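As a hedged illustration, scikit-learn ships an ExtraTreesClassifier that implements the randomized split thresholds described above; note that, following Geurts et al., its default is to train each tree on the full sample (bootstrap=False) rather than on bootstrap samples.

```python
# A sketch of extremely randomized trees using scikit-learn's ExtraTreesClassifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
extra = ExtraTreesClassifier(n_estimators=500, max_features="sqrt", random_state=0)
print("CV accuracy:", cross_val_score(extra, X, y, cv=5).mean())
```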
Properties
Variable importance
Random forests can be used to rank the importance of variables in a regression or classification problem in a natural way. The following technique was described in Breiman's original paper[1] and is implemented in the R package randomForest.[2]
The first step in measuring the variable importance in a data set is to fit a random forest to the data. During the fitting process the out-of-bag error for each data point is recorded and averaged over the forest (errors on an independent test set can be substituted if bagging is not used during training).
To measure the importance of the j-th feature after training, the values of the j-th feature are permuted among the training data and the out-of-bag error is again computed on this perturbed data set. The importance score for the j-th feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. The score is normalized by the standard deviation of these differences.
Features which produce large values for this score are ranked as more important than features which produce small values.
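The sketch below illustrates the idea with scikit-learn's permutation_importance helper; it permutes each feature on a held-out test set rather than on the out-of-bag samples, so it is a close relative of Breiman's formulation rather than an exact reproduction.

```python
# A minimal sketch of permutation-based variable importance (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=20, random_state=0)

# Rank features by mean importance (largest drop in accuracy when permuted).
ranking = sorted(zip(result.importances_mean, range(X.shape[1])), reverse=True)
for score, j in ranking[:5]:
    print(f"feature {j}: importance {score:.4f}")
```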
This method of determining variable importance has some drawbacks. For data including categorical variables with different numbers of levels, random forests are biased in favor of the attributes with more levels. Methods such as partial permutations[11][12] and growing unbiased trees[13] can be used to solve the problem. If the data contain groups of correlated features of similar relevance for the output, then smaller groups are favored over larger groups.[14]
Relationship to nearest neighbors
A relationship between random forests and the [[K-nearest neighbor algorithm|k-nearest neighbor algorithm]] (k-NN) was pointed out by Lin and Jeon in 2002.[15] It turns out that both can be viewed as so-called weighted neighborhood schemes. These are models built from a training set (x_1, y_1), …, (x_n, y_n) that make predictions for new points x' by looking at the "neighborhood" of the point, formalized by a weight function W:

$$\hat{y} = \sum_{i=1}^{n} W(x_i, x') \, y_i.$$

Here, W(x_i, x') is the non-negative weight of the i-th training point relative to the new point x'. For any particular x', the weights must sum to one. Weight functions are given as follows:

- In k-NN, the weights are W(x_i, x') = 1/k if x_i is one of the k points closest to x', and zero otherwise.
- In a tree, W(x_i, x') = 1/k' if x_i is one of the k' training points that fall into the same leaf as x', and zero otherwise.

Since a forest averages the predictions of a set of m trees with individual weight functions W_j, its predictions are

$$\hat{y} = \frac{1}{m} \sum_{j=1}^{m} \sum_{i=1}^{n} W_j(x_i, x') \, y_i = \sum_{i=1}^{n} \left( \frac{1}{m} \sum_{j=1}^{m} W_j(x_i, x') \right) y_i.$$
This shows that the whole forest is again a weighted neighborhood scheme, with weights that average those of the individual trees. The neighbors of Template:Mvar in this interpretation are the points which fall in the same leaf as Template:Mvar in at least one tree of the forest. In this way, the neighborhood of Template:Mvar depends in a complex way on the structure of the trees, and thus on the structure of the training set. Lin and Jeon show that the shape of the neighborhood used by a random forest adapts to the local importance of each feature.[15]
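The identity above can be checked numerically. The sketch below is an illustration, assuming scikit-learn: it disables bootstrapping so that every tree is fit on the full training set, in which case each tree's leaf prediction is exactly the mean of the responses in that leaf, and the weighted-neighborhood average reproduces the forest's predictions.

```python
# Verify the weighted-neighborhood view of a forest (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)
X_train, y_train, X_new = X[:150], y[:150], X[150:]

# bootstrap=False: every tree sees the full training set, so each leaf's
# prediction is exactly the mean of y over the training points in that leaf.
forest = RandomForestRegressor(n_estimators=50, bootstrap=False, random_state=0)
forest.fit(X_train, y_train)

train_leaves = forest.apply(X_train)   # shape (n_train, n_trees): leaf id per tree
new_leaves = forest.apply(X_new)       # shape (n_new, n_trees)

preds = []
for row in new_leaves:
    # Weight of training point i = average over trees of 1/(leaf size) if it
    # shares a leaf with x', else 0 -- the weighted-neighborhood form above.
    same_leaf = (train_leaves == row)                  # (n_train, n_trees) booleans
    weights = (same_leaf / same_leaf.sum(axis=0)).mean(axis=1)
    preds.append(np.dot(weights, y_train))

print(np.allclose(preds, forest.predict(X_new)))       # True
```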
Unsupervised learning with random forests
As part of their construction, RF predictors naturally lead to a dissimilarity measure between the observations. One can also define an RF dissimilarity measure between unlabeled data: the idea is to construct an RF predictor that distinguishes the "observed" data from suitably generated synthetic data.[1][16] The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. An RF dissimilarity can be attractive because it handles mixed variable types well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The RF dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the "Addcl 1" RF dissimilarity weighs the contribution of each variable according to how dependent it is on other variables. The RF dissimilarity has been used in a variety of applications, e.g. to find clusters of patients based on tissue marker data.[17]
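A rough sketch of this construction is given below; the reference distribution here permutes each column independently (in the spirit of the "Addcl 1" scheme mentioned above), and the function name rf_dissimilarity is illustrative rather than canonical. Some formulations use sqrt(1 − proximity) rather than 1 − proximity as the final dissimilarity.

```python
# Sketch of an unsupervised RF dissimilarity (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_dissimilarity(X, n_trees=500, random_state=0):
    """Train a forest to separate observed from synthetic data, then use 1 - proximity."""
    rng = np.random.default_rng(random_state)
    n, p = X.shape

    # Synthetic data: permute each column independently, keeping the marginal
    # distributions but destroying the dependence structure between variables.
    X_synth = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])

    X_all = np.vstack([X, X_synth])
    y_all = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = observed, 0 = synthetic

    forest = RandomForestClassifier(n_estimators=n_trees, random_state=random_state)
    forest.fit(X_all, y_all)

    # Proximity of two observed points = fraction of trees in which they land
    # in the same leaf; dissimilarity = 1 - proximity.
    leaves = forest.apply(X)                 # (n, n_trees) leaf indices
    proximity = np.zeros((n, n))
    for t in range(leaves.shape[1]):
        proximity += leaves[:, t][:, None] == leaves[:, t][None, :]
    proximity /= leaves.shape[1]
    return 1.0 - proximity
```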
Variants
Instead of decision trees, linear models have been proposed and evaluated as base estimators in random forests, in particular multinomial logistic regression and naive Bayes classifiers.[18][19]
See also
- Decision tree learning
- Gradient boosting
- Randomized algorithm
- Bootstrap aggregating (bagging)
- Ensemble learning
- Boosting
- Non-parametric statistics
References
- ↑ 1.0 1.1 1.2 1.3 Breiman, Leo (2001). "Random Forests". Machine Learning 45 (1): 5–32. doi:10.1023/A:1010933404324.
- ↑ 2.0 2.1 Template:Cite web
- ↑ Ho, Tin Kam (1995). "Random Decision Forests". Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995. pp. 278–282.
- ↑ 4.0 4.1 Ho, Tin Kam (1998). "The Random Subspace Method for Constructing Decision Forests". IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (8): 832–844. doi:10.1109/34.709601.
- ↑ 5.0 5.1 Amit, Yali; Geman, Donald (1997). "Shape Quantization and Recognition with Randomized Trees". Neural Computation 9 (7): 1545–1588.
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ Dietterich, Thomas G. (2000). "An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization". Machine Learning 40 (2): 139–157.
- ↑ 8.0 8.1 Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2009). The Elements of Statistical Learning (2nd ed.). Springer.
- ↑ {{#invoke:citation/CS1|citation |CitationClass=book }}
- ↑ Geurts, Pierre; Ernst, Damien; Wehenkel, Louis (2006). "Extremely Randomized Trees". Machine Learning 63 (1): 3–42. doi:10.1007/s10994-006-6226-1.
- ↑ {{#invoke:citation/CS1|citation |CitationClass=conference }}
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ 15.0 15.1 Lin, Yi; Jeon, Yongho (2002). Random Forests and Adaptive Nearest Neighbors (Technical Report No. 1055). University of Wisconsin–Madison, Department of Statistics.
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
- ↑ Prinzie, A., Van den Poel, D. (2007). Random Multiclass Classification: Generalizing Random Forests to Random MNL and Random NB, Dexa 2007, Lecture Notes in Computer Science, 4653, 349–358.
External links
- Random Forests classifier description (Site of Leo Breiman)
- Liaw, Andy & Wiener, Matthew "Classification and Regression by randomForest" R News (2002) Vol. 2/3 p. 18 (Discussion of the use of the random forest package for R)
- Ho, Tin Kam (2002). "A Data Complexity Analysis of Comparative Advantages of Decision Forest Constructors". Pattern Analysis and Applications 5, pp. 102–112 (Comparison of bagging and random subspace method)
- C# implementation of the random forest algorithm for categorizing text documents, supporting document reading, dictionary construction, stop-word filtering, stemming, word counting, building a document-term matrix, and using it to train a random forest and categorize new documents.
- A Python implementation of the random forest algorithm supporting regression and classification, with multi-output support.