# --------------------------------------------
# CITATION file created with {cffr} R package
# See also: https://docs.ropensci.org/cffr/
# --------------------------------------------
cff-version: 1.2.0
message: 'To cite package "tree.interpreter" in publications use:'
type: software
license: MIT
title: 'tree.interpreter: Random Forest Prediction Decomposition and Feature Importance
Measure'
version: 0.1.1
identifiers:
- type: doi
value: 10.32614/CRAN.package.tree.interpreter
abstract: An R re-implementation of the 'treeinterpreter' package on PyPI.
Each prediction can be decomposed as 'prediction = bias + feature_1_contribution
+ ... + feature_n_contribution'. This decomposition is then used to calculate the
Mean Decrease Impurity (MDI) and Mean Decrease Impurity using out-of-bag samples
  (MDI-oob) feature importance measures based on the work of Li et al. (2019).
authors:
- family-names: Sun
given-names: Qingyao
email: sunqingyao19970825@gmail.com
preferred-citation:
type: article
title: A Debiased MDI Feature Importance Measure for Random Forests
authors:
- family-names: Li
given-names: Xiao
- family-names: Wang
given-names: Yu
- family-names: Basu
given-names: Sumanta
- family-names: Kumbier
given-names: Karl
- family-names: Yu
given-names: Bin
url: http://arxiv.org/abs/1906.10845
abstract: Tree ensembles such as Random Forests have achieved impressive empirical
success across a wide variety of applications. To understand how these models
make predictions, people routinely turn to feature importance measures calculated
from tree ensembles. It has long been known that Mean Decrease Impurity (MDI),
one of the most widely used measures of feature importance, incorrectly assigns
high importance to noisy features, leading to systematic bias in feature selection.
In this paper, we address the feature selection bias of MDI from both theoretical
and methodological perspectives. Based on the original definition of MDI by Breiman
et al. for a single tree, we derive a tight non-asymptotic bound on the expected
bias of MDI importance of noisy features, showing that deep trees have higher
(expected) feature selection bias than shallow ones. However, it is not clear
how to reduce the bias of MDI using its existing analytical expression. We derive
a new analytical expression for MDI, and based on this new expression, we are
able to propose a debiased MDI feature importance measure using out-of-bag samples,
called MDI-oob. For both the simulated data and a genomic ChIP dataset, MDI-oob
achieves state-of-the-art performance in feature selection from Random Forests
for both deep and shallow trees.
date-accessed: '2019-10-18'
journal: arXiv:1906.10845 [cs, stat]
month: '6'
year: '2019'
notes: 'arXiv: 1906.10845'
keywords:
- Statistics - Machine Learning
- Computer Science - Machine Learning
repository: https://nalzok.r-universe.dev
repository-code: https://github.com/nalzok/tree.interpreter
commit: 0a04a7a790aa128141fc3592ac70da20d90633d4
url: https://github.com/nalzok/tree.interpreter
date-released: '2020-01-28'
contact:
- family-names: Sun
given-names: Qingyao
email: sunqingyao19970825@gmail.com