Advanced Decision Making and Interpretability through Neural Shrubs
Advanced decision making using machine learning should be both accurate and interpretable. Many standard machine learning techniques suffer from an inherent lack of transparency about how the resulting decision was reached. In this work, we aim to overcome this issue by introducing a hybrid learning approach, dubbed a neural shrub, that combines classical decision trees with artificial neural networks. The neural shrub methodology presented in this paper aims to preserve as much interpretability as possible without sacrificing either classification or regression accuracy. Experimental results on several benchmark data sets validate the proposed approach and provide insight into future research directions.
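The abstract describes the hybrid only at a high level. One plausible reading of a tree/neural-network hybrid is a shallow, interpretable decision tree whose leaves each host a small neural network that refines the local prediction. The sketch below illustrates that idea using scikit-learn; the `NeuralShrub` class, its hyperparameters, and the leaf-wise training scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

class NeuralShrub:
    """Hypothetical sketch: a shallow decision tree partitions the input
    space interpretably, and a small neural network is fit within each
    leaf to refine the local prediction."""

    def __init__(self, max_depth=2, hidden=(16,)):
        self.tree = DecisionTreeRegressor(max_depth=max_depth)
        self.hidden = hidden
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)              # interpretable partition of the space
        leaves = self.tree.apply(X)      # leaf index for each training sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            net = MLPRegressor(hidden_layer_sizes=self.hidden,
                               max_iter=2000, random_state=0)
            net.fit(X[mask], y[mask])    # local model on this leaf's samples
            self.leaf_models[leaf] = net
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)      # route each sample to its leaf
        out = np.empty(len(X))
        for leaf, net in self.leaf_models.items():
            mask = leaves == leaf
            if mask.any():
                out[mask] = net.predict(X[mask])
        return out

# Toy regression problem: noisy sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(400)

model = NeuralShrub().fit(X, y)
preds = model.predict(X)
```

The appeal of such a design is that the tree's splits remain human-readable (each leaf corresponds to an explicit region of feature space), while the per-leaf networks recover accuracy that a piecewise-constant tree of the same depth would lose.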