Symbolic regression outperforms other models for small data sets

Casper Wilstrup, Jaan Kasak

Machine learning is often applied to obtain predictions and new understanding of complex phenomena and relationships, but availability of sufficient data for model training is a widespread problem. Traditional machine learning techniques such as random forests and gradient boosting tend to overfit when working with data sets of a few hundred samples. This study demonstrates that for small training sets of 250 observations, symbolic regression is a superior alternative to these machine learning models, providing better accuracy while preserving the interpretability of linear models and decision trees. In 132 out of 240 cases, the symbolic regression model performs better than any of the other models on the out-of-sample data. The second-best algorithm was found to be a random forest, which performs best in 37 of the 240 cases. When restricting the comparison to interpretable models, symbolic regression performs best in 184 out of 240 cases.
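The evaluation protocol described above (train each model on a small sample of 250 observations, then compare out-of-sample accuracy) can be sketched roughly as follows. This is not the authors' code: the data set (`make_friedman1`), the choice of scikit-learn models, and the use of out-of-sample R² are illustrative assumptions, and a symbolic regression model (e.g. via the `gplearn` library or Abzu's QLattice) would be fitted and scored alongside these baselines in the same way.

```python
# Illustrative sketch of a small-sample model comparison (assumptions:
# synthetic Friedman #1 data, R^2 as the out-of-sample metric, and
# scikit-learn baselines; a symbolic regressor would be added the same way).
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)
X_train, y_train = X[:250], y[:250]   # small training set, as in the study
X_test, y_test = X[250:], y[250:]     # held-out out-of-sample data

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
scores = {
    name: r2_score(y_test, model.fit(X_train, y_train).predict(X_test))
    for name, model in models.items()
}
best = max(scores, key=scores.get)
print(best, scores)
```

Repeating this procedure across many data sets and random splits, and counting how often each model scores best on the held-out data, yields win counts of the kind reported in the abstract (e.g. 132 out of 240 cases).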
