Granting a short-term loan is a critical decision. A great deal of research has concerned the prediction of credit default, notably through Machine Learning (ML) algorithms. However, given that their black-box nature has sometimes led to unwanted outcomes, comprehensibility in ML-guided decision-making strategies has become more important. In many domains, transparency and accountability are no longer optional. In this article, instead of pitting white-box models against black-box models, we use a multi-step procedure that combines the Fast-and-Frugal Tree (FFT) methodology of Martignon et al. (2005) and Phillips et al. (2017) with the extraction of post-hoc explainable information from ensemble ML models. New interpretable models are then built by incorporating explainable ML outputs selected through human intervention. Our methodology significantly improves the accuracy of the FFT predictions while preserving their explainable nature. We apply our approach to a dataset of short-term loans granted to borrowers in the UK, and show how complex machine learning can challenge simpler machines and help decision makers.
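To illustrate the kind of model the abstract refers to, the following is a minimal sketch of a Fast-and-Frugal Tree classifier in the spirit of Martignon et al. (2005): an ordered sequence of binary cue checks, where each cue can exit early with a decision and the final cue exits on both branches. The cue names and thresholds below are hypothetical, for illustration only; they are not the cues used in the paper.

```python
def fft_classify(applicant, cues):
    """Classify an applicant with a Fast-and-Frugal Tree (FFT).

    cues: ordered list of (cue_name, threshold, exit_decision, exit_on_high)
          tuples. At each cue, if the comparison matches the exit branch,
          the tree stops and returns exit_decision; otherwise it moves on
          to the next cue. The last cue exits on both branches.
    """
    for i, (name, threshold, exit_decision, exit_on_high) in enumerate(cues):
        above = applicant[name] > threshold
        if above == exit_on_high:
            return exit_decision  # early exit on this branch
        if i == len(cues) - 1:
            # Final cue: the non-exit branch also terminates,
            # with the opposite decision.
            return "grant" if exit_decision == "reject" else "reject"


# Hypothetical three-cue tree for short-term loan decisions.
cues = [
    ("debt_ratio", 0.6, "reject", True),     # high debt ratio -> reject
    ("prior_defaults", 0, "reject", True),   # any prior default -> reject
    ("monthly_income", 2000, "grant", True), # final cue: exits both ways
]

print(fft_classify({"debt_ratio": 0.3, "prior_defaults": 0,
                    "monthly_income": 2500}, cues))  # grant
print(fft_classify({"debt_ratio": 0.8, "prior_defaults": 0,
                    "monthly_income": 2500}, cues))  # reject
```

The simplicity of this structure — a handful of ordered cues, each a single comparison — is what makes FFT decisions easy to explain, and it is this structure the paper's procedure enriches with cues suggested by ensemble ML models.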