Loan classification using a deep feed-forward neural network
https://doi.org/10.29235/1561-2430-2025-61-2-159-174
Abstract
A deep feed-forward neural network model is developed and analyzed in this article to solve the financial loan classification problem. Using this model, trained on historical data on previously issued loans, the values of the traditional machine learning metrics that determine forecasting quality are calculated: the cost function, accuracy, precision, recall, and the F1 score. To achieve higher forecasting accuracy, the optimization methods of mini-batch gradient descent, gradient descent with momentum, adaptive moment estimation (Adam), and dropout regularization were used. An improved structure of the proposed neural network was determined, and the impact of the so-called He initialization on the final result was analyzed, as well as the efficiency of the specific optimization algorithms. The study showed that the use of deep feed-forward neural networks is reasonable in developing loan classifiers.
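The four threshold-based metrics named in the abstract are all derived from the confusion-matrix counts of a binary classifier. The following minimal sketch (illustrative only, not the authors' implementation; the function name and label encoding are assumptions) shows how they are computed from true and predicted loan labels, with 1 denoting a "bad" loan:

```python
def classification_metrics(y_true, y_pred):
    """Return accuracy, precision, recall and F1 for binary labels (0/1)."""
    # Confusion-matrix counts.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For example, with `y_true = [1, 1, 0, 0]` and `y_pred = [1, 0, 0, 1]` each count equals 1, so all four metrics evaluate to 0.5.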
About the Authors
U. I. Behunkou
Belarus
Uladzimir I. Behunkou – Master of Engineering
6, Surganov Str., 220012, Minsk
M. Y. Kovalyov
Belarus
Mikhail Y. Kovalyov – Dr. Sc. (Physics and Mathematics), Professor
6, Surganov Str., 220012, Minsk
References
1. Behunkou U. I., Kovalyov M. Y. Loan classification using logistic regression. Informatics, 2023, vol. 20, no. 1, pp. 55–74 (in Russian). https://doi.org/10.37661/1816-0301-2023-20-1-55-74
2. Behunkou U. I. Loan classification using a feed-forward neural network. Informatics, 2024, vol. 21, no. 1, pp. 83–104 (in Russian). https://doi.org/10.37661/1816-0301-2024-21-1-83-104
3. Lessmann S., Baesens B., Seow H.-V., Thomas L. C. Benchmarking state-of-the-art classification algorithms for credit scoring: An update of research. European Journal of Operational Research, 2015, vol. 247, no. 1, pp. 124–136. https://doi.org/10.1016/j.ejor.2015.05.030
4. Shalev-Shwartz S., Ben-David S. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. 397 p. https://doi.org/10.1017/CBO9781107298019
5. Rumelhart D., Hinton G., Williams R. Learning representations by back-propagating errors. Nature, 1986, vol. 323, pp. 533–536. https://doi.org/10.1038/323533a0
6. Geron A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow. 2nd ed. O’Reilly Media, 2019. 483 p.
7. Goodfellow I., Bengio Y., Courville A. Deep Learning. MIT Press, 2016. 800 p.
8. Glorot X., Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Vol. 9. 2010, pp. 249–256.
9. He K., Zhang X., Ren S., Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034. https://doi.org/10.1109/iccv.2015.123
10. LeCun Y., Bottou L., Orr G. B., Müller K.-R. Efficient BackProp. Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science, vol 1524. Berlin, Heidelberg, Springer, 1998, pp. 9–50. https://doi.org/10.1007/3-540-49430-8_2
11. Roberts S. W. Control chart tests based on geometric moving averages. Technometrics, 1959, vol. 1, no. 3, pp. 239–250. https://doi.org/10.1080/00401706.1959.10489860
12. Kingma D. P., Ba J. Adam: A Method for Stochastic Optimization. arXiv [Preprint], 2014. Available at: https://arxiv.org/abs/1412.6980; https://doi.org/10.48550/arXiv.1412.6980
13. Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014, vol. 15, no. 1, pp. 1929–1958.