Improving the Accuracy of Neural Network Pattern Recognition by Fractional Gradient Descent

Saved in:
Bibliographic record details
Main authors: Abdulkadirov, R. I., Lyakhov, P. A., Baboshina, V. A., Nagornov, N. N.
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc., 2024
Available online: https://dspace.ncfu.ru/handle/123456789/29336
Description
Abstract: In this paper we propose fractional gradient descent for improving the training and performance of modern neural networks. This optimizer searches for the global minimum of the loss function by following fractional gradient directions obtained from the Riemann-Liouville, Caputo, and Grünwald-Letnikov derivatives. Adjusting the size and direction of the fractional gradient, supported by momentum and the Nesterov condition, lets the proposed optimizer descend to the global minimum of the loss functions of neural networks. Applying the proposed optimization algorithm in a linear neural network and a vision transformer attains accuracy, precision, recall, and Macro F1 scores 1.8-4 percentage points higher than state-of-the-art methods on pattern recognition problems with images from the MNIST and CIFAR10 datasets. Further research on fractional calculus in modern neural network methodology can improve the quality of solutions to various challenges such as pattern recognition, time series forecasting, moving object detection, and data generation.
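
As a rough, illustrative sketch (not the authors' implementation), the Python code below shows one common Grünwald-Letnikov-style discretization of a fractional gradient step combined with Nesterov momentum. The function names, the truncation length n_terms, the default fractional order alpha, and the choice of applying the GL binomial weights to a short memory of past gradients are all assumptions made here for illustration; the paper's exact Riemann-Liouville and Caputo variants may differ.

    import numpy as np

    def gl_coefficients(alpha, n_terms):
        # Grünwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
        # computed with the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
        w = np.empty(n_terms)
        w[0] = 1.0
        for k in range(1, n_terms):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        return w

    def fractional_gd(grad_fn, theta0, alpha=0.5, lr=0.05, momentum=0.9,
                      n_terms=5, n_steps=200):
        # grad_fn: callable returning the loss gradient at a point (hypothetical API).
        theta = np.asarray(theta0, dtype=float)
        velocity = np.zeros_like(theta)
        weights = gl_coefficients(alpha, n_terms)
        history = []  # most recent gradients first
        for _ in range(n_steps):
            # Nesterov look-ahead: evaluate the gradient at the anticipated position.
            g = grad_fn(theta + momentum * velocity)
            history = [g] + history[:n_terms - 1]
            # Fractional gradient: GL weights over the short gradient memory
            # (alpha = 0 keeps only the current gradient, i.e. ordinary descent).
            frac_grad = sum(w * h for w, h in zip(weights, history))
            velocity = momentum * velocity - lr * frac_grad
            theta = theta + velocity
        return theta

    # Toy usage: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
    print(fractional_gd(lambda t: 2.0 * t, theta0=[3.0, -2.0]))

The truncated memory keeps the per-step cost at O(n_terms) gradient combinations rather than the full fractional-derivative history, which is one standard trade-off in discretizing such optimizers.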