Comparative Analysis of Fast Matrix Multiplication Methods on Different Datatypes
| Main authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer Science and Business Media Deutschland GmbH, 2024 |
| Online link: | https://dspace.ncfu.ru/handle/123456789/29206 |
Abstract: A central problem in artificial intelligence is increasing the performance and quality of problem solutions. As the architectures of modern neural networks grow, advanced mathematical methods must be brought to bear: deep learning models consume more and more hardware resources, which drives up computational cost. It is therefore necessary to modify machine learning models at a fundamental level by using alternative matrix multiplication methods. This article presents a comparative analysis of the computational complexity of matrix multiplication implemented with the standard, Strassen, and Strassen-Winograd methods. We measure running time for the int32, int64, float32, and float64 data types; in addition, the number of recursion levels is determined for each matrix size. The experimental results show that the Strassen-Winograd method reduces time costs by 3%–6% compared with the Strassen method and by 30%–40% compared with the standard approach. Such an approach can be incorporated into convolutional, spiking, and autoencoder layers.
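For context: the classical block-recursive algorithm performs eight block products per level, giving O(n^3) work, while Strassen's scheme gets by with seven, for roughly O(n^2.807); the Winograd variant keeps the seven products but trims the block additions from 18 to 15, which is where its small constant-factor advantage over plain Strassen comes from. The sketch below is a minimal NumPy illustration of that variant, not the implementation benchmarked in the article; the function name, the leaf cutoff of 64, and the 256×256 test matrices are arbitrary illustrative choices.

```python
import numpy as np

def winograd_strassen(A, B, leaf=64):
    """Strassen-Winograd multiply for square matrices whose size is a
    power of two: 7 recursive block products and 15 block additions per
    level (Strassen's original form needs 18 additions).

    `leaf` is the cutoff below which plain matmul is used; 64 is an
    illustrative choice, not a value taken from the paper."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B

    m = n // 2
    a, b = A[:m, :m], A[:m, m:]
    c, d = A[m:, :m], A[m:, m:]
    e, f = B[:m, :m], B[:m, m:]
    g, h = B[m:, :m], B[m:, m:]

    # 8 pre-additions on the operand blocks.
    s1 = c + d
    s2 = s1 - a
    s3 = a - c
    s4 = b - s2
    s5 = f - e
    s6 = h - s5
    s7 = h - f
    s8 = s6 - g

    # 7 recursive block products (the classical algorithm needs 8).
    m1 = winograd_strassen(s2, s6, leaf)
    m2 = winograd_strassen(a, e, leaf)
    m3 = winograd_strassen(b, g, leaf)
    m4 = winograd_strassen(s3, s7, leaf)
    m5 = winograd_strassen(s1, s5, leaf)
    m6 = winograd_strassen(s4, h, leaf)
    m7 = winograd_strassen(d, s8, leaf)

    # 7 post-additions assemble the result blocks (15 additions total).
    t1 = m1 + m2
    t2 = t1 + m4

    C = np.empty((n, n), dtype=A.dtype)
    C[:m, :m] = m2 + m3
    C[:m, m:] = t1 + m5 + m6
    C[m:, :m] = t2 - m7
    C[m:, m:] = t2 + m5
    return C

# Quick check against NumPy for each data type named in the abstract.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for dtype in (np.int32, np.int64, np.float32, np.float64):
        if np.issubdtype(dtype, np.integer):
            A = rng.integers(-10, 10, (256, 256)).astype(dtype)
            B = rng.integers(-10, 10, (256, 256)).astype(dtype)
            assert np.array_equal(winograd_strassen(A, B), A @ B)
        else:
            A = rng.standard_normal((256, 256)).astype(dtype)
            B = rng.standard_normal((256, 256)).astype(dtype)
            assert np.allclose(winograd_strassen(A, B), A @ B,
                               rtol=1e-3, atol=1e-3)
    print("Strassen-Winograd matches NumPy matmul for all four dtypes")
```

With this cutoff, a 256×256 input recurses through two levels (256 → 128 → 64) before falling back to plain matmul, mirroring the abstract's observation that the recursion count is determined by the matrix size. Note that integer inputs give bit-exact results, while the floating-point checks need a tolerance: Strassen-type recursions are slightly less numerically stable than the standard algorithm.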