NumPy: how to deal with very slow training in Keras?


The story

I have a dataset of ECG signal recordings shaped (162 patients, 65635 samples). I computed the continuous wavelet transform (CWT) of these recordings, so the result is shaped (162 patients, 65635 samples, 80 coefficients), which is too large to fit in memory (about 40 MB per instance), so I saved each instance as a .npz matrix and used Keras generators for training. I use LSTM and convolution layers on a CPU, and the training is very slow.
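For reference, a minimal sketch of the preprocessing pipeline described above, assuming PyWavelets (pywt) was used for the CWT; the 'morl' wavelet, the scale range, and the file names are illustrative assumptions, not details from the question:

```python
import numpy as np
import pywt

# stand-in for the real ECG data: (162 patients, 65635 samples)
recordings = np.random.randn(162, 65635).astype(np.float32)
scales = np.arange(1, 81)  # 80 coefficients per sample

for i, signal in enumerate(recordings):
    coeffs, _ = pywt.cwt(signal, scales, "morl")  # shape: (80, 65635)
    # save one patient per file so only one instance is in memory at a time
    np.savez_compressed(f"cwt_{i:03d}.npz", coeffs=coeffs.T.astype(np.float32))
```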

Questions

  1. What are the best strategies to deal with this problem?

  2. How can I decrease the size of the coefficient matrix resulting from the CWT?


#1

Instead of loading the entire dataset into memory, how about streaming portions of the data on the fly, using something like an ImageDataGenerator? Also, note that training deep neural networks on a CPU takes a long time. If speed is a priority, use a cloud platform such as AWS that provides GPU computing power. A streaming sketch follows below.
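Since the data here is signals rather than images, a custom generator is a closer fit than ImageDataGenerator. A minimal sketch using tf.keras.utils.Sequence to load the saved .npz instances batch by batch; the file paths, the "coeffs" key, and the batch size are assumptions:

```python
import math
import numpy as np
from tensorflow import keras

class NpzSequence(keras.utils.Sequence):
    """Streams (batch, 65635, 80) arrays from per-patient .npz files."""

    def __init__(self, file_paths, labels, batch_size=4):
        super().__init__()
        self.file_paths = file_paths
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.file_paths) / self.batch_size)

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        # only this batch's files are loaded into memory
        x = np.stack([np.load(p)["coeffs"] for p in self.file_paths[lo:hi]])
        y = np.asarray(self.labels[lo:hi])
        return x, y

# usage: model.fit(NpzSequence(train_paths, train_labels), epochs=10)
```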

#2

I used a custom Keras generator to load the data in patches, and it turned out that the long sequence (65635 time steps fed to the LSTM) was the main cause of the slowdown.
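One way to attack that bottleneck, and to shrink what the LSTM sees per step (question 2), is to split each recording into short windows so the network unrolls over far fewer time steps. A hedged sketch; the window length and stride are illustrative choices, not values from the answer:

```python
import numpy as np

def to_windows(coeffs, window=512, stride=512):
    """Split (65635, 80) coefficients into (n_windows, window, 80) patches."""
    n = (coeffs.shape[0] - window) // stride + 1
    return np.stack([coeffs[i * stride:i * stride + window] for i in range(n)])

coeffs = np.load("cwt_000.npz")["coeffs"]  # (65635, 80)
patches = to_windows(coeffs)               # each patch is a much shorter LSTM input
```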