
Table 1 Final tuning parameters of the LSTM model

From: Hemodynamic functional connectivity optimization of frequency EEG microstates enables attention LSTM framework to classify distinct temporal cortical communications of different cognitive tasks

Hyperparameter                    Tuned value
Hidden layer size                 256
Batch size                        64
Number of training epochs         1000
Dropout rate                      Input layer: 0; 1st LSTM layer: 0.2; 2nd LSTM layer: 0.1; 3rd LSTM layer: 0.2
Recurrent depth (LSTM layers)     3
Learning rate                     0.001
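
For reference, the sketch below shows one way the Table 1 settings could map onto a stacked LSTM classifier in Keras/TensorFlow. This is a minimal illustration, not the authors' implementation: the input shape, number of classes, loss function, and the attention mechanism mentioned in the article title are not specified in the table and are treated here as placeholders.

# Minimal sketch of a 3-layer LSTM using the Table 1 hyperparameters.
# Assumptions (not from Table 1): Keras API, input shape, class count, softmax/cross-entropy head.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

TIME_STEPS = 100   # hypothetical sequence length
N_FEATURES = 32    # hypothetical feature dimension per time step
N_CLASSES = 4      # hypothetical number of cognitive-task classes

model = models.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FEATURES)),           # input layer, dropout 0
    layers.LSTM(256, return_sequences=True, dropout=0.2),   # 1st LSTM layer, dropout 0.2
    layers.LSTM(256, return_sequences=True, dropout=0.1),   # 2nd LSTM layer, dropout 0.1
    layers.LSTM(256, dropout=0.2),                          # 3rd LSTM layer, dropout 0.2
    layers.Dense(N_CLASSES, activation="softmax"),          # classification head (assumed)
])

model.compile(
    optimizer=optimizers.Adam(learning_rate=0.001),  # learning rate from Table 1
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training with the remaining Table 1 settings: batch size 64, 1000 epochs
# model.fit(x_train, y_train, batch_size=64, epochs=1000, validation_split=0.1)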