Fig. 4 | Brain Informatics

From: Nonlinear reconfiguration of network edges, topology and information content during an artificial learning task

Unravelling the manifold: low-dimensional projections of feed-forward neural network activity during MNIST training reveal category-specific untangling. A The first three principal components (eigenvalues 1–3: λ1/λ2/λ3) of the input nodes; B the percentage of variance explained by EV1 when the PCA was fit separately on data from each training epoch; C 3D scatter plot of items from the training set during three periods: in the early period (Epochs 1–10), the topological embedding of the different digits showed substantial overlap, reflected in the low between-category distance (i.e., the distance between the mean of each digit); in the middle period (Epochs 11–300), the embedding expanded in the low-dimensional space; and in the late period (Epochs 300+), the distance within each category dropped dramatically; D 3D scatter plot of between-category distance, within-category distance and training accuracy; note that maximal accuracy is associated with increases in both within- and between-category distance
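
The analysis summarised in the caption (per-epoch PCA, variance explained by the first component, and between- versus within-category distances in the low-dimensional embedding) can be sketched as follows. This is a minimal illustration, assuming the network activations for each epoch are available as an array of shape (n_items, n_units); names such as activations_by_epoch and labels are hypothetical and not taken from the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist


def untangling_metrics(activations, labels, n_components=3):
    """Project activations onto their leading principal components and
    compute the quantities shown in panels B-D (illustrative sketch)."""
    pca = PCA(n_components=n_components)           # PCA fit on this epoch's data (panel B)
    proj = pca.fit_transform(activations)          # items x 3 embedding (panel C)
    ev1_variance = pca.explained_variance_ratio_[0]

    classes = np.unique(labels)
    class_means = np.stack([proj[labels == c].mean(axis=0) for c in classes])

    # Between-category distance: mean pairwise distance between digit means
    between = pdist(class_means).mean()
    # Within-category distance: mean distance of items to their own digit mean
    within = np.mean([np.linalg.norm(proj[labels == c] - m, axis=1).mean()
                      for c, m in zip(classes, class_means)])
    return ev1_variance, between, within


# Hypothetical usage across training epochs:
# for epoch, acts in enumerate(activations_by_epoch):   # acts: (n_items, n_units)
#     ev1, d_between, d_within = untangling_metrics(acts, labels)
```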