Fig. 1 | Brain Informatics

From: Nonlinear reconfiguration of network edges, topology and information content during an artificial learning task

A feed-forward neural network exhibits three topologically distinct periods of reconfiguration while learning the MNIST dataset. A A large (60,000-item) corpus of hand-drawn digits from the MNIST database (28 × 28 pixel arrays with 256 intensity values per pixel) was vectorized and entered into a generic feed-forward neural network with two hidden layers, a 100-node layer (HL1) that received the vectorized 28 × 28 input and a 100-node layer (HL2) that received its input from HL1, and a 10-node output layer (argmax); B the edges connecting the input → HL1 (dark blue; α), HL1 → HL2 (orange; β) and HL2 → output (dark green; γ) were embedded within an asymmetric, weighted and signed connectivity matrix; C classification accuracy increased rapidly in the early stages of training and reached an asymptote after ~100 training epochs; D network modularity (Q) fell naturally into three separate periods: an early period (light blue; epochs 1–14) that was relatively static, a middle period (light green; epochs 15–700) with a rapid increase in Q, and a late period (light purple; epochs 800–10,000) in which Q diminished, albeit not to initial levels; E classification accuracy showed a non-linear relationship with Q: initial increases in accuracy were independent of Q (light blue), after which there was a positive linear relationship between accuracy and Q (Pearson's r = 0.981; light green), and finally a sustained drop in Q as accuracy saturated in the later periods of learning (light purple). For clarity, only a subset of the 100,000 epochs is presented here
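
The two-hidden-layer architecture described in panel A, and the block placement of its α, β and γ weight matrices into a single asymmetric, weighted and signed connectivity matrix (panel B), can be sketched as follows. This is a minimal illustration assuming PyTorch with ReLU activations; the caption does not specify the framework, activation functions or training hyperparameters used in the study.

```python
# Minimal sketch of the panel-A architecture and the panel-B connectivity
# matrix, assuming PyTorch; activations and other details are illustrative
# assumptions, not taken from the paper.
import numpy as np
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self):
        super().__init__()
        self.hl1 = nn.Linear(28 * 28, 100)  # input -> HL1 (alpha edges)
        self.hl2 = nn.Linear(100, 100)      # HL1 -> HL2 (beta edges)
        self.out = nn.Linear(100, 10)       # HL2 -> output (gamma edges)

    def forward(self, x):
        x = x.view(x.size(0), -1)           # vectorize the 28 x 28 pixel array
        x = torch.relu(self.hl1(x))
        x = torch.relu(self.hl2(x))
        return self.out(x)                  # argmax over the 10 logits gives the class

model = FeedForward()

# Embed the three weight matrices as off-diagonal blocks of one asymmetric,
# weighted and signed connectivity matrix over all 784 + 100 + 100 + 10 nodes,
# with entry (i, j) holding the weight of the directed edge i -> j.
sizes = [784, 100, 100, 10]
offsets = np.cumsum([0] + sizes)            # block boundaries: 0, 784, 884, 984, 994
W = np.zeros((offsets[-1], offsets[-1]))
for k, layer in enumerate([model.hl1, model.hl2, model.out]):
    w = layer.weight.detach().numpy().T     # nn.Linear stores (out, in); transpose to (in, out)
    W[offsets[k]:offsets[k + 1], offsets[k + 1]:offsets[k + 2]] = w
```

Modularity (Q, panels D and E) would then be estimated on a matrix of this form with a community-detection method suited to signed, weighted, directed graphs; that step is not shown here.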
