Fig. 3 | Brain Informatics
From: How Amdahl’s Law limits the performance of large artificial neural networks
Contributions \((1-\alpha _{\mathrm{eff}}^X)\) to \((1-\alpha _{\mathrm{eff}}^{\mathrm{total}})\) and maximum payload performance \(R_{\mathrm{Max}}\) of a hypothetical supercomputer (\(P=1\,\)Gflop/s @ \(1\,\)GHz) as a function of its nominal performance. The blue line refers to the right-hand scale (\(R_{\mathrm{Max}}\) values); all others (the \((1-\alpha _{\mathrm{eff}}^{X})\) contributions) refer to the left-hand scale. The top left panel illustrates the behavior measured with the HPL benchmark. The looping contribution becomes noticeable around 0.1 Eflops and causes payload performance to break down as the nominal performance approaches 1 Eflops. The black dot marks the HPL performance of the computer used in [4, 36]. The top right panel shows the behavior measured with the HPCG benchmark. In this case, the contribution of the application (thin brown line) is much higher, while the looping contribution (thin green line) is the same as above. Consequently, the achievable payload performance is lower and the performance breakdown is softer. The black dot marks the HPCG performance of the same computer. The bottom panel demonstrates what happens if the clock cycle is 5000 times longer: the achievable performance drops drastically, and the performance breakdown shifts strongly toward lower nominal performance values. The figure purely illustrates the concepts; the displayed numbers only approximate the real ones
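The saturation of payload performance described in the caption can be sketched with the classic Amdahl formula. This is a minimal illustration, not the paper's exact model: the function names and the sample value of the sequential fraction \((1-\alpha_{\mathrm{eff}})\) are assumptions chosen only to reproduce the qualitative breakdown behavior.

```python
def amdahl_speedup(n_cores: float, alpha: float) -> float:
    """Classic Amdahl speedup for parallelizable fraction `alpha` on `n_cores`."""
    return 1.0 / ((1.0 - alpha) + alpha / n_cores)

def payload_performance(n_cores: float, p_flops: float, alpha: float) -> float:
    """Payload performance R_Max in flop/s; nominal performance is n_cores * p_flops."""
    return p_flops * amdahl_speedup(n_cores, alpha)

# Illustrative values: single-core performance P = 1 Gflop/s, and an assumed
# effective sequential fraction (1 - alpha_eff) of 1e-7.
P = 1e9
alpha = 1.0 - 1e-7

# At a nominal performance of 1 Eflop/s (1e9 cores), the payload performance
# saturates far below the nominal value, mirroring the breakdown in the figure.
nominal = 1e9 * P                                  # 1 Eflop/s nominal
payload = payload_performance(1e9, P, alpha)       # only ~1% of nominal
print(f"nominal: {nominal:.2e} flop/s, payload: {payload:.2e} flop/s")
```

Even with a sequential fraction as small as \(10^{-7}\), the speedup is capped near \(10^{7}\), so adding cores past that point raises nominal performance without raising payload performance.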
