Visualizing Amdahl’s Law

Consider a single-core processor machine. A set of instructions (a program) takes 100 minutes to execute completely on this machine. A subset (say 80%) of these instructions can be parallelized (concurrent instructions) and run simultaneously on multiple machines. Increasing the number of machines can only improve the execution time of this subset of instructions; the remaining (20%) unparallelizable code (sequential instructions) still has to run on a single-core processor machine. Thus, no matter how many more processors or machines (resources) are added, the total execution time of the program can never be less than the time the sequential instructions take to execute (20 minutes). Amdahl’s Law encapsulates this concept as a mathematical formula that estimates the theoretical speedup of a program when the resources of the machine it runs on are increased.

If a fraction f of a program is parallelizable and added resources speed up that fraction by a factor s, only f benefits; the remaining 1 - f is unaffected. The theoretical speedup S of the entire program is therefore

    \begin{equation*}    S=\frac{1}{(1 - f) + \frac{f}{s}} \end{equation*}
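The formula is straightforward to express in code. Here is a minimal Python sketch (the function name `amdahl_speedup` is my own), applied to the 80%-parallelizable example from the opening paragraph:

```python
def amdahl_speedup(f, s):
    """Theoretical overall speedup when a fraction f of the program
    is sped up by a factor s (Amdahl's Law)."""
    return 1.0 / ((1.0 - f) + f / s)

# 80% of the program parallelized across 4 machines:
print(amdahl_speedup(0.8, 4))  # 2.5
```

Even with the parallel portion running 4 times faster, the overall speedup is only 2.5, because the 20% sequential portion runs at its original speed.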

Consider a program that requires 16 units of time to execute on a single-core processor machine. The 4 units of red are sequential instructions and the remaining 12 units of green are parallelizable concurrent instructions.

    \begin{equation*} f = \frac{12 \text{ green}}{4 \text{ red} + 12 \text{ green}} = \frac{3}{4} = 0.75 \end{equation*}

If another processor is added, the concurrent instructions can be split across these two processors and executed simultaneously. The light green units indicate the instructions being run in parallel with their green counterpart on different processors.

The same program now requires only 4 red + 6 green = 10 units of time, which gives a speedup of

    \begin{equation*} \frac{16}{10} = 1.6 \end{equation*}

This is verified by the formula

    \begin{equation*}    S=\frac{1}{(1 - 0.75) + \frac{0.75}{2}} = 1.6 \end{equation*}
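The worked example above can be checked both ways in a few lines of Python, measuring the speedup directly from the unit counts and comparing it against the formula:

```python
# Worked example from the text: 4 sequential (red) units
# plus 12 parallelizable (green) units.
sequential, parallel = 4, 12
f = parallel / (sequential + parallel)           # 0.75

processors = 2
time_after = sequential + parallel / processors  # 4 + 6 = 10
speedup = (sequential + parallel) / time_after   # 16 / 10 = 1.6

formula = 1 / ((1 - f) + f / processors)
print(speedup, formula)  # 1.6 1.6
```

The direct measurement and the formula agree, as expected.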

Adding more processors gives the following results:

For the above configuration (12 processors, so the 12 concurrent units take a single unit of time), the execution time is reduced to 4 red + 1 green = 5 units of time. In the extreme case of infinitely many processors running the concurrent instructions, the execution time approaches, but can never drop below, the 4 (red) units of time. Thus, the amount of sequential instructions in a program limits the improvement in execution time achievable by adding more resources.

For a fixed f, plotting the execution time of the program against the number of processors yields a curve like the one below. The total execution time is asymptotically bounded below by the sequential execution time.
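The shape of that curve can be sketched numerically. Using the same 4 + 12 unit program, the loop below prints the total execution time for increasing processor counts, showing it flatten out toward the 4-unit sequential floor:

```python
# Execution time of the 4-sequential + 12-parallelizable unit program
# as the processor count grows; the total approaches but never
# drops below the 4 sequential units.
for p in (1, 2, 4, 12, 100, 10_000):
    print(p, 4 + 12 / p)
```

Going from 1 to 2 processors saves 6 units, but going from 100 to 10,000 saves barely a tenth of a unit: past a point, extra resources buy almost nothing.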
