The UNIX load average reports 3 numbers, averaged over 1, 5, and 15 minute time intervals. It’s supposed to indicate how busy a UNIX machine is. The global load average is an exponentially decaying average of the number of runnable/uninterruptible tasks.

I’d like to implement this kind of exponential-decay algorithm in my project for a slightly different purpose, but also to compute averages. So I did some research into how exactly this load average is computed in UNIX.

Nowadays the implementation is quite involved, for various reasons. Since I’m mostly interested in the exponential-decay part of computing the average, I found a simpler description of the original implementation:

```c
#define FSHIFT    11           /* nr of bits of precision */
#define FIXED_1   (1<<FSHIFT)  /* 1.0 as fixed-point */
#define LOAD_FREQ (5*HZ)       /* 5 sec intervals */
#define EXP_1     1884         /* 1/exp(5sec/1min) as fixed-point */
#define EXP_5     2014         /* 1/exp(5sec/5min) */
#define EXP_15    2037         /* 1/exp(5sec/15min) */

#define CALC_LOAD(load,exp,n) \
    load *= exp;              \
    load += n*(FIXED_1-exp);  \
    load >>= FSHIFT;
```
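As far as I can tell, the magic constants are just `exp(-interval/window)` encoded in 11-bit fixed-point. This little Python check (my own, not from the kernel) reproduces them:

```python
import math

FSHIFT = 11
FIXED_1 = 1 << FSHIFT  # 2048, i.e. 1.0 in fixed-point

# One decay constant per averaging window: 1, 5, and 15 minutes,
# sampled every 5 seconds.
for name, window in [("EXP_1", 60), ("EXP_5", 300), ("EXP_15", 900)]:
    print(name, round(FIXED_1 * math.exp(-5 / window)))
# prints: EXP_1 1884, EXP_5 2014, EXP_15 2037
```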

From http://perfdynamics.blogspot.com/2014/06/load-average-in-freebsd.html, where they say:

> The CALC_LOAD macro is updated (internally) every 5 seconds and the m-index in eqn.(1) refers to the weight associated with each of the 1, 5, or 15 minute averaging windows.

What I don’t understand: if there is a loop that on each iteration, every 5 seconds, gets `n`, the number of tasks, then how do the exponential decay constants here help to take the load average over 1 minute, 5 minutes, or 15 minutes?
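For what it’s worth, here is how I currently understand the update loop, as a small Python sketch. The arithmetic is taken from the macro above; feeding a constant `n` of 1 runnable task is my own assumption for illustration:

```python
FSHIFT = 11
FIXED_1 = 1 << FSHIFT  # 1.0 in fixed-point
EXP_1 = 1884           # decay constant for the 1-minute window

def calc_load(load, exp, n):
    # Same three steps as the CALC_LOAD macro.
    load *= exp
    load += n * (FIXED_1 - exp)
    return load >> FSHIFT

# Simulate 2 minutes (24 ticks of 5 seconds) with one runnable task.
# n is also in fixed-point, i.e. 1 task == FIXED_1.
load = 0
for _ in range(24):
    load = calc_load(load, EXP_1, 1 * FIXED_1)

print(load / FIXED_1)  # creeps toward 1.0 but hasn't reached it yet
```

If I run this, the 1-minute average slowly approaches 1.0 rather than jumping there, which I assume is the decay at work; what I can’t see is how the choice of constant makes the window span exactly 1, 5, or 15 minutes.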

A simple example that could be calculated with pen & paper would be really useful.