DOCUMENTATION

To compute the largest positive value of an integer, the routines first find the greatest power of 2 that does not cause a change of sign (or wrap to a zero value in an unsigned field).  They then fill all of the lower-order bits with 1s by adding an amount that is one less than the value of the high-order bit.
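The technique above can be sketched as follows, assuming a two's-complement int and the usual wraparound conversion from unsigned to signed; the doubling is done in unsigned arithmetic, where overflow wraps and is well defined, so the sign-change test is safe:

```c
#include <assert.h>
#include <limits.h>

/* Find the largest positive int by the doubling technique described
   above.  The doubling happens in unsigned arithmetic; the "change of
   sign" is detected by viewing the doubled value as a signed int. */
static int largest_positive_int(void)
{
    unsigned int p = 1;

    /* Keep doubling while the doubled value, seen as signed, is still
       positive.  p ends up as the greatest power of 2 that does not
       cause a change of sign (the high-order value bit). */
    while ((int)(p << 1) > 0)
        p <<= 1;

    /* Fill all of the lower-order bits with 1s by adding an amount
       one less than the value of the high-order bit. */
    return (int)(p + (p - 1));
}
```

On a 32-bit int the loop stops with p equal to 2 to the 30th power, and the sum p + (p - 1) is 0x7FFFFFFF, i.e. INT_MAX.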

To compute the largest negative value of an integer, the routines instead find the negative power of 2 of greatest magnitude that does not cause a change of sign; on a two's-complement machine that value is itself the most negative integer, so no lower-order bits need to be filled in.
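A minimal sketch of the negative probe, under the same two's-complement assumption; a single bit is walked up into the sign position using well-defined unsigned shifts:

```c
#include <assert.h>
#include <limits.h>

/* Find the most negative int.  On a two's-complement machine the most
   negative value is the sign bit alone, i.e. the negative power of 2
   of greatest magnitude, so no bit-filling step is needed. */
static int largest_negative_int(void)
{
    unsigned int p = 1;

    /* Doubling in unsigned arithmetic is well defined; stop as soon
       as the bit lands in the sign position. */
    while ((int)p > 0)
        p <<= 1;

    return (int)p;
}
```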

To compute the largest and smallest positive values of a floating-point number, note that the significand and exponent are separate binary fields, and that the exponent represents a power of 2, not of 10.  The probing therefore works on the exponent by repeated doubling or halving, and, for the largest value, fills in the significand much as the integer routines fill in the lower-order bits.
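A sketch of both floating-point probes, assuming IEEE 754 binary32 float arithmetic; the volatile stores force each intermediate result to be rounded to float, guarding against excess-precision evaluation:

```c
#include <assert.h>
#include <float.h>
#include <math.h>

/* Smallest positive float: drive the exponent down by repeated
   halving until the value underflows to zero.  On an IEEE 754 machine
   this lands on the smallest subnormal, 2^-149 for binary32. */
static float smallest_positive_float(void)
{
    volatile float x = 1.0f, half;

    for (;;) {
        half = x / 2.0f;        /* stored, hence rounded to float */
        if (half == 0.0f)
            break;
        x = half;
    }
    return x;
}

/* Largest positive float: push the exponent to its limit, then fill
   in the significand bit by bit -- the floating-point analogue of
   filling the lower-order bits of an integer with 1s. */
static float largest_positive_float(void)
{
    volatile float x = 1.0f, bit, next;

    /* Double up to the greatest power of 2 (2^127 for binary32). */
    for (;;) {
        next = x * 2.0f;
        if (isinf(next))
            break;
        x = next;
    }

    /* Add descending powers of 2 while they still change the value
       without overflowing to infinity. */
    for (bit = x / 2.0f; bit > 0.0f; bit /= 2.0f) {
        next = x + bit;
        if (next == x || isinf(next))
            break;
        x = next;
    }
    return x;
}
```

The second loop stops once the remaining power of 2 is below one unit in the last place of the significand, leaving x at FLT_MAX; the first routine returns the smallest subnormal rather than FLT_MIN, the smallest normalized value.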