The idea of computing by recursion is as old as counting itself. It occurred in primitive form in the efforts of the Babylonians as early as 2000 B.C. to extract roots and in more explicit form around 450 B.C. in the Pythagoreans' study of figurate numbers, since in modern notation the triangular numbers satisfy the difference equation

t(n) = t(n − 1) + n,

the square numbers the equation

s(n) = s(n − 1) + 2n − 1,

and so forth. The Pythagoreans also used a system of difference equations

x(n) = x(n − 1) + 2y(n − 1),    y(n) = x(n − 1) + y(n − 1)

to generate large solutions of Pell's equation, x² − 2y² = 1, and thereby approximations of √2. In his attempts to compute the circumference of a circle,
Archimedes (about 250 B.C.) employed equations of the form

P(2n) = 2p(n)P(n) / (p(n) + P(n)),    p(2n) = √(p(n)P(2n))

to compute the perimeters P(n) and p(n) of the circumscribed polygon of n sides and the inscribed polygon of n sides, respectively. Other familiar ancient discoveries about recurrence include the Euclidean algorithm and Zeno's paradox. Euclid also studied geometric series, although the general form of the sum was not obtained until around 1593 by Vieta. About 1202, Fibonacci formulated his famous rabbit problem that led to the Fibonacci sequence

1, 1, 2, 3, 5, 8, 13, . . .
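The ancient recurrences above can be run directly on a machine. The following sketch (plain Python; the function names and iteration counts are mine, not from the text) iterates the Pythagorean side-and-diagonal system toward √2 and Archimedes' doubling rules toward π, seeding the latter with the exact hexagon perimeters for a circle of diameter 1.

```python
import math

def pell_approximations(steps, x=1, y=1):
    """Iterate x(n) = x(n-1) + 2*y(n-1), y(n) = x(n-1) + y(n-1).

    Successive pairs satisfy x**2 - 2*y**2 = +/-1, so the ratios x/y
    converge to sqrt(2).  Returns the list of ratios produced.
    """
    ratios = []
    for _ in range(steps):
        x, y = x + 2 * y, x + y
        ratios.append(x / y)
    return ratios

def archimedes_pi(doublings):
    """Archimedes' scheme for a circle of diameter 1.

    P(2n) = 2*p(n)*P(n) / (p(n) + P(n))   (harmonic mean),
    p(2n) = sqrt(p(n) * P(2n))            (geometric mean),
    where P(n), p(n) are the perimeters of the circumscribed and
    inscribed regular n-gons, so p(n) < pi < P(n) at every step.
    """
    P = 2 * math.sqrt(3)  # circumscribed hexagon: 6 / sqrt(3)
    p = 3.0               # inscribed hexagon
    for _ in range(doublings):
        P = 2 * p * P / (p + P)
        p = math.sqrt(p * P)
    return p, P

if __name__ == "__main__":
    print(pell_approximations(5))   # 3/2, 7/5, 17/12, 41/29, 99/70
    # Four doublings reach the 96-gon, the polygon Archimedes
    # himself used to bracket pi between 3 10/71 and 3 1/7.
    print(archimedes_pi(4))
```

Both iterations illustrate why these recurrences were practical by hand: each step needs only a few multiplications (plus one square root in Archimedes' case), and the √2 ratios gain roughly a digit of accuracy per step.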