Fibonacci growth rates


“From this equation, it is easy to see why the Fibonacci numbers grow at a rate of roughly \left(\frac{1 + \sqrt{5}}{2}\right)^{t}.”

Wait, what? I squinted at the textbook again. You can find a growth rate for the fibonacci sequence? Ever since elementary school, all fibonacci numbers had ever meant to me was F(t) = F(t - 1) + F(t - 2) for natural number t. To calculate the next one, add the previous two together. But no. Apparently – obviously – there was more to the elegant but opaque definition.
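For concreteness, here's that rule as a few lines of Python. This is just my own sketch, using the convention F(0) = F(1) = 1 so that it lines up with the starting vector we'll pick later.

```python
def fib(t):
    # F(t) = F(t - 1) + F(t - 2), starting from F(0) = F(1) = 1
    a, b = 1, 1
    for _ in range(t):
        a, b = b, a + b
    return a

print([fib(t) for t in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```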

Let’s begin. The first issue to tackle is that calculating a fibonacci number requires two inputs – the previous two fibonacci numbers. For convenience, we’d like our restated function to take just one input. The solution is to take an input vector: its entries are F(t - 1) and F(t - 2), the two numbers we need to calculate F(t), and the output’s entries are F(t) and F(t - 1).

Now we’re on our way to representing the “rule” that gets us from one fibonacci number to the next with a matrix-vector equation. To get from \begin{bmatrix} F(t - 1) \\ F(t - 2) \end{bmatrix} to \begin{bmatrix} F(t) \\ F(t - 1) \end{bmatrix}, we copy F(t - 1) from the top of the first vector to the bottom of the second vector, and we sum F(t - 1) and F(t - 2) from the first vector to become F(t) in the second vector. Ooh! That sounds like it could be accomplished by…

\begin{bmatrix} F(t) \\ F(t - 1) \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} F(t - 1) \\ F(t - 2) \end{bmatrix}

Our rule fits exactly with the constant matrix above: the dot product of its first row with the input vector gives us the next number in the output’s first entry, and the dot product of its second row with the input vector simply copies the last number into the second output entry. (A dot product is the sum of the products of the corresponding entries of two vectors: the first times the first, plus the second times the second, and so on.)
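If you want to check that arithmetic yourself, here's a quick sketch with numpy; the particular input numbers are just an example, since any pair of consecutive fibonacci numbers works.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])

v = np.array([5, 3])   # [F(t - 1), F(t - 2)] = [F(4), F(3)]
print(A @ v)           # [8 5], i.e. [F(5), F(4)]
```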

We can call this constant matrix A, so that one step in the sequence is V_t = A * V_{t - 1}, where V_t is the tth fibonacci vector. Non-recursively, then, V_t = A * (A * ... * (A * V_0)) = A^{t} * V_{0}, where V_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
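Here's a sketch of that non-recursive version, again with numpy; matrix_power does the repeated multiplication for us. (numpy's fixed-size integers will eventually overflow for large t, so this is only meant for small sanity checks.)

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])
v0 = np.array([1, 1])                   # V_0

t = 10
vt = np.linalg.matrix_power(A, t) @ v0  # V_t = A^t * V_0
print(vt)                               # [144  89] = [F(11), F(10)]
```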

What we’d like now is to somehow transform A into a diagonal matrix. Diagonal matrices only have non-zero entries on the diagonal running from top left to bottom right, and are easy to analyze because of that. Imagine if our function were instead V_t = B^{t} * V_{0}, where B = \begin{bmatrix} b_1 & 0 \\ 0 & b_2 \end{bmatrix} and b_1 > |b_2|. Then each entry of V_t can be written as b_1^{t} * \alpha + b_2^{t} * \beta for some scalars \alpha and \beta. It doesn’t really matter exactly what they are, because at this point we can focus on what this means for the growth rate. Since b_1 > |b_2|, the b_1^{t} term dominates as t increases, and thus the growth rate is pretty close to b_1^{t}.
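A quick numerical sanity check of that claim, with made-up diagonal entries b_1 = 1.6 and b_2 = 0.5 (nothing to do with fibonacci yet):

```python
import numpy as np

b1, b2 = 1.6, 0.5                 # made-up entries with b1 > |b2|
B = np.diag([b1, b2])
v0 = np.array([1.0, 1.0])

for t in (5, 10, 20):
    vt = np.linalg.matrix_power(B, t) @ v0
    # vt is [b1**t, b2**t]; the b2 part quickly becomes negligible,
    # so any combination of the two entries grows like b1**t
    print(t, vt, vt.sum() / b1 ** t)  # the ratio heads toward 1
```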

So how do we make A diagonal to get this b_1 term? The official method is called diagonalization, and it involves finding an invertible matrix S so that the matrix product S^{-1}AS is a diagonal matrix.

Call that diagonal matrix \Lambda = S^{-1}AS. Now, V_t = \Lambda^{t} * V_0 is definitely not the same as V_t = A^{t} * V_0. However, \Lambda is useful because its powers telescope: \Lambda^{t} = S^{-1}AS * S^{-1}AS * ... = S^{-1}A * A * ... * A * S = S^{-1}A^{t}S, since S * S^{-1} = I. The identity matrix multiplied with another matrix is just the other matrix; each S * S^{-1} in the middle is effectively canceled out. Flipping that around gives A^{t} = S\Lambda^{t}S^{-1}, and substituting it into our matrix equation function gives V_t = S\Lambda^{t}S^{-1} * V_{0}. Look at that: the growth of V_t is driven by \Lambda^{t}, the powers of a diagonal matrix. We are close!

Finding S was outside the scope of our course, so the textbook just gives us S = \begin{bmatrix} \frac{1 + \sqrt{5}}{2} & \frac{1 - \sqrt{5}}{2} \\ 1 & 1 \end{bmatrix} and \Lambda = S^{-1}AS = \begin{bmatrix} \frac{1 + \sqrt{5}}{2} & 0 \\ 0 & \frac{1 - \sqrt{5}}{2} \end{bmatrix}. Writing V_t = S\Lambda^{t}S^{-1} * V_0 out, each entry comes to \left(\frac{1 + \sqrt{5}}{2}\right)^{t} * \alpha + \left(\frac{1 - \sqrt{5}}{2}\right)^{t} * \beta for some scalars \alpha and \beta. Since \left|\frac{1 - \sqrt{5}}{2}\right| < 1, that second term fades away as t grows. So the coefficient we care about is \frac{1 + \sqrt{5}}{2}, and our growth rate is \left(\frac{1 + \sqrt{5}}{2}\right)^{t}.
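You don't have to take the textbook's word for it: numpy will find this decomposition for you. (numpy normalizes its eigenvectors, so its S won't match the textbook's entries exactly, but the diagonal matrix and the reconstruction of A come out the same.)

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])

eigenvalues, S = np.linalg.eig(A)   # columns of S are eigenvectors of A
print(eigenvalues)                  # 1.618... and -0.618..., i.e. (1 ± sqrt(5)) / 2

Lam = np.diag(eigenvalues)
print(np.linalg.inv(S) @ A @ S)     # ≈ Lam, the diagonal matrix
print(S @ Lam @ np.linalg.inv(S))   # ≈ A again
```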

Amazing!!! We can see this in action with the matplotlib.pyplot module in Python without too much trouble. I graphed the exact fibonacci numbers alongside F(k) = \left(\frac{1 + \sqrt{5}}{2}\right)^{k - 1} to compare the growth rates of the two functions. (Source code)
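The actual script is linked above rather than pasted here, but a minimal sketch of this kind of comparison plot could look like the following; the function and variable names are mine, not from the linked source.

```python
import matplotlib.pyplot as plt

PHI = (1 + 5 ** 0.5) / 2

def fib_list(n):
    # the first n fibonacci numbers, starting 1, 1 (Python ints never overflow)
    fibs = [1, 1]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

ks = range(450, 476)
fibs = fib_list(476)

plt.plot(ks, [fibs[k - 1] for k in ks], label="exact fibonacci")
plt.plot(ks, [PHI ** (k - 1) for k in ks], label="growth rate approximation")
plt.legend()
plt.show()
```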

Here are the 450th to 475th fibonacci numbers and their counterparts in the other function. Just to emphasize the scale of these numbers: that last exact fib number is above .8E99, which is .8 x 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000. NOTE: “approximations” means “growth rate approximation functions” in these graph legends, not approximations for the fibonacci numbers themselves.

[Figure firsttwo: exact fibonacci numbers vs. the growth rate approximation, indices 450 to 475]

It looks pretty much the same at any other range of values I looked at; here’s the 105th through 135th.

[Figure firsttwoearly: the same comparison for indices 105 to 135]

I also got curious and jiggled the numbers a little. Even slightly altering the estimated growth rate makes a visible difference, it seems. I tried undershooting with \left(\frac{1 + \sqrt{4.999}}{2}\right)^{k - 1}, \left(\frac{.999 + \sqrt{5}}{2}\right)^{k - 1}, and \left(\frac{1 + \sqrt{5}}{2.0004}\right)^{k - 1}. All close, but clearly not as good as the original one we derived.

[Figure undershoot: exact fibonacci numbers vs. the three undershooting approximations]

Same went for overshooting with \left(\frac{1.005 + \sqrt{5}}{2}\right)^{k - 1} and \left(\frac{1 + \sqrt{5}}{1.998}\right)^{k - 1}.

[Figure overshoot: exact fibonacci numbers vs. the two overshooting approximations]

The exception was \left(\frac{1 + \sqrt{5.005}}{2}\right)^{k - 1}, which actually seemed to be way better than our first approximation! Whaaaat?

[Figure all3mearly: exact fibonacci numbers, the original approximation, and the \sqrt{5.005} version]

However, it doesn’t hold up so well when we look at different swathes of the sequence. Here’s 100 to 110 versus 600 to 610.

[Figures all3mearly and all3m: the same three functions over indices 100 to 110 and over 600 to 610]

The \sqrt{5.005} version’s growth rate is itself growing (the second derivative!) faster than that of the fibonacci sequence or of our approximation; it gets noticeably steeper over time while our two original functions do not.
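You can put rough numbers on that compounding. The \sqrt{5.005} version’s base is only about 0.035% bigger than \frac{1 + \sqrt{5}}{2}, but that tiny ratio also gets raised to the kth power:

```python
PHI = (1 + 5 ** 0.5) / 2
JIGGLED = (1 + 5.005 ** 0.5) / 2

print(JIGGLED / PHI)                # ≈ 1.00035, barely different
for k in (100, 600):
    print(k, (JIGGLED / PHI) ** k)  # ≈ 1.035 at k = 100, ≈ 1.23 at k = 600
```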

Now. We’ve been talking about getting a growth rate without finding a closed form for the fibonacci sequence, but it turns out we can do that too using diagonalization. I’m leaving a bunch of links going through this process here for now, until I get it and can write up my own explanation. Essentially,

“When a sequence evolves over time according to the rules of a first order system, the eigenvalues of the matrix of that system determine the long term behavior of the series.”

Remember when I said each entry of V_t can be written as b_1^{t} * \alpha + b_2^{t} * \beta for some scalars \alpha and \beta? b_1 and b_2 are those eigenvalues, and we just need their corresponding eigenvectors and the coefficients \alpha and \beta to get an exact formula for this series.
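Here's roughly what that looks like numerically, leaning on numpy's eig instead of doing the algebra by hand; the linked write-ups presumably work the coefficients out symbolically, but the idea is the same.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])
v0 = np.array([1.0, 1.0])           # V_0

eigenvalues, S = np.linalg.eig(A)   # b1, b2 and their eigenvectors (columns x1, x2)
coeffs = np.linalg.solve(S, v0)     # alpha, beta such that V_0 = alpha*x1 + beta*x2

def v_t(t):
    # V_t = alpha * b1**t * x1 + beta * b2**t * x2
    return S @ (coeffs * eigenvalues ** t)

print(v_t(10))                      # ≈ [144.  89.], matching A^10 * V_0
```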

You can also get to it with calculus. Or this.

There you have it! I’ve been enjoying my math classes a lot more ever since I a) learned how to program and b) found ways to apply what I’ve learned in class, or at least look up applications online.

I haven’t always felt that way. Even when I was doing pretty well in AP Calculus BC in high school or Discrete Math at college, math didn’t sit all that well with me. After every test, I’d wonder: “When will I ever use this? Why will it fall upon me, specifically, to use what I’m learning right now?” It didn’t help that I felt useless doing actual math outside the classroom. The people who practiced it were intimidating, the discipline itself was intimidating, and it wasn’t much fun until I intentionally looked for ways to make it fun. But this is a start, at least, and I’ll keep at it for the sake of those beautiful “aha!” moments.