Saturday, March 27, 2021

Famous First Words #15: The Fast Fourier Transform -- Cooley and Tukey

Famous First Words is a recurring LabKitty feature wherein we have a look at the opening line of a legendary scientific article.

LabKitty has gushed about the FFT to anybody who will listen and to many who will not. Second only to eigenvalues in sphere, reach, and influence, the FFT remade the world. It was at once better mousetrap and longer-lasting lightbulb. Tastes great and less filling. Potato stuffing. The FFT is so celebration-worthy it has even appeared on a t-shirt. And leggings!

Although it had been kicking around in various forms since Gauss, the FFT didn't really get on the wish list until nerds got computers. I imagine Ada Lovelace whiled away many a frigid Nottinghamshire night dreaming of putting fewer cranks into her Babbage Engine to whack out a power spectrum. But the FFT we know and love was only announced in 1965. In perhaps the one instance of WW II failing to invent everything, the FFT slept right through the war years until necessity bumped it from slumber. Alas, the inspiration was still kill-each-other, because of course it was. As WillyPete informs us: "...Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union."

Kee-ripes. Can't humans ever invent something inspired by -- I dunno -- Bob Ross? Or sporks?

This is why we can't have nice things. Or maybe why we can.

But I digress.



Here's the opening of the paper:
An efficient method for the calculation of the interactions of a 2^m factorial experiment was introduced by Yates and is widely known by his name.
Talk about burying the lede. Because only about halfway in do C&T show us the money:
If we are able to choose N to be highly composite, we may make very real gains.
"Composite" is is mathspeak for being the product of two or more factors greater than one (a power of 2, zum Beispiel). And "very real gains" turned out to be a reduction in operations required to compute the transform from about N^2 to about N ln2(N).

Like the road not taken, that reduction has made all the difference. However, it's hard to get a feel for what it means until you see it in action. So I made a JavaScript demo:

Pick an N with the slider, and press "Go." The top progress bar completes in N log2(N) steps and the bottom in N^2 steps.

Top fast. Bottom slow. N'est-ce pas?

Yes, the demo's maximum N of 100 presents no problem to simply power through -- after all, my animation loop simulates ten operations per second, and vanilla off-the-shelf hardware whirls along in the GFLOP range these days. But Big Data (tm) is currently demanding FFTs on tens of billions of points. It doesn't matter how fast your hardware is; someone is already out there with an application that will brick it. As such, there will always be a market for clever gronkulation. Indeed, hot mathozoids are at this very moment investigating whether an algorithm faster than N log2(N) is possible. Who knows what form such black magic would assume.

Sanity check: For an N of ten billion, the N^2 progress bar above would take about 300 billion years to finish, aka twenty-some times longer than the universe has existed. The N log2(N) bar would need only about a thousand years -- had we started it during the Battle of Hastings, it would be wrapping up about now.
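
If you want to check my arithmetic, here's the back-of-the-envelope version, using the demo's (simulated, pokey) rate of ten operations per second:

```python
from math import log2

N = 10_000_000_000        # ten billion points
rate = 10                 # ops per second, matching the demo's animation loop
sec_per_year = 3.15e7     # seconds in a year, roughly

print(N**2 / rate / sec_per_year)         # ~3e11 years for the N^2 bar
print(N * log2(N) / rate / sec_per_year)  # ~1e3 years for the N log2(N) bar
```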

With a cheap fast transform available, people started sticking Fourier transforms in everything from clock radios to dessert toppings (and in nuclear boom monitors, presumably). The effect was transforming. As Emerson put it, speed up any nontrivial algorithm by a factor of a million or so and the world will beat a path towards finding useful applications for it. [Note added in proof: um, that might have been Press, not Emerson.] You can pretty much add an FFT or two or several to any program and it will run no slower and potentially much faster; the bottleneck will always be something else in your code. It's like free money.
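
Incidentally, if you're wondering when the N^2-to-N log2(N) speedup crosses that "factor of a million or so" (my arithmetic below, not Press's), the answer is: sooner than you might think.

```python
from math import log2

# Speedup of N log2(N) over N^2 is N / log2(N). Find where it hits a million.
N = 2
while N / log2(N) < 1e6:
    N *= 2
print(N)  # 33554432 -- a million-fold win by around 30 million points
```

And Big Data's tens of billions of points push the factor past a hundred million.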

"An Algorithm for the Machine Calculation of Complex Fourier Series." COOLEY, J. W., AND J. W. TUKEY. Mathematics of Computation (1965), Vol. 19, No. 90, pp. 297-301.
