Synopsis: Titled to be maximally search engine hostile, today's post is a whimsical look at computing eigenvalues, and computing eigenvalues in Javascript, and gives you the code for free. Eigenvalues!
LabKitty loves eigenvalues. That's fairly well established in both the Internet and squishy realms. Odd, then, that I suck at computing eigenvalues. Sure, I can work a 2x2 like Sunday morning, but beyond that I am hopeless. Finding roots of a polynomial (read: characteristic equation) is my bête noire. Quintics are of course right out. I'm told there exist formulas for quartics and cubics, but my brain has rejected that proposal like a failed organ transplant. Évariste Galois died for my sins.
Thus I turn to gronkulation, only to immediately confront a different breed of horror. My off-the-shelf computing skills are able to extract precisely one eigenvalue, and it better be real and dominant and algebraic multiplicity one. Yes, the famous Rayleigh algorithm (whom I am always confusing with the guy in the can. Prince Albert. That's a different guy, correct?). Literally beating a dead horse until it coughs up the goodies, and I cannot lie: It has a kind of appeal.
All this rather takes the fun out of eigenvalues. Bored at the party, waiting at the checkout, working double time in the seduction line, thoughts turn to eigenthoughts. But confined to a 2x2 prison I am, which come to think of it describes my experience with off-campus housing. Yes, I could model stage-structured growth or that quantum system, attractive stranger, if you would permit me to return to my office and release the Kraken ("Kraken" === R2016b running on a Darwin multicore). But if you are expecting results here and now, you, my friend, are living a lie.
Which brings us to Press.
O, how a single turn of phrase can wreck one's psyche nigh irrevocably. Bless your heart. It's not you it's me. Do you know how fast you were going? (Of course I know how fast I was going. What am I? Amish?)
Read Press* and you will encounter this quip in the chapter on eigenblerg:
You have probably gathered by now that the solution of eigensystems is a fairly complicated business. It is. It is one of the few subjects covered in this book for which we do not recommend that you avoid canned routines.

This passage damaged me for decades. If Press defers to the elder gods in matters of obtaining eigenvalues, what chance do I possibly have?? Such computation will forever be privy only to the LINPACK priests. Accept your fate, mortal. Learn to enjoy losing. Choose a career path that does not require the computation of eigenvalues. "Autotroph" for example.
Yet, there is a caveat that Press, in their unholy snark, here omit. Paraphrased: An algorithm exists for computing eigenvalues that's not terribly complicated if you're willing to accept occasional defeat and, more importantly, a larger dollop of computational inefficiency.
I suspect Press intended to include such a caveat but Cambridge legal put the kibosh on it. Heavens! What if J. Random Reader builds an eigenproduct using our text only to have a client discover a pathological case one day too late? The next thing you know the o-rings bypass or the tailscrew fails or the lifting body pogos. In these litigious climes, the only thing Cambridge will be publishing once the dust settles is coloring books for adults. (Seriously? Coloring books for adults? What's next? Grammy awards for "rappers"?)
But there is indeed an algorithm for computing eigenvalues that's not terribly complicated if you're willing to accept occasional defeat and, more importantly, a larger dollop of computational inefficiency. It's also not something you can do by hand, Chisenbop lords notwithstanding. Tut mir leid. For better or worse, today we remain inside the gronkulatory domain.
LabKitty will tell you this algorithm because LabKitty loves you. Also, LabKitty has insufficient personal assets to make a lawsuit profitable.
As you may have guessed, this is the famous QR extraction. Invariably, textbooks are so eager to get into the gory details of QR extraction that they never take a step back and describe the less-gory big picture. And grokking the big picture is, I claim, more helpful than poring over a brick of ported FORTRAN-IV. Or at least a prerequisite. Think of today's post as a stepping stone in your blossoming eigencraft. A kind of eigenpuberty, as it were.
The big picture is based on a small collection of matrix factozoids. In no particular order:
i) A similarity transformation does not alter the eigenvalues of a matrix (if you need a refresher on similarity transformations which can only be described as "LabKittian" go here; a one-line check also appears just after this list).
ii) The eigenvalues of a diagonal matrix are the diagonal entries. The eigenvalues of a block diagonal matrix are the eigenvalues of the blocks.
iii) Not all matrices are diagonalizable, but all matrices are block-diagonalizable. (You may recognize the latter as the Jordan form. Or not, if you're a milk-drinker who spent your entire existence trafficking in symmetric matrices, which are always diagonalizable.)
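A one-line check of factozoid (i), in case you skip the refresher: suppose B = P⁻¹AP and Bv = λv. Then

$$ B v = \lambda v \;\Longrightarrow\; P^{-1} A P\, v = \lambda v \;\Longrightarrow\; A (P v) = \lambda\, (P v), $$

so A has the very same eigenvalue λ, just wearing a different eigenvector (Pv instead of v).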
You probably see where this is headed, but I will continue to write words nonetheless.
You are handed a matrix. Call it A. You would like to determine all of the eigenvalues of A. If A was diagonal, you could just read them off. If A was block diagonal, you could recursively work on the blocks. But Nature being a bitch, and professors being bastards, A is neither.
Given infinite time and energy, you could beat A into submission with random similarity transformations until an equivalent diagonal or block-diagonal form emerged by chance. Quite a long shot. And anyway, in eigenvalues, as in dating, it helps to work systematically.
Think of each column of A as a vector. We seek a privileged coordinate system which simplifies the components when the vector is expressed in that basis. So we pick the first column and apply just the right similarity transformation to simplify along that dimension. Then we pick the next column and simplify, then the next, and so on. If fortune smiles, we eventually arrive at a diagonal or block diagonal version of A. Now getting the eigenvalues is straightforward.
The heart of the operation is orthogonalization. Here's the process illustrated on a 2x2 matrix (because two dimensions are the extent of my drawing ability):
Ab initio, the two vectors (columns) are expressed in terms of the standard Cartesian basis (i). The first simplification is trivial -- we just use (normalized) matrix column one itself as our new basis vector #1 (ii). Next, throw up a vector orthogonal to new basis vector #1 (iii). There is your new basis vector #2. We then find the components of matrix column two in this basis (iv). Recall b ⋅ a gives the projection of vector b along the unit vector a (by the definition of the dot product) and b − (b ⋅ a) a gives the component of b orthogonal to a (by the definition of vector addition). So those two ops will give you the two new components (b1 and b2, respectively).
At each step, the Cartesian basis fades a little more from the picture. By the end of the process, it is forgotten completely. The prize here is the zero that has appeared at the lower left corner of A -- the matrix is now triangular. Not a spectacular accomplishment for a 2x2, but this "pairwise orthogonalization" idea is readily extended to any size matrix. Each additional column gets dotted against all the columns that have been transformed thus far. And once transformed, each additional column gets to scrutinize columns yet to come. The process proceeds left-to-right until the entire matrix is processed.
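If you prefer numbers to pictures, here is that same walkthrough as a scrap of Javascript. The example matrix A = [ 3 1 ; 4 1 ] is one I cooked up for illustration; the comments track the panel labels above.

// columns of the made-up example A = [ 3 1 ; 4 1 ]
var a = [3, 4];
var b = [1, 1];

// (ii) new basis vector #1 = column one, normalized
var r11 = Math.sqrt(a[0]*a[0] + a[1]*a[1]);      // 5
var q1 = [a[0]/r11, a[1]/r11];                   // [0.6, 0.8]

// (iv) component of column two along q1 (the dot product)...
var r12 = q1[0]*b[0] + q1[1]*b[1];               // 1.4

// ...and the component orthogonal to q1 (subtract the projection), normalized
var q2 = [b[0] - r12*q1[0], b[1] - r12*q1[1]];   // [0.16, -0.12]
var r22 = Math.sqrt(q2[0]*q2[0] + q2[1]*q2[1]);  // 0.2
q2 = [q2[0]/r22, q2[1]/r22];                     // [0.8, -0.6]

// new components: column one = (5, 0), column two = (1.4, 0.2)
// i.e. A = QR with Q = [ q1 q2 ] and R = [ 5 1.4 ; 0 0.2 ] -- note the zero

Step (iii) is hiding inside the subtraction: the leftover piece of b is (a scaled copy of) the new orthogonal basis vector.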
This scheme gives us a surprisingly simple algorithm for eigenvalue extraction. Symmetric case first. (If A is unsymmetric, additional gronkulation is required. More later.)
Given a symmetric matrix A, do this:
while (A not diagonal)
    [ Q R ] = factorize(A);
    A = RQ;
end
The factorize( ) function does pairwise orthogonalization. It returns an orthogonal matrix Q and an upper (aka "right") triangular matrix R such that A = QR. Note the update step A = RQ applies these in reverse order. If you chase around the matrix algebra, you will discover the update is a similarity transformation. Ergo, although A changes, its eigenvalues do not. The amazing/useful thing is the iteration produces a sequence of matrices that (probably) converges to diagonal. The eigenvalues of A are the diagonal entries.
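The chase, for the impatient: Q is orthogonal, so Q⁻¹ = Qᵀ, and

$$ A = Q R \;\Longrightarrow\; R = Q^{\mathsf T} A \;\Longrightarrow\; R Q = Q^{\mathsf T} A\, Q = Q^{-1} A\, Q, $$

which is a similarity transformation of A (factozoid (i) again).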
Footnote: Why this works (i.e., converges to diagonal) is nontrivial. You can look up a proof elsewhere. Also, the algorithm fails on some kinds of matrix. In these cases you must deploy additional crafty. That's why Matlab costs eleventy million dollars.
Clearly, the crux of the algorithm is the factorization. Here there are three options. We can QR by Gram Schmidt decomposition, by Householder transformations, or by Givens rotations. It's the same old story: 1) speed, 2) stability, 3) simplicity. Pick two.
Gram Schmidt is the simplest option, being literally the dot product thingy shown above. It is also much maligned, described by numerical wonks as less stable than Donald Trump. I can't tell if this is true or just a self-propagating meme, like how the first weeaboo added "no copyright intended" when s/he posted copyrighted material on YouTube, so now every weeaboo does, like it's official stay-out-of-jail legalese. (Dear YouTube weeaboos: What's keeping the Feds from kicking in your door isn't your magic shibboleth, it's the commercials YouTube started selling. But I digress.)
I really really like Gram Schmidt (because simple) so I looked into the stability issue. I wrote a Matlab that generated a random matrix, then did a QR factorization using Gram Schmidt, and if that failed I repeated the factorization using Matlab's fancy pants qr( ) function. After millions of test cases, I've yet to encounter a matrix on which Gram Schmidt failed and QR succeeded. I'm not saying I can't occasionally flummox Gram Schmidt; I'm saying if a matrix breaks one factoring approach, it breaks the other.
Hence the take home: If you're going to extract eigenvalues using QR, you may as well use Gram Schmidt for the factorization. (If you're doing professional eigenvalue work -- something in which a bad result will get somebody fired or killed -- you shouldn't be using QR. Go convene with Gilbert Strang.)
Capisce?
Eigenpotato -- The Codening
Now the Javascript. The code mostly speaks for itself (because I put comments in it):
function eigenpotato(A)
{
    // compute all eigenvalues of
    // the real symmetric matrix A
    // on exit, the diagonal entries
    // of A are probably the eigenvalues
    // this is neither the most efficient nor
    // most stable way to compute eigenvalues;
    // for the skinny, go to www.labkitty.com

    var n = A.length;
    var Q = new Array(n);
    var R = new Array(n);
    for (var row = 0; row < n; row++) {
        Q[row] = new Array(n);
        R[row] = new Array(n);
    }

    for (var loop = 0; loop < 1000*n; loop++) {
        factor(A,Q,R);
        multiply(R,Q,A);
        if (converge(A)) return;
    }

    alert("No convergence");
}
Simplicity itself. There's literally more code here devoted to allocating stuff than to doing the actual decomposition. (Aside: My understanding of Javascript memory management is that the arrays Q and R will get eated by the garbage collector when the function exits. If that's not true, somebody be a dear and tell me I suck in the comments.)
Most of the grinding happens in the helper functions factor( ), multiply( ), and converge( ). They do what you think they do. However, I'm not going to list these just yet, because eigenpotato isn't quite ready for prime time.
Like Beatrix and O-Ren Ishii, we have unfinished business.
Building a Better Potato
Perhaps you are wondering about that ominous "occasional failure" dealio.
Eigenpotato as pitched above can be defeated rather easily. If you're assuming failure involves a matrix chock-full of transcendental funkadelia and what-all-else, I have some bad news: There exist unnervingly simple matrices that confound vanilla QR iteration. Here's one: [ 1 1 ; 1 1 ]. Here's another: [ 0 1 ; 1 0 ]. Like mathematics on most days, it's all rather depressing.
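A peek behind the curtain (my diagnosis, not Press's): the first matrix is singular -- its second column is parallel to its first, so Gram Schmidt divides by zero trying to normalize an orthogonal component that isn't there. The second matrix is already orthogonal, so factoring it accomplishes exactly nothing:

$$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = Q R \quad\text{with}\quad Q = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},\; R = I \quad\Longrightarrow\quad R Q = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. $$

The update hands back the same matrix every pass, and the loop spins until the iteration cap gives up.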
The good news is it only takes some basic smarts to start improving performance. QR likes matrices that are diagonally dominant and blatantly nonsingular. That's not hard to code. Super easy in fact. Barely an inconvenience.
Footnote: A good exercise is to pause here and see if you can modify eigenpotato to handle these meddlesome cases using nothing more than basic eigenvalue factozoids. (If you need a review of basic eigenvalue factozoids go here. TBH, here won't tell you anything helpful, but if you've made it this far, you deserve a treat.) Also, the exercise requires that you do not move your eyes to the next paragraph. I probably should have told you that earlier.
Recall the shifting theorem, which states eig(A+cI) = eig(A) + c. That is, the eigenvalues of the matrix obtained by adding a constant c to the diagonal are the eigenvalues of the original matrix plus c. So as standard operating procedure, we can have eigenpotato modify the diagonal of A before iteration to make A diagonally dominant, then subtract the same value from the diagonal of the processed results. That buys us some improved performance for almost no effort.
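Quick check of the shifting theorem, in case it has slipped your mind: if Av = λv, then

$$ (A + cI)\, v = A v + c\, v = (\lambda + c)\, v, $$

so v is still an eigenvector and its eigenvalue just scoots over by c.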
What is c? No value exists that works for all cases, so let's just try to do the most good. Try: the maximum row sum (technically called the "infinity norm" but whatever) plus a little. That will at least make A diagonally dominant while (fingers crossed) not introducing new problems.
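To watch the shift earn its keep on one of the problem children from above (a worked example of mine): for [ 0 1 ; 1 0 ] the maximum row sum is 1, so eigenpotato adds 1.1 to the diagonal before iterating:

$$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + 1.1\, I = \begin{bmatrix} 1.1 & 1 \\ 1 & 1.1 \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} 2.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \qquad \lambda = 2.1 - 1.1 = 1 \;\text{ and }\; 0.1 - 1.1 = -1. $$

The shifted matrix is diagonally dominant and nonsingular, QR iteration converges, and subtracting the shift recovers the ±1 eigenvalues of the original.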
Here, finally, is the complete code, including the improvements and the helper functions:
function eigenpotato(A)
{
    // compute all eigenvalues of
    // the real symmetric matrix A
    // on exit, the diagonal entries
    // of A are probably the eigenvalues
    // this is neither the most efficient nor
    // most stable way to compute eigenvalues
    // for the skinny, go to www.labkitty.com

    // make A diagonally dominant to improve
    // performance (subtract the shift from
    // the diagonal again on exit)
    var cmult = 1.1;
    var c = maxrowsum(A);
    dadd(A,cmult*c);

    // sanity check: if c is 0, A is
    // a zero matrix and we're done
    if (zeroish(c)) return;

    // QR iteration...
    var row;
    var n = A.length;
    var Q = new Array(n);
    var R = new Array(n);
    for (row = 0; row < n; row++) {
        Q[row] = new Array(n);
        R[row] = new Array(n);
    }

    for (var loop = 0; loop < 1000*n; loop++) {
        factor(A,Q,R);
        multiply(R,Q,A);
        if (converge(A)) {
            dadd(A,-cmult*c);
            return;
        }
    }

    alert("No convergence");
}
function converge(A)
{
    // declare victory when every above-diagonal
    // entry of A is (numerically) zero
    var n = A.length;
    for (var row = 0; row < n; row++) {
        for (var col = row+1; col < n; col++) {
            if (zeroish(A[row][col]) === false)
                return false;
        }
    }
    return true;
}

function zeroish(x)
{
    var kEPSILON = 0.000001;
    if (Math.abs(x) > kEPSILON) return false;
    return true;
}
function dadd(A,c)
{
    // add (scalar) c to diagonal of A
    for (var index = 0; index < A.length; index++) {
        A[index][index] += c;
    }
}

function maxrowsum(A)
{
    // return maximum absolute row sum
    // (the infinity norm) of nxn matrix A
    var n = A.length;
    var mrs = 0;
    var temp;
    for (var row = 0; row < n; row++) {
        temp = 0;
        for (var col = 0; col < n; col++) {
            temp += Math.abs(A[row][col]);
        }
        mrs = (temp > mrs) ? temp : mrs;
    }
    return mrs;
}
function factor(A,Q,R)
{
    // do a QR factorization on A
    // i.e. A = QR, where Q is ortho
    // and R is upper (right) triangular
    var temp;
    var i,j,k;
    var n = A.length;

    initialize(Q);
    initialize(R);

    // 1st column of Q = normed 1st column of A
    for (i = 0; i < n; i++) R[0][0] += A[i][0]*A[i][0];
    R[0][0] = Math.sqrt(R[0][0]);
    for (i = 0; i < n; i++) Q[i][0] = A[i][0] / R[0][0];

    for (k = 1; k < n; k++) {
        // dot next column of A with processed columns
        for (j = 0; j < k; j++) {
            for (i = 0; i < n; i++) {
                R[j][k] += Q[i][j] * A[i][k];
            }
        }

        // get ortho comp by subtraction
        for (i = 0; i < n; i++) {
            temp = 0;
            for (j = 0; j < k; j++) {
                temp += Q[i][j] * R[j][k];
            }
            Q[i][k] = A[i][k] - temp;
        }

        // norm the result
        for (i = 0; i < n; i++) R[k][k] += Q[i][k]*Q[i][k];
        R[k][k] = Math.sqrt(R[k][k]);
        for (i = 0; i < n; i++) Q[i][k] /= R[k][k];
    }
}
function multiply(A,B,C)
{
    // compute C = A*B; A,B,C
    // assumed nxn and allocated by caller
    var n = A.length;
    for (var row = 0; row < n; row++) {
        for (var col = 0; col < n; col++) {
            C[row][col] = 0;
            for (var index = 0; index < n; index++) {
                C[row][col] += A[row][index]*B[index][col];
            }
        }
    }
}

function initialize(A)
{
    // zero the nxn matrix A
    var n = A.length;
    for (var row = 0; row < n; row++) {
        for (var col = 0; col < n; col++) {
            A[row][col] = 0;
        }
    }
}
function pprint(A)
{
    // pretty print matrix A in an alert box
    // (one line per row)
    var s = " ";
    var n = A.length;
    for (var row = 0; row < n; row++) {
        for (var col = 0; col < n; col++) {
            s += A[row][col].toFixed(2) + " ";
        }
        s += "\n";
    }
    alert(s);
}
How to Use
First, cut and paste my code into your code. (If you keep the attribution that would be swell. Or if you'd really like to demonstrate gratitude for ALL OF THIS FREE HELP go to the LabKitty store and buy some swag, which keeps the lights on here at LabKitty.)
Next, allocate and fill a 2D array. Javascript doesn't have 2D arrays, but you can fake one as an array of arrays (just like C). Finally, pass the array to eigenpotato. The eigenvalues get returned on the diagonal.
For example:
var n = 3;
var A = new Array(n);
for (var row = 0; row < n; row++) {
    A[row] = new Array(n);
}

// fill in A here (must be symmetric)
A[0][0] = ...

eigenpotato(A);
pprint(A);
Here are a few examples you can test (these are written in Matlabese -- matrix contents listed by row, with rows separated by semicolons):
a) [ 1 1 ; 1 1 ], eigs = 2,0
b) [ 0 1 ; 1 0 ], eigs = -1,1
c) [ 1 2; 2 1 ], eigs = 3,-1
d) [ 2 1 1; 1 2 1; 1 1 2 ], eigs = 4,1,1
e) [ 2 1 -1; 1 4 3; -1 3 4 ], eigs = 7,3,0
f) [ 1 1 1; 1 0 0; 1 0 0 ], eigs = 2,-1,0
g) [ 0 0 0 4; 0 0 4 0; 0 4 0 0; 4 0 0 0],
eigs = ±4,±4
You can potato any A you'd like (as long as A is symmetric). I'm banging on an ancient MacBook and I get okay performance up to about 10x10. YMMV. Larger matrices begin to drag noticeably, and are more likely to run into convergence issues.
We now have ourselves a not-entirely-terrible eigenextractor. Yet, our efforts have really only moved the goalposts. Somewhere beyond them exist matrices that will confound even the fortified eigenpotato. Incremental improvement could continue, squashing each new problem child as it is uncovered. Alas, such hodgepodge programming is bad practice. This is why the pros take an entirely different approach to calculating eigenvalues. Matlab's eig( ) function is proprietary, but it almost certainly uses singular value decomposition. SVD is an entirely different and wholly-unfun bag of kittens. Bulletproof, yes, but coding SVD is the coding equivalent of Dave's trip through the monolith. I shan't attempt it, but if you're game then Godspeed.
Epilogue -- The Unsymmetric Case
If A is not symmetric, we face the possibility of complex eigenvalues. That screws the pooch for poor eigenpotato. Javascript don't know nuthin' 'bout complex numbers, so we can't expect it to converge to a matrix with complex entries on the diagonal (and it wouldn't do that even if it could). De nihilo nihil, as Persius groused.
What to do?
I felt bad about leaving you only the symmetric case, like Juliet leaving Romeo turgid at the costume ball. Then I remembered Press does the same thing and now I don't. Hence, if you simply must Javascript eigenvalues of an unsymmetric matrix, you have two options. You could wait for me to write an improved version of eigenpotato. Or you could write it, which is the option I like better.
Step into the arena, my young Padawan learner, in the full knowledge that sequels rarely live up to the original. For unsymmetric A, iterations will repeatedly generate a transformed matrix having either a 1x1 or a 2x2 block in the lower right corner. Peel this off -- simply stowing the eigenvalue in the 1x1 case or extracting two eigenvalues in the 2x2 case (this is where complex eigenvalues can emerge) -- then continue with the remaining upper-left portion of the matrix. Repeat until only a 1x1 or 2x2 remains (prolly switch to Householder or Givens for the factorization also). Voilà! All the eigenvalues belong to you.
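The 2x2 peel-off is just the quadratic formula applied to the characteristic polynomial λ² − tr(A)λ + det(A). A minimal sketch (mine, not part of eigenpotato; the function name and the [real, imaginary] return convention are made up for illustration):

function eig2x2(a, b, c, d)
{
    // eigenvalues of the 2x2 block [ a b ; c d ],
    // returned as [real, imaginary] pairs since
    // the pair may be complex
    var tr = a + d;               // trace
    var det = a*d - b*c;          // determinant
    var disc = tr*tr - 4*det;     // discriminant of x^2 - tr*x + det
    var s;
    if (disc >= 0) {
        s = Math.sqrt(disc);
        return [ [(tr + s)/2, 0], [(tr - s)/2, 0] ];
    }
    s = Math.sqrt(-disc);
    return [ [tr/2, s/2], [tr/2, -s/2] ];        // complex conjugate pair
}

For example, eig2x2(0, 1, -1, 0) returns [0, 1] and [0, -1], i.e. ±i.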
Alternatively, you could use one of the extant Javascript mathy packages like numeric, which not only includes an eigenvalue extractor but much else besides. That being said, "#include numeric.js" doesn't really teach you anything about eigenvalues, does it? Eyes on the prize, not on the horizon.
Eigenvalues!
* "Press" === Numerical Recipes in Wawa, by William H. Press et al. ("wawa" === insert your favorite programming language, which better be C). It was the standard reference for mathy coding before the Internet happened, like Playboy for bewbs before the iPhone.