
I don't see how this is cheaper for mechanical computation or cognitively easier for humans than Laplace expansion. Also, we were taught to use decomposition to grab eigenvectors and the determinant. It was pretty easy to verify correctness because my handy HP 48G had an RPN CAS and a "spreadsheet" matrix editor.


Unless the matrices are rather small, this is much more efficient than Laplace expansion.

To compute the determinant of an n-by-n matrix with Laplace expansion, you compute n determinants of matrices one size smaller. So if we write D(n) for the cost of computing the determinant of an n-by-n matrix, we have D(n) ~= n D(n-1), which means that D(n) is of order n!.
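To make the factorial growth concrete, here is a minimal Python sketch of cofactor expansion along the first row (the name det_laplace and the list-of-lists matrix representation are just illustrative choices, not anything from the article):

    def det_laplace(m):
        # Cofactor (Laplace) expansion along the first row.
        # Cost recurrence: D(n) ~= n * D(n-1), i.e. O(n!).
        n = len(m)
        if n == 1:
            return m[0][0]
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in m[1:]]  # delete row 0 and column j
            sign = -1 if j % 2 else 1
            total += sign * m[0][j] * det_laplace(minor)
        return total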

To compute the determinant of an n-by-n matrix with Dodgson's condensation algorithm, you compute (n-1)^2 2-by-2 determinants (and, after the first step, divide each one by an interior entry of the matrix from two steps earlier), and then repeat with n reduced by 1, and so on until you get to a 1-by-1 matrix. That is, we have D(n) ~= n^2 + D(n-1), which means that D(n) is of order n^3.
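For comparison, here is a sketch of condensation under the same assumptions (the helper names condense, interior, and det_dodgson are mine; exact arithmetic via Fraction; the zero-division problem mentioned below is not handled):

    from fractions import Fraction

    def condense(m):
        # (k-1)x(k-1) matrix of all adjacent 2x2 determinants of m.
        k = len(m)
        return [[m[i][j] * m[i+1][j+1] - m[i][j+1] * m[i+1][j]
                 for j in range(k - 1)] for i in range(k - 1)]

    def interior(m):
        # m with its first and last rows and columns removed.
        return [row[1:-1] for row in m[1:-1]]

    def det_dodgson(a):
        # Dodgson condensation: condense repeatedly, dividing elementwise by
        # the interior of the matrix from two steps back.  Each step costs
        # O(n^2), so the total is O(n^3).  Raises ZeroDivisionError if an
        # interior entry of an intermediate matrix is zero.
        a = [[Fraction(x) for x in row] for row in a]
        if len(a) == 1:
            return a[0][0]
        prev, cur = a, condense(a)
        while len(cur) > 1:
            nxt, inner = condense(cur), interior(prev)
            k = len(nxt)
            nxt = [[nxt[i][j] / inner[i][j] for j in range(k)] for i in range(k)]
            prev, cur = cur, nxt
        return cur[0][0]

As a quick check, det_dodgson([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) returns -3 (as a Fraction), matching cofactor expansion.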

For n of moderate size, n^3 is much smaller than n!: already at n = 10, n^3 is 1000 while n! is about 3.6 million.

Of course, unless n is very small you don't want to be computing determinants by hand anyway; you should use a computer, and shouldn't use either of these algorithms.

Still, for hand computation I would consider the Dodgson algorithm an excellent choice ... if it weren't for the possibility of having to divide by zero. You can work around that but then it's no longer so true that the algorithm is simple and requires little bookkeeping.
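For instance (using the illustrative det_dodgson sketch above), a single zero in the interior of the matrix stalls the plain algorithm even though the determinant itself is perfectly well defined:

    det_dodgson([[1, 2, 3],
                 [4, 0, 6],
                 [7, 8, 9]])   # raises ZeroDivisionError, although the determinant is 60

(The usual workarounds, e.g. swapping rows or columns to move a nonzero entry into the offending interior position and tracking the sign change, are exactly what cost the algorithm its simplicity.)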



