Popular Mechanics did an interesting article/experiment on how companies handle shipping packages. Specifically, they used a set of sensors to record things like temperature and acceleration, to see how well the packages were handled (or mishandled) during their voyage.

I think this is a really cool article because I once had an idea to do something similar for a start-up company. The idea was for a customer to receive a small tracking device, which he would then place inside his package before shipping. The device would then be returned to the company to check whether the package was handled correctly. The target audience would be shippers of items that are fragile or require very special handling, such as scientific instruments. Parameters I thought we could track include acceleration (for drops and large bumps), water/humidity, temperature, magnetic field (for scientific instruments...?), and GPS location. Most of these sensors are already available in handheld smartphones nowadays. It would also be interesting to couple this with some type of shipping insurance, since the data from a recorder would provide excellent, concrete evidence of mishandling.

That being said, while I find this to be an interesting article, to some extent the real question is "who cares?" We all know that these shipping companies throw our stuff around. But we also know that our orders are placed in lots of packaging to prevent damage. I love buying stuff online... books, computers, electronics. Lots of expensive and seemingly fragile stuff, yet I would say 99.9% of everything I have ever ordered has arrived in perfect, working condition. Maybe we should just turn a blind eye to how things are handled mid-transit if, in the end, the company can deliver on its promise to get them to us undamaged. Sometimes, isn't it better just not to know what's going on?

## 29 November 2010

## 22 November 2010

### Matrix Inverse

**Oh The Complexity!**

I'm doing something that most numerical computing people would consider crazy -- I'm computing the analytical inverse of a square, non-singular (although ill-conditioned) matrix. I'm doing this as part of the solution to a differential equation, which I need to be able to evaluate very quickly, hundreds of thousands of times, so naturally I am coding up the problem in C.

It's amazing how quickly the closed-form solution becomes complex as the order of the matrix increases. For example, the quite trivial case of the inverse of a 2x2 matrix can be written in just five lines:

```c
/* inv(A): analytical inverse of a 2x2 matrix A, stored in invA. */
double scalar = 1.0 / (A[0][0]*A[1][1] - A[1][0]*A[0][1]);

invA[0][0] =  scalar * A[1][1];
invA[1][1] =  scalar * A[0][0];
invA[1][0] = -scalar * A[1][0];
invA[0][1] = -scalar * A[0][1];
```

Ok, that was trivial! The code for a 3x3 inverse is longer, but still quite manageable.

```c
/* Analytical inverse of a 3x3 matrix X, stored in invX. */
double z;
double invX[3][3];

z = X[0][0]*(X[1][1]*X[2][2] - X[2][1]*X[1][2])
  + X[1][0]*(X[2][1]*X[0][2] - X[2][2]*X[0][1])
  + X[2][0]*(X[0][1]*X[1][2] - X[1][1]*X[0][2]);
z = 1.0 / z;   /* reciprocal of the determinant */

invX[0][0] = z*(X[1][1]*X[2][2] - X[2][1]*X[1][2]);
invX[0][1] = z*(X[2][1]*X[0][2] - X[0][1]*X[2][2]);
invX[0][2] = z*(X[0][1]*X[1][2] - X[1][1]*X[0][2]);
invX[1][0] = z*(X[2][0]*X[1][2] - X[1][0]*X[2][2]);
invX[1][1] = z*(X[0][0]*X[2][2] - X[2][0]*X[0][2]);
invX[1][2] = z*(X[1][0]*X[0][2] - X[0][0]*X[1][2]);
invX[2][0] = z*(X[1][0]*X[2][1] - X[2][0]*X[1][1]);
invX[2][1] = z*(X[2][0]*X[0][1] - X[0][0]*X[2][1]);
invX[2][2] = z*(X[0][0]*X[1][1] - X[1][0]*X[0][1]);
```

Now consider the inverse of a 6x6... let me just say it's messy. Very messy. Messy enough that it's nearly impossible to work it out by hand on a whiteboard and successfully type it up in C. It was easier to find the solution using MATLAB's symbolic math toolbox, then reformat the result from MATLAB into C.

The next thing to ask -- why use an analytical solution at all? If you asked about this on a forum, most people would direct you to some other algorithm such as QR or LU decomposition, as implemented in LAPACK or some other free "off-the-shelf" numerical software package. I have my reasons for not doing so:

(1) It's true, most of the algorithms in LAPACK are highly optimal; however, they are optimized for working with very large matrices (with dimensions in the hundreds or even thousands). Doing this kind of math on small matrices (2x2, 3x3) is "trivial" and very easy to do by hand, so the focus of most numerical computing work has been on really large systems.

(2) Using LAPACK adds to the complexity of the code. It requires that the correct libraries be installed, and it means the code can't easily be compiled on another platform, for example as a CUDA or other GPU-type program. Again, it seems like overkill to make the whole program much harder to port onto different systems just to solve one little 6x6 matrix inverse.

I have also investigated third-party C implementations, such as Jordan Exchange or LU decomposition. As I said before, however, my matrix is fairly ill-conditioned in some cases, and these numerical methods tend to introduce a lot of error into the result, which could affect the performance of the overall algorithm. It's also true that most of these implementations require many conditional statements (e.g. for choosing columns for pivoting), and are therefore inefficient when implemented on a GPU.

In conclusion, there are a lot of great resources out there for solving large systems of equations, and there are also fairly trivial solutions to simple systems. What I'm struggling with is where my problem lies -- is it trivial enough to warrant a homemade analytical solution, or does it require the use of some external software library, such as LAPACK (and all the associated headache of getting that to work)?
