gino wrote:I do not have your book here (shame on me), but I recall that your version first selects the proper subsimplex (through Voronoi plane tests) and computes the closest points by projection. I understand from your post that this "projection" involves solving the well-known linear system for finding a vector orthogonal to the subsimplex's affine hull, and the barycentric coordinates are used for the Voronoi test as well.
Sort of. Conceptually, yes. In practice, however, the calculations that determine the proper subsimplex have already produced the terms needed for the barycentric coordinates on that feature. Also, note that it is a true orthogonal projection (not merely a "projection" in some loose sense): once you have determined the Voronoi feature region, the query point projects orthogonally onto it.
gino wrote:So, as I see it, the only difference between your approach and the original GJK approach is that you do not reuse computations from lower-dimensional subsimplices. Am I right?
As I said before: my approach is the original GJK approach; I'm not sure why you're thinking otherwise. It's not the recursive formulation given in the paper, but the unrolled version used in Daniel Johnson's own implementation. I recommend studying his implementation. I reference it in my book, but it seems the link I gave has gone stale. Erwin has a copy here:
http://www.erwincoumans.com/p/gilbert.c
(Technically this is a C translation of the original code, done by Diego Ruspini, which is nicer anyway in that it's more readable than the original FORTRAN code.)
The recursive and non-recursive approaches are for all intents and purposes identical; they differ primarily in the order in which the subsimplices are searched, and even that can be made the same. Again, I recommend you read the code and make the comparison. Earlier computations are reused in both my code and Johnson's code. Johnson stores his to memory (in arrays del, d1, d2, etc.); I keep them in registers to avoid huge penalties from accessing potentially uncached data (a single cache-line miss can cost as much as the entire GJK computation).
gino wrote:so I (and probably also Dan Johnson) would still go for a recursive determinant computation.
You should probably familiarize yourself with his code before making definitive statements about it! :) His code is fully unrolled (non-recursive and non-iterative). He does reuse previous determinant expressions, but that is basically equivalent to common subexpression elimination. Also, that code was written in the "pre-cache" era, when computation (multiplication in particular) was expensive, so that was a natural way of writing things. These days, it's not.
Anyway, the two approaches are the same; math is math, after all. The key difference is really that the unrolled version makes it explicit and obvious which subsimplex is being tested, while the recursive formulation doesn't (although you could obviously document it to make that clear). That's why I like the "geometric" formulation: it is more readable, and therefore easier to reason about and to debug. You can describe the geometric view to someone and they'll understand it on the first try; you can describe the recursive formulation ten times and people still won't get it.