I've pasted the pseudocode for a Gauss-Seidel iterative solver here from Kenny's thesis for reference.
----
// F is the per-body accumulator, F = M^-1 J^T x
delta = b(idx) - d(idx)*x(idx)
for j = 0 to idx-1
    delta -= J(idx,j) * F(j)
next j
for j = idx+1 to n-1
    delta -= J(idx,j) * F(j)
next j
x(idx) = x(idx) + delta
F += Minv * J(idx,.)^T * delta
----
There is a very good optimization you can do once you realize that delta is a sum over only the elements for which F(j) has changed, and F(j) will change only if the velocity of a body involved in constraint j has changed. So if you keep around and update an accumulator for each body, those two O(n) loops can each be written as one small matrix multiply. You need to keep your Jacobians split into their J M^-1 J^T factorization as well.
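A minimal sketch of the accumulator idea in Python (the names, the scalar-row layout, and the lambda >= 0 projection are my own assumptions, not Kenny's exact notation): each constraint row touches only its own bodies' DOFs, so keeping a = M^-1 J^T x up to date turns the O(n) residual loops into O(1) work per row.

```python
import numpy as np

def pgs_with_accumulators(J, Minv, bodies, b, iterations=10):
    """Projected Gauss-Seidel on w = J Minv J^T lam + b with lam >= 0.

    J      : (n, m) Jacobian, one row per scalar constraint
    Minv   : (m, m) inverse mass matrix (block diagonal in practice)
    bodies : bodies[i] = array of DOF indices touched by constraint i
             (row i of J is zero everywhere else)
    """
    n = J.shape[0]
    B = Minv @ J.T                                    # B[:, i] = Minv @ J[i]
    d = np.array([J[i] @ B[:, i] for i in range(n)])  # diagonal of A = J @ B
    lam = np.zeros(n)
    a = np.zeros(J.shape[1])                          # accumulator a = B @ lam
    for _ in range(iterations):
        for i in range(n):
            dofs = bodies[i]
            # Row residual: J[i] @ a equals A[i] @ lam, but only the
            # constraint's own DOFs contribute, so this is O(1), not O(n).
            w_i = b[i] + J[i, dofs] @ a[dofs]
            lam_new = max(0.0, lam[i] - w_i / d[i])
            delta = lam_new - lam[i]
            lam[i] = lam_new
            a[dofs] += B[dofs, i] * delta             # keep a = B @ lam in sync
    return lam
```

With a block-diagonal Minv, the column B[:, i] is non-zero only on the constraint's own DOFs, which is exactly what makes the per-row update cheap.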
I came across this optimization on my own, even though I have actually seen code that uses it. I missed it before because I was busy keeping my Jacobians straight, and it's usually kind of obfuscated. Going back now I've noticed it's in ODE, it's in Erin Catto's paper, it's all over! My question is this: Is everyone arriving at this optimization on their own, or has it been detailed somewhere before? If so, what other order of magnitude optimizations am I missing out on!
- Michael Alexander Ewert
LCP solver optimization
-
- Posts: 32
- Joined: Thu Jun 30, 2005 8:49 am
- Location: Denmark
Re: LCP solver optimization
mewert wrote: I came across this optimization on my own, even though I have actually seen code that uses it. I missed it before because I was busy keeping my Jacobians straight, and it's usually kind of obfuscated. Going back now I've noticed it's in ODE, it's in Erin Catto's paper, it's all over! My question is this: Is everyone arriving at this optimization on their own, or has it been detailed somewhere before?

Most of these things are something you pick up over the years. ODE is a great source for studying many of these kinds of "optimizations".
mewert wrote: If so, what other order of magnitude optimizations am I missing out on!

Well, for Gauss-Seidel I would say that the next logical step is a two-stage LCP solver. Say the full problem is defined by the matrix equation (w and lambda being complementary):
w = A lambda + b
where A = J M^-1 J^T and b = J u + F_ext. Then for n contact points you get n 3x3 LCPs looking something like this
w_i = J_ii^T M_i^-1 J_ii lambda_i + b_i + sum_j J_ij F_j
which we can simplify to drop the F-terms
w_i = J_ii^T M_i^-1 J_ii lambda_i + b_i + sum_j (w_j - b_j)
Setting A_i' = J_ii^T M_i^-1 J_ii and b_i' = b_i + sum_j (w_j - b_j), you end up with an LCP defined by
w_i = A_i' lambda_i + b_i'
These smaller LCPs can be solved cheaply using some direct method. These kinds of solvers have been described by M. Jean and J. J. Moreau, so it is really nothing new.
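A rough sketch of this two-stage idea (frictionless for simplicity, so each block is a plain small LCP; the active-set enumeration solver and all names here are my own, not from the Jean/Moreau papers):

```python
import itertools
import numpy as np

def solve_small_lcp(A, b):
    """Direct solver for a small LCP: find lam >= 0 with
    w = A lam + b >= 0 and lam . w = 0, by enumerating active sets.
    2^n candidate sets is cheap for the 3x3 contact blocks."""
    n = len(b)
    for active in itertools.product([False, True], repeat=n):
        idx = [i for i in range(n) if active[i]]
        lam = np.zeros(n)
        if idx:
            lam[idx] = np.linalg.solve(A[np.ix_(idx, idx)], -b[idx])
        w = A @ lam + b
        if (lam >= -1e-9).all() and (w >= -1e-9).all():
            return lam
    raise ValueError("enumeration found no solution")

def two_stage_pgs(A, b, blocks, iterations=10):
    """Outer Gauss-Seidel sweep over blocks; each block's sub-LCP
    is solved exactly with the direct method above."""
    lam = np.zeros(len(b))
    for _ in range(iterations):
        for idx in blocks:
            others = lam.copy()
            others[idx] = 0.0
            # b_i' = b_i + coupling from every other block's current lambda
            b_prime = b[idx] + (A @ others)[idx]
            lam[idx] = solve_small_lcp(A[np.ix_(idx, idx)], b_prime)
    return lam
```

The enumeration stands in for whatever direct 3x3 method you prefer; the point is only that the outer sweep hands each block a fully solved sub-problem rather than a single projected row update.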
There is a more recent paper by Renouf, Acary, and Dumont describing the same thing. As I recall they refer to it as a blocked non-smooth projected Gauss-Seidel. However, I do not think they reported any convergence results.
Notice that by keeping b_i' from iteration to iteration, you can update it really fast once you have solved for a lambda_j. I believe that with such a method you exploit data locality better; besides, some of the nasty index book-keeping and cache worries seem to disappear.
Furthermore, you can solve all the sub-problems in parallel, and need I then say GPU or PPU.... One can even solve the smaller problems in a non-linear manner, which really allows one to use anisotropic friction etc. Personally I would favor using a friction splitting method, i.e. first solve for the normal force, then solve for the frictional forces by projecting onto some limit surface.
Yes, it is strange that no convergence results are provided. In fact I rarely see good benchmarks for new algorithms/solvers.
I think this is a good benchmark:
How many cubes can the algorithm/solver stack vertically at 60Hz with no gravity, damping, or deactivation? If the algorithm is iterative, the iteration count should be stated (and preferably limited to 10). Also, what is the CPU/RAM cost as a function of the number of boxes? Shock propagation should not be used.
Here's one thing that can improve convergence. I'm not sure if it has been mentioned before. Consider a vertical stack of boxes. Your intuition may tell you that solving the PGS according to the order of the stack may help. This is true. It turns out that solving from the top down to the ground effectively gets you a free iteration versus the opposite order.
This is essentially a free optimization if you already have a 'distance to ground' measure on your bodies and contacts. If not, that can be determined using an O(n) breadth-first search.
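As a sketch, that breadth-first search could look like this (the contact-pair representation and all names are my own assumptions): compute each body's graph distance from the ground, then sort contacts so the ones farthest from the ground are solved first.

```python
from collections import deque

def top_down_contact_order(num_bodies, contacts, ground=0):
    """Order contacts top-down by 'distance to ground' using an O(n)
    breadth-first search over the contact graph.

    contacts : list of (body_a, body_b) pairs; `ground` is the static body.
    Returns contact indices sorted farthest-from-ground first.
    """
    adj = [[] for _ in range(num_bodies)]
    for a, bb in contacts:
        adj[a].append(bb)
        adj[bb].append(a)
    depth = [None] * num_bodies
    depth[ground] = 0
    queue = deque([ground])
    while queue:                      # standard BFS from the ground body
        u = queue.popleft()
        for v in adj[u]:
            if depth[v] is None:
                depth[v] = depth[u] + 1
                queue.append(v)
    # bodies not connected to the ground get pushed to the front
    big = max(d for d in depth if d is not None) + 1
    def contact_depth(c):
        a, bb = contacts[c]
        da = depth[a] if depth[a] is not None else big
        db = depth[bb] if depth[bb] is not None else big
        return min(da, db)
    return sorted(range(len(contacts)), key=contact_depth, reverse=True)
```

For a vertical stack with contacts (ground, box1), (box1, box2), (box2, box3), this returns the topmost contact first, matching the top-down sweep described above.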
Erin Catto wrote: Yes, it is strange that no convergence results are provided. In fact I rarely see good benchmarks for new algorithms/solvers.

At least for graphics I think this is very true; in my opinion engineering and applied math seem to do better, but they often don't care about real-time :-)
I have discussed this benchmarking problem with colleagues from time to time. The best idea I have encountered so far is to make a public repository of configurations, represented as a collection of contact points.
I was hoping that something like COLLADA could be used for such a purpose, but I haven't really kept track of the progress with COLLADA lately.
Erin Catto wrote: I think this is a good benchmark: How many cubes can the algorithm/solver stack vertically at 60Hz with no gravity, damping, or deactivation? If the algorithm is iterative, the iteration count should be stated (and preferably limited to 10). Also, what is the CPU/RAM cost as a function of the number of boxes? Shock propagation should not be used.

I agree it should be the raw solver method you test.
Generally speaking, I think single box stacks are the most difficult kind of stacking configuration for an iterative solver. As soon as you get more structure into the configuration, like brick walls or towers, then ``information'' tends to flow faster and convergence is improved.
Erin Catto wrote: Here's one thing that can improve convergence. I'm not sure if it has been mentioned before. Consider a vertical stack of boxes. Your intuition may tell you that solving the PGS according to the order of the stack may help. This is true. It turns out that solving from the top down to the ground effectively gets you a free iteration versus the opposite order. This is essentially a free optimization if you already have a 'distance to ground' measure on your bodies and contacts. If not, that can be determined using an O(n) breadth-first search.

Another thing is line searches; I can't recall having seen these used for LCPs in graphics. They are cheaper than doing reordering and may boost convergence, but they will not change the convergence rate :-(
To do a standard test for your LCP solver you really need to take the collision detection and manifold management code out of the equation, so a standard list of contact points would work pretty well. You'll also need to specify the acceptable error tolerance that you exit with.
We do handle 3x3 blocks with direct solvers; this seems to be the best way to get good (non-linear) friction from contacts. Constraining three linear DOFs away is much more efficient using a direct solver for those blocks as well.
Solving an entire contact manifold between two bodies with a dedicated direct solver might help convergence; then you've kind of reduced the problem to one of iterating over contacts between spheres. That is what this paper appears to be doing:
http://www.research.rutgers.edu/~kaufman/ffdfrb.html
They only seem to use 1 iteration (I don't know what their time-step size is, though). The math they are using is new to me.
I know that solving pairs of contacts with a direct solver helps convergence.
- Michael Alexander Ewert
mewert wrote: Solving an entire contact manifold between two bodies with a dedicated direct solver might help convergence, then you've kind of reduced the problem to one of iterating over contacts between spheres.

Spheres? The mathematical model of a velocity-based complementarity formulation is for ``simultaneous'' contacts between rigid objects... solving it with a PGS is just one way to get around the numerics...
The mathematical model in the Kaufman paper is per object pair; as I see it they gave up on simultaneous contacts, and they lost Newton's third law doing it...
mewert wrote: That is what this paper appears to be doing: http://www.research.rutgers.edu/~kaufman/ffdfrb.html They only seem to use 1 iteration (I don't know what their time-step size is, though).

I really don't see a big difference between these methods, except on two accounts: they iterate over bodies, and they get the maximum dissipative force for single-point contacts (something the standard velocity-based complementarity formulation doesn't). Notice that the way maximum dissipation is used to project impulses can be reformulated as an LCP...
The argument that it should be faster to iterate over bodies than over contacts is a bit artificial in my opinion, since the number of contacts scales linearly with the number of bodies. I think it boils down to the constants involved in the methods. I am worried about setting up the convex QPs; this seems more expensive than merely iterating over contacts and using a 3x3 direct method, because the QP method has to find the contacts in the first place.
I am very concerned about the method in this paper. They gave up on Newton's third law, which in my opinion seems to be a horrible thing to do if you want large scale structured stacking. Also it appears to me that the modeling of bounciness introduces too much energy into the simulation. I would have loved to see some plot of the energy behavior of this contact model. Notice that the inelastic impulse is added with the frictional impulse, but the frictional impulse also contains an inelastic normal contribution. As far as I can tell from intuition, without writing down any equations, this resulting impulse will not lie in the permissible region of impulse space.
I did find the examples in the paper rather nice, although all the large scale simulations were of dense random piles, so it is difficult to see how big the penetration errors get, or whether the concerns about energy and Newton's third law I expressed above are justified.
mewert wrote: The math they are using is new to me, though.

Twists and wrenches are standard notation in robotics; it's very similar to spatial algebra.
kenny wrote: I am very concerned about the method in this paper. They gave up on Newton's third law, which in my opinion seems to be a horrible thing to do if you want large scale structured stacking. Also it appears to me that the modeling of bounciness introduces too much energy into the simulation.

Isn't that similar to what "shock wave propagation" does when it arbitrarily sets the mass of rigid bodies to infinity, based on intuition and without any mathematical or physical explanation?
I am no expert, but in my mind switching the mass of a body to infinity and damping the velocity of bodies at contact point positions are much larger and more questionable approaches, and yet they seem to be accepted and embraced by the community as good replacements for Newton's first, second, and third laws.
Erin Catto wrote: My impression is that shock propagation is not used in commercial engines. It makes for nice demos, but it creates odd behavior in real game play. Does anyone know of a commercial engine that uses shock propagation?

No, and I think there are three reasons for this. Firstly, the book-keeping for finding stack layers is too costly. Secondly, the ``pure original way'' of using shock propagation has this little weight-feeling problem :-) Lastly, the behavior (or quality) depends on how you make your stack analysis.
Julio Jerez wrote: Isn't that similar to what "shock wave propagation" does when it arbitrarily sets the mass of rigid bodies to infinity, based on intuition and without any mathematical or physical explanation? I am no expert, but in my mind switching the mass of a body to infinity and damping the velocity of bodies at contact point positions are much larger and more questionable approaches, and yet they seem to be accepted and embraced by the community as good replacements for Newton's laws.

I should make it explicit here that I am talking about using shock propagation in a velocity-based complementarity formulation and not an impulse-based simulator!
So yes, you change the masses of the bodies, but not the actual underlying mathematical model; in my opinion Newton's laws are still fulfilled. I see shock propagation as a perfect directional preconditioner, that is, a way to get around the numerics of solving the velocity-based complementarity formulation efficiently. The bottom-to-top strategy is just one choice; in fact with shock propagation you get a block-partitioning. Setting mass to infinity just means you are disallowing ``information'' to flow in a certain direction while processing these blocks.
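A toy illustration of the "infinite mass while processing" idea (a 1D stack with my own drastically simplified contact model, nothing like a full solver): during a bottom-to-top sweep, each contact's lower body is frozen, so the impulse only changes the upper body's velocity and no information flows back down.

```python
def shock_pass(vel, inv_mass, contacts_by_layer):
    """One bottom-to-top shock-propagation sweep over a 1D stack.

    vel[i], inv_mass[i]  : vertical velocity / inverse mass of body i
    contacts_by_layer[k] : (lower, upper) body pairs in stack layer k,
                           ordered from the ground upward
    While a layer is processed, every lower body is treated as static
    (infinite mass), so impulses cannot propagate back down the stack.
    """
    for layer in contacts_by_layer:
        for lo, up in layer:
            rel = vel[up] - vel[lo]      # relative normal velocity
            if rel < 0.0:                # bodies are approaching
                # lower body frozen: the upper body absorbs the whole impulse
                impulse = -rel / inv_mass[up]
                vel[up] += impulse * inv_mass[up]
    return vel
```

Note the asymmetry: the impulse on the upper body has no equal-and-opposite partner on the frozen lower body, which is exactly the Newton's-third-law trade-off discussed in this thread.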
I do not think shock propagation combined with a velocity-based complementarity formulation is a perfect solution for everything. There are still a few issues that need to be worked out....
hi all,
i'm arriving to this discussion late, and trying to understand what shock-propagation can do. kenny, my (perhaps poor) understanding is that it's akin to the multigrid method commonly used in PDEs, i.e. you form a coarser representation of the problem, solve that, then use that as a starting point for the fine-grained solution. is this analogy correct?
the pure multi-grid approach does not apply directly to rigid body systems since there is no (obvious) well-defined way to do the coarsening, but i have heard many people speculate about this as a good way to go.
russ.
kenny wrote: Spheres? The mathematical model of a velocity-based complementarity formulation is for ``simultaneous'' contacts between rigid objects... solving it with a PGS is just one way to get around the numerics...

What I mean by spheres is this. Since their method projects all the contact points between two bodies into a single point in the se(3) algebra, their technique can solve that manifold in one direct solve. The manifold between two spheres is a single point, so it's like a collection of spheres in se(3). This seems like a good idea to me, since an LCP solver approach appears to spend a lot of time just getting the jitter between two bodies to settle down. This is due to the contact points on a single manifold creating a teeter-totter type effect around the centre of mass of the bodies. With an iterative LCP solver you would have to iterate multiple times over the contacts between two bodies just for that manifold to be solved, so it seems convergence would be improved.
kenny wrote: I am worried about setting up the convex QPs, this seems more expensive than merely iterating over contacts and using a 3x3 direct method... I am very concerned about the method in this paper. They gave up on Newton's third law, which in my opinion seems to be a horrible thing to do...

I agree that it looks a bit rough still, and I doubt an implementation of it would beat a good iterative LCP solver. But it's a good concept and I think it's worth investigating further.
kenny wrote: Twists and wrenches are standard notation in robotics; it's very similar to spatial algebra.

Yes, it's like Featherstone's algorithm, which does the same trick of collapsing constraints into a single point in the se(3) algebra and solving it there.
- Michael Alexander Ewert
russ wrote: i'm arriving to this discussion late, and trying to understand what shock-propagation can do. kenny, my (perhaps poor) understanding is that it's akin to the multigrid method commonly used in PDEs, i.e. you form a coarser representation of the problem, solve that, then use that as a starting point for the fine-grained solution. is this analogy correct?

I believe shock propagation is good for partitioning your dynamics into subgroups. I do not think it can give you the "coarsening" needed for multigrid.
In terms of PDEs, I guess shock propagation tastes like setting imaginary boundary conditions on a sub-part of your computational domain (I am no expert in this field, so the analogy might be a bit off) and then solving this sub-part before going on to the next sub-part.
russ wrote: the pure multi-grid approach does not apply directly to rigid body systems since there is no (obvious) well-defined way to do the coarsening, but i have heard many people speculate about this as a good way to go.

I have tried to exploit contact graphs for making the coarsening, collecting nodes into composite bodies (much like Featherstone explains in his articulated-body method), but it got too hairy for me so I gave up :-)