Impulse-based dynamic simulation library (IBDS)

Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

I thought someone might ask that.

For simplicity, consider a nonlinear equation f(x) = 0 with Jacobian J(x). The Taylor expansion around x1 is:

f(x2) = f(x1) + J(x1) * (x2 - x1) + O((x2 - x1)^2)

Newton's method tries to find x2 such that f(x2) == 0:

0 = f(x1) + J(x1) * (x2 - x1)

Solving for x2 we get:

x2 = x1 - inv(J(x1)) * f(x1)

Now we usually need to iterate for this to converge:

Code: Select all

while norm(f(x)) > tol
  compute f and J
  d = inv(J) * f
  x = x - d
end
This is Newton's method for solving nonlinear equations. It is O(n^3) at best due to the inversion of J.
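For concreteness, here is a minimal sketch of that loop in Python with NumPy. The example problem is illustrative; the linear system is solved directly rather than forming inv(J) explicitly, but the cost per iteration is still cubic.

```python
import numpy as np

# Minimal sketch of Newton's method for f(x) = 0 with a dense Jacobian.
def newton(f, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) <= tol:
            break
        d = np.linalg.solve(J(x), fx)  # d = inv(J) * f, without forming inv(J)
        x = x - d
    return x

# Example: find the root of x^2 - 2 = 0 starting from x = 1
root = newton(lambda x: np.array([x[0] ** 2 - 2.0]),
              lambda x: np.array([[2.0 * x[0]]]),
              np.array([1.0]))
```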

Let's try to reduce the cost of Newton's method by using Gauss-Seidel (GS) to approximately invert J:

Code: Select all

while norm(f(x)) > tol1
  compute f and J
  d = 0
  while norm(J*d - f) > tol2
    perform GS iterations to solve J*d = f
  end
  x = x - d
end
That is essentially the method used in Box2D SuperSplit. Notice that there are two levels of iterations. The outer iteration is trying to converge the solution of the nonlinear equation while the inner iteration is trying to converge the solution of the linear sub-problem.
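A hypothetical sketch of that two-level structure in Python, assuming a Jacobian for which plain Gauss-Seidel converges on the inner linear problem (diagonally dominant here):

```python
import numpy as np

# Inner solver: plain Gauss-Seidel sweeps on the linear problem A*d = b.
def gauss_seidel(A, b, d, iters):
    for _ in range(iters):
        for i in range(len(b)):
            # relax row i: A[i,i]*d[i] = b[i] - sum_{j != i} A[i,j]*d[j]
            r = b[i] - A[i] @ d + A[i, i] * d[i]
            d[i] = r / A[i, i]
    return d

# Outer Newton loop using the approximate inner solution.
def newton_gs(f, J, x, tol1=1e-8, inner_iters=20, max_outer=50):
    for _ in range(max_outer):
        fx = f(x)
        if np.linalg.norm(fx) <= tol1:
            break
        d = gauss_seidel(J(x), fx, np.zeros_like(x), inner_iters)
        x = x - d
    return x

# Example: f(x) = [x0^2 - 2, x0 + 3*x1 - 4], solved from (1, 1)
x = newton_gs(lambda x: np.array([x[0] ** 2 - 2.0, x[0] + 3.0 * x[1] - 4.0]),
              lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 3.0]]),
              np.array([1.0, 1.0]))
```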

So can we combine the two loops? With GS we can:

Code: Select all

compute J
while norm(f(x)) > tol
  perform GS iterations to solve J*d = f, update x and f after each constraint is solved
end
Notice that we are not updating the Jacobian, so this will slow convergence. But we are using a crappy GS solver anyway, so maybe it doesn't matter. This final algorithm is nonlinear GS (NGS). The projected part comes in when we deal with inequality constraints, and then we get NPGS.
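A toy sketch of the combined loop: the Jacobian is frozen at the initial guess, and each constraint (row) relaxes x in place, so the next constraint already sees the fresh residual. The example constraints are illustrative, not from any particular engine.

```python
import numpy as np

# Nonlinear Gauss-Seidel (NGS) sketch with a frozen Jacobian.
def ngs(f, J, x, tol=1e-10, max_sweeps=200):
    J0 = J(x)  # computed once, never refreshed, as in the pseudocode above
    for _ in range(max_sweeps):
        if np.linalg.norm(f(x)) <= tol:
            break
        for i in range(len(x)):
            # relax constraint i and update x immediately
            x[i] -= f(x)[i] / J0[i, i]
    return x

# Example: two decoupled constraints, x0^2 = 2 and x1^2 = 3
x = ngs(lambda x: np.array([x[0] ** 2 - 2.0, x[1] ** 2 - 3.0]),
        lambda x: np.diag(2.0 * x),
        np.array([1.0, 1.0]))
```

Because the frozen diagonal is only an approximation of the true Jacobian, convergence is linear rather than quadratic, which matches the "slower convergence" caveat above.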

Baumgarte ramping can be used like this:

Code: Select all

compute J
beta = 0.1
while norm(f(x)) > tol
  perform NGS iterations on J*d = beta * f
  beta = min(beta + 0.1, 1.0)
end
Baumgarte ramping helps to avoid overshoot on contact constraints and leads to less jitter and higher stacks.
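The effect of the ramp can be seen on a single scalar penetration error. This is a toy illustration assuming a unit effective mass (so one pass removes beta times the error), not the Box2D implementation:

```python
# Toy illustration of Baumgarte ramping on one scalar penetration C.
def ramped_correction(C, tol=1e-9, max_passes=100):
    beta = 0.1
    history = []
    passes = 0
    while abs(C) > tol and passes < max_passes:
        d = beta * C            # only a fraction of the error this pass
        C -= d
        history.append(C)
        beta = min(beta + 0.1, 1.0)
        passes += 1
    return C, history

C, history = ramped_correction(1.0)
```

The early small betas are what avoid overshoot; by the time beta has ramped up, almost all of the remaining error is removed in a single pass.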

With this algorithm I can stack 8 square boxes at (30Hz, 5iter) and around 18 at (60Hz, 10iter). And I get accurate reaction forces and friction because the velocity constraints are solved cleanly. Most of the time there is just 1 position correction iteration.
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Just to make sure I get this clear: this is only a constraint post-stabilization step? We still run our normal PGS on the velocity constraints before this, though without Baumgarte stabilization?

Regarding the NGS: did you coin that name here, or is this a well-known mathematical method?

So the final method you present here differs from the Jakobsen method only in the fact that he actually does update his Jacobians while you keep them constant, right?

Then, in the context of contacts, I could imagine that contacts break during the iterations. How would we handle this?

One question about the ramping. You call the factor Baumgarte in this context, but isn't what you show here more of a damped Newton method? Would this ramping help in the classical PGS solver on the velocity constraints? I mean something like this:

OnPreSolve:
float slop = 0.2 * C / dt

OnSolve:
float beta = Min( beta + 0.1, 1.0 );
dv = -J * v - beta * slop


I have no doubt that this gives nice results, but a plain velocity-level LCP with Baumgarte is still the fastest you can do, right?



Cheers,
-Dirk
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

I posted the time stepping algorithm above, first there is a PGS for the velocities (with beta = 0), then a NPGS for the positions.

I did a google for "nonlinear gauss seidel" and got many hits, so NGS is not new. Also, look for nonlinear CG (which is very common).

To be specific: in my solver I am using fresh Jacobians to compute the impulse direction; however, I am not updating the effective mass. The nice thing about this algorithm is that you can choose to update the effective mass at whatever rate you want, or not at all. Just test it and see how it works.

Yes contacts get broken due to overshoot. For this you can use accumulated impulses (no warm starting) or just skip the contact if it separates. You can update the contact by storing local anchor points and a local normal. This is not as accurate as reprocessing the collision, but it is much faster.

new_separation = initial_separation + dot(p2 - p1, new_normal)

Here you could experiment with a fixed normal. I haven't tried that yet.
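For illustration, the anchor-based update might look like this in 2D. The body records and helper names are hypothetical; the point is just that the stored local anchors and local normal are re-derived in world space as the bodies move and plugged into the formula above.

```python
import numpy as np

# 2D rotation matrix for a body angle.
def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

# Transform a stored local anchor point into world space.
def world_point(body, local_point):
    return body["pos"] + rot(body["angle"]) @ local_point

def update_separation(initial_separation, body1, body2,
                      local_anchor1, local_anchor2, local_normal1):
    p1 = world_point(body1, local_anchor1)
    p2 = world_point(body2, local_anchor2)
    new_normal = rot(body1["angle"]) @ local_normal1  # normal follows body 1
    return initial_separation + np.dot(p2 - p1, new_normal)

# Example: body 2 lifts off body 1 by 0.1 along the contact normal (0, 1)
b1 = {"pos": np.array([0.0, 0.0]), "angle": 0.0}
b2 = {"pos": np.array([0.0, 0.1]), "angle": 0.0}
sep = update_separation(0.0, b1, b2,
                        np.array([0.0, 0.0]), np.array([0.0, 0.0]),
                        np.array([0.0, 1.0]))
```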

Yes in this context Baumgarte is similar to damped Newton. However, I think Baumgarte is more appropriate because of the mass weighting, which is not used in regular damped Newton.

I haven't tried ramping the beta for regular Baumgarte stabilization. This might improve things, but you will still have the velocity state pollution.

I think NPGS is better than regular split impulses. It works well for stacking and joints and it is potentially faster on average due to early termination and better convergence. Also, remember that NPGS is working against the position error before it is seen, while BPGS (Baumgarte PGS) is working against the old position error.

The bottom line is that we are paying a small amount of performance for a higher fidelity simulation. It might even be possible to reduce the number of velocity iterations when combined with NPGS.

One thing I should mention is that NPGS only sees the holonomic constraints - friction, restitution, and motors are not processed.

Finally, NPGS can overshoot contacts severely in deep penetration cases. Therefore, you should limit the size of the position deltas. This warrants further investigation.
mewert
Posts: 52
Joined: Sat Oct 08, 2005 1:16 am
Location: Itinerant

Post by mewert »

So they are using a nonlinear Gauss-Seidel solver in this paper then:

http://www.cs.columbia.edu/cg/ESIC/esic.html

Interesting. I wonder how well it works... cloth is so fragile, I'm afraid to try :) It would be good if it plays well with contact constraints, since it would probably end up being faster than a verlet/relaxation position-based solver due to only needing one collision check for good quality.
mewert
Posts: 52
Joined: Sat Oct 08, 2005 1:16 am
Location: Itinerant

Post by mewert »

Just to clarify my post: Jakobsen's "verlet/relaxation position-based solver" is also an NGS solver, just a first-order one ( it directly modifies position ), while the one proposed in the paper I mentioned is second order ( it works on velocity ). The position-based solver suffers from the contact configuration changing during your internal iterations, so you need to do collision checks as part of your GS iteration. With the velocity-based approach that shouldn't be the case; however, you probably do want to do a collision check once for every pass through the outer loop of the NGS.

position based:

do for n iterations
  enforce distance constraints, updating positions and constraints
  detect and enforce collision constraints, updating positions

velocity based:

while( constraints too violated )
  detect collisions
  do for n iterations
    enforce distance constraints, updating dx ( but not constraints )
    enforce collision constraints, updating dx ( but not constraints )
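A minimal runnable sketch of the velocity-based loop, using a single particle against a floor at y = 0. The setup is hypothetical and the collision "detection" is just a plane test; the point is the structure: detect once per outer pass, accumulate the correction dx in the inner iterations without touching the constraint data, then apply dx.

```python
import numpy as np

def step(pos, vel, dt, outer=10, inner=5):
    vel = vel + dt * np.array([0.0, -10.0])   # gravity
    pos = pos + dt * vel
    for _ in range(outer):
        if pos[1] >= 0.0:                     # detect collisions (floor test)
            break
        dx = np.zeros(2)
        for _ in range(inner):
            # enforce the floor constraint on dx ( but not on pos )
            c = pos[1] + dx[1]
            dx[1] += max(0.0, -c)
        pos = pos + dx                        # apply the accumulated correction
    return pos, vel

pos, vel = step(np.array([0.0, 1.0]), np.array([0.0, 0.0]), dt=0.5)
```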
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

Yes, in my position NGS loop I update contacts approximately by storing the local anchor points in the two bodies and computing the new separation as the bodies move. I suppose this is similar to the persistent manifolds used with GJK in Bullet.

My contact iteration update is nonlinear because I use the exact body rotations. I don't know how much of a benefit this is for contacts, but it seems to be very useful for joints, and I think it is best to be consistent with the position updates.
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

What is the benefit of using dv = (x_j - x) / h? And how is this used in the integrator? How are contacts and bending handled?

Do they integrate forward without constraints, considering only gravity and bending forces? Then satisfy the edge constraints next? Is this followed by a pass over the contact constraints? I understand how they find the correction for the constraints, but how does the whole timestepping work? I don't see the complete method or the framework into which it fits.

Michael, should we move this topic into its own thread, so we don't abuse Jan's discussion?
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

Michael, if you look at line 3 of their algorithm they use C(x_j), and on line 6 they are incrementing j. That says to me that they are recomputing C (it is always fresh when used). Their effective mass and Jacobians (gradients) are also updated. This is pure NGS. Wouldn't it be nice if they called it that?

Dirk, the velocity update at the end is a cheap way of applying velocity constraints. You won't get good static friction or reaction forces with this solver.
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Thanks Erin. And how do they handle bending?
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

I searched through the paper (Ctrl+F) for "bending" and found this:
V(x) is the stored energy (e.g., bending, shear, and gravity
:wink:
mewert
Posts: 52
Joined: Sat Oct 08, 2005 1:16 am
Location: Itinerant

Post by mewert »

Well, it's not strictly an NGS solver, since for "linear system (7)" you can solve for the Lagrange multipliers however you like, with a non-iterative sparse matrix solver or something. But it seems to me the structure of the matrix allows for an implementation of GS with an accumulation vector, which would be by far the most efficient way of solving the system. I think if I implemented this I would keep the Jacobian stationary and see if that works.
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine
Contact:

Post by Erin Catto »

Ah, you are right. How on earth can they call it a "Fast" method when they are solving the linear system exactly?

Still, they are using a standard method and putting their own label on it. It's just a Newton solver for constrained minimization.
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Thanks - I saw this as well, Erin :-), but I meant: do they find "candidate positions" with some integration method, e.g. symplectic Euler or, more probably, IMEX? (While we treat nearly everything but weak springs as constraints.) And then they use their correction scheme for the constraint forces? So it is what we have been doing for a long time, but with two differences:

a) They might use more sophisticated integration methods to find candidate positions
b) They use a Newton method as opposed to NGS, where they solve the "inner" linear system with maybe a CG method


I don't quite understand where they handle collisions then. Is this another phase after integration and constraint force correction?


And a final question to bring this thread back to the original discussion. Both Jan and the guys in the paper seem to use position constraints to adjust the velocity. So the difference from Erin's method (or ODE) is that they see position and velocity correction somehow as a combined problem, while Erin treats them separately with SI (velocity level) and NGS (position level). What is the physical justification for this? What about energy conservation, and how do we treat non-holonomic constraints then?
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany
Contact:

Post by Jan Bender »

Hi,

sorry for the late answer.
Erin Catto wrote:
I've been thinking about position projection. Your position correction is basically a projected Gauss-Seidel solver for a nonlinear inequality.
Yes, but only the iterative method. I also published papers about an SLE method and a linear-time method. In the next few days I want to integrate the linear-time method into IBDS, if you want to try it.

Erin Catto wrote:
minimize 1/2 d^T M d
such that C(x + d) >= 0

In your case x = x0 - 1/2*g*h^2 and d = h * v.

To solve this problem we need to compute Lagrange multipliers (the impulses). This leads to the system:

M * d - J^T * impulse = 0
C(x) + J * d >= 0
impulse >= 0

We can solve this system

d = inv(M) * J^T * impulse
K * impulse >= -C, impulse >= 0
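As an aside, the last system above is a linear complementarity problem that projected Gauss-Seidel handles directly. A minimal sketch, assuming K = J * inv(M) * J^T is symmetric positive definite and the example numbers are made up:

```python
import numpy as np

# Projected Gauss-Seidel: find impulse >= 0 with K * impulse + C >= 0
# and complementarity between the two.
def pgs(K, C, iters=100):
    impulse = np.zeros(len(C))
    for _ in range(iters):
        for i in range(len(C)):
            # residual of row i excluding the diagonal term
            r = C[i] + K[i] @ impulse - K[i, i] * impulse[i]
            impulse[i] = max(0.0, -r / K[i, i])  # project onto impulse >= 0
    return impulse

# Example: both contacts stay active, solution impulse = (1/3, 1/3)
lam = pgs(np.array([[2.0, 1.0], [1.0, 2.0]]), np.array([-1.0, -1.0]))
```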
So in these equations you just consider particles without rotation, and M is the mass matrix. Right?
Erin Catto wrote:
So this just addresses position errors. Do we still need to consider velocity errors? I think so for a number of reasons:
- velocity constraints let us handle restitution and friction (nonholonomic constraints).
- used properly, velocity constraints can supply us with reaction forces.
- we can use velocity constraints to prevent position errors from growing.
And velocity constraints are well suited to implement a possibility for the user to interact with the system.
In my interpretation, your solver works like this:
1) Use old velocity and gravity to update position.
2) Project the position errors using NPGS.
3) Use gravity to update velocity.
4) Project the velocity errors using PGS.
Roughly speaking my method works like that.


Erin Catto wrote:
So I propose the following algorithm:
1) Update velocity according to external forces.
2) Project the velocity errors using PGS.
3) Use the new velocity to update positions.
4) Project the position errors using NPGS (nonlinear PGS). This step would not use the velocity state at all.
I discussed that with Erwin at Siggraph. This only works with a symplectic integrator. I didn't use one because of the error you get, since one of the primary goals of my PhD was an accurate simulation. On the other hand, it is exactly what is proposed by Guendelman et al., and you don't get a PhD if you just do the same things as everybody else :wink:

I think my method also works fine and I want to continue with it. I also implemented Guendelman's approach a few years ago. Maybe I'll integrate that approach as another option in my library.

Jan
Post Reply