Impulse-based dynamic simulation library (IBDS)

Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany

Post by Jan Bender »

The problem is not the integration of the velocities, positions and rotations. The parabolic integrator works very well for us (in a system without springs) because there is never a continuous constraint force or impulse in the equation of motion; that is the point I wanted to make. The velocities are modified by impulses before the integration starts, and during the integration neither the velocity nor the acceleration changes. So we only have to integrate a pure ballistic motion (no constraints). The constraints are satisfied because of the velocity changes at the beginning of the step: we determine impulses that change the velocities there, computed so that the constraints are satisfied after a pure ballistic time step.

Of course the parabolic integrator can't be used for the rotation integration. I use a fourth-order Runge-Kutta method for that. By the way, how does the rotation integration work with a symplectic Euler? I want to implement symplectic Euler in IBDS. In any case my library allows the integrator to be exchanged at any time, so why not implement another one?

Jan
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Rotations are also integrated using symplectic Euler, but we ignore the Coriolis term for stability.
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany

Post by Jan Bender »

Dirk Gregorius wrote: Rotations are also integrated using symplectic Euler, but we ignore the Coriolis term for stability.
So you solve the equations:

angVel' = J^-1 * torque
q' = 0.5 * angVel * q

in this way

angVel(t+dt) = angVel(t) + J^-1(t) * torque(t) * dt
q(t+dt) = q(t) + 0.5 * angVel(t+dt) * q(t) * dt

Right?
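In code, that update might look like the following sketch (my own minimal version, not IBDS or Bullet code; quaternion stored as (x, y, z, w), world-space inverse inertia, and a renormalization after the step since the linearized quaternion update drifts off the unit sphere):

```python
import math

def quat_mul(a, b):
    # Hamilton product; quaternions stored as (x, y, z, w)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def symplectic_euler_rotation(q, ang_vel, inv_inertia, torque, dt):
    # 1) angular velocity first: w(t+dt) = w(t) + dt * J^-1 * torque
    w = tuple(ang_vel[i] + dt * sum(inv_inertia[i][j] * torque[j]
                                    for j in range(3))
              for i in range(3))
    # 2) orientation with the *new* velocity: q(t+dt) = q + dt * 0.5 * w * q
    dq = quat_mul((w[0], w[1], w[2], 0.0), q)
    q = tuple(q[i] + dt * 0.5 * dq[i] for i in range(4))
    # renormalize, since the linearized step leaves the unit sphere
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q), w
```

Ignoring the gyroscopic (Coriolis) term, as mentioned above, means J^-1 * torque is the only thing driving the angular velocity here.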

Jan
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Nearly: for integrating the orientation we use the exponential map of the angular velocity. See this discussion and follow Antonio's link, or look at the implementation in Bullet.

http://continuousphysics.com/Bullet/php ... ential+map
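For reference, a minimal sketch of that exponential-map step (my own reconstruction, not the Bullet implementation; quaternion stored as (x, y, z, w)). The angular velocity is mapped to a unit delta quaternion, so the result stays normalized and is exact for constant angular velocity:

```python
import math

def quat_mul(a, b):
    # Hamilton product; quaternions stored as (x, y, z, w)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def exp_map_step(q, w, dt):
    # Rotate q by angular velocity w over dt:
    # dq = (axis * sin(theta/2), cos(theta/2)) with theta = |w| * dt
    mag = math.sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2])
    theta = mag * dt
    if theta < 1e-12:
        return q  # negligible rotation this step
    s = math.sin(0.5 * theta) / mag
    dq = (w[0] * s, w[1] * s, w[2] * s, math.cos(0.5 * theta))
    return quat_mul(dq, q)
```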
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

Jan: I'm confused. You say the impulses have no effect on the accuracy of the solution, but the paper you referred me to analyzes the effect of the impulses on the accuracy of the solution.

I think I understand your method, but you haven't convinced me that the parabolic integrator is more accurate than symplectic Euler when you have constraints.

Dirk: I didn't find a big advantage to using the exponential map, so I used the cheaper method. What improvement did you see?
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Hah, I asked this on the forum a while ago here http://continuousphysics.com/Bullet/php ... ential+map, because I also don't see any advantage. So I wondered whether anybody could give me a real reason to use it. Unfortunately KenB only made some mystical statements about molecular dynamics, and when Antonio asked about it he didn't get an answer.

So sorry, I can't answer this. You might not need to normalize as often, but I doubt that can be considered a real advantage.

-Dirk
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany

Post by Jan Bender »

Erin Catto wrote: Jan: I'm confused. You say the impulses have no effect on the accuracy of the solution, but the paper you referred me to analyzes the effect of the impulses on the accuracy of the solution.
I say that the impulse has no effect on the accuracy of the integrator. The impulse is accounted for during integration; only the velocity of the body at the beginning of the integration changes.
Erin Catto wrote: I think I understand your method, but you haven't convinced me that the parabolic integrator is more accurate than symplectic Euler when you have constraints.
Since there is no continuous force or impulse in my equation of motion (even when there are constraints), the parabolic integrator is exact and very cheap to compute.

Jan
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

Jan: so using your method, you get an exact solution for a pendulum? That is better than any integrator I have ever heard of.
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany

Post by Jan Bender »

Erin Catto wrote: Jan: so using your method, you get an exact solution for a pendulum? That is better than any integrator I have ever heard of.
No, of course I don't get the exact solution. I would only get it if I could compute the exact impulse and if there were no rotation.
In my method I first compute a correction impulse using a prediction of the joint state. Roughly speaking this is a kind of integration of the constraint forces, and this process is not exact. Then the impulse is applied, changing the actual velocities. After that the velocities and positions are integrated, using the parabolic integrator for positions and a Runge-Kutta method for rotations. So in this case the parabolic integrator is exact; the error of the method occurs only when computing the impulse.
The advantage of the method is that you can guarantee a maximum distance of the joint points at every step. So no Baumgarte or other stabilization is required, since the method is self-stabilizing. In fact we have even used our method once to stabilize a Lagrange multiplier method.
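To make the prediction/correction idea concrete, here is a toy reconstruction for a unit point mass on a rod of fixed length anchored at the origin (my own simplification, not IBDS code): predict the ballistic end-of-step position, apply a velocity impulse so the prediction lands on the constraint, then integrate ballistically.

```python
import math

def step_pendulum(p, v, dt, length=1.0, gravity=(0.0, -9.81), tol=1e-9):
    # Predict the ballistic position at the end of the step and apply a
    # correction impulse to v so the prediction satisfies |p(t+dt)| = length.
    for _ in range(50):
        px = p[0] + dt * v[0] + 0.5 * dt * dt * gravity[0]
        py = p[1] + dt * v[1] + 0.5 * dt * dt * gravity[1]
        dist = math.sqrt(px * px + py * py)
        err = dist - length
        if abs(err) < tol:
            break
        # impulse along the constraint direction, scaled so the
        # predicted position lands back on the circle
        v = (v[0] - err * px / dist / dt,
             v[1] - err * py / dist / dt)
    # pure ballistic integration with the corrected velocity
    p = (p[0] + dt * v[0] + 0.5 * dt * dt * gravity[0],
         p[1] + dt * v[1] + 0.5 * dt * dt * gravity[1])
    v = (v[0] + dt * gravity[0], v[1] + dt * gravity[1])
    return p, v
```

Because the correction targets the end-of-step position directly, the joint distance stays bounded every step without any Baumgarte term.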

Jan
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

So if the solution is not exact, what is the order of the error in terms of h?

Do you have a guaranteed upper bound on the number of iterations necessary to guarantee a maximum distance of the joint points?

Have you tried simulating complex closed-loop mechanisms, such as the Bricard mechanism? The Bricard mechanism is known to be difficult to simulate due to its constraint redundancy.

http://www.tu-chemnitz.de/ifm/english/e ... ricard.htm
bone
Posts: 231
Joined: Tue Feb 20, 2007 4:56 pm

Post by bone »

Erin Catto wrote: Have you tried simulating complex closed-loop mechanisms, such as the Bricard mechanism? The Bricard mechanism is known to be difficult to simulate due to its constraint redundancy.

http://www.tu-chemnitz.de/ifm/english/e ... ricard.htm
Just to be clear, it is supposed to move, correct (because of the aforementioned constraint redundancy)? I'm curious which methods (such as Featherstone) can actually handle this.
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

Yes, the Bricard mechanism moves in an interesting way. In one configuration it resembles the perimeter of a box, and fully extended it has the shape of a 2D equilateral triangle.

I've only been able to simulate the Bricard mechanism efficiently with the joint-coordinate (Featherstone) model. The model was augmented with an explicit loop constraint, since joint-coordinate models don't handle loops in general.
Jan Bender
Posts: 111
Joined: Fri Sep 08, 2006 1:26 pm
Location: Germany

Post by Jan Bender »

Erin Catto wrote: So if the solution is not exact, what is the order of the error in terms of h?
The error is O(h^3). I hope I will be able to implement the higher-order methods correctly; then the error term will be smaller.
Erin Catto wrote: Do you have a guaranteed upper bound on the number of iterations necessary to guarantee a maximum distance of the joint points?
No, you can't guarantee something like that. But it is possible to simulate even complex models by interrupting the iteration after a few steps. For example, you can simulate a tree with 255 ball joints using just 5-10 iterations; of course then you can't guarantee the maximum distance. Another possibility is a kind of shock propagation after a few iterations when simulating collisions or contacts (see also the work of Guendelman).

For joints you can also use my SLE method or my linear-time method (for acyclic models). Then you usually get the correct impulses after 1-3 iterations when using a maximum distance of 1.0e-6 m.

Erin Catto wrote: Have you tried simulating complex closed-loop mechanisms, such as the Bricard mechanism? The Bricard mechanism is known to be difficult to simulate due to its constraint redundancy.
Yes, I also mentioned this model in my thesis. The iterative approach has no problems with loops. I just simulated this model using a time step size of 1/30 s; the SLE method was about 16 times faster than real-time.

Jan
Erin Catto
Posts: 316
Joined: Fri Jul 01, 2005 5:29 am
Location: Irvine

Post by Erin Catto »

I've been thinking about position projection. Your position correction is basically a projected Gauss-Seidel solver for a nonlinear inequality. We can write the problem as a mass-weighted constrained optimization problem:

minimize 1/2 d^T M d
such that C(x + d) >= 0

In your case x = x0 - 1/2*g*h^2 and d = h * v.

To solve this problem we need to compute Lagrange multipliers (the impulses). This leads to the system:

M * d - J^T * impulse = 0
C(x) + J * d >= 0
impulse >= 0

We can solve this system

d = inv(M) * J^T * impulse
K * impulse >= -C, impulse >= 0

where K is the effective mass matrix (J * inv(M) * J^T). To get the solution we need to invert K, but K can be big, so we use cheap Gauss-Seidel iterations to solve each constraint sequentially (perhaps block-wise). The additional trick is that we adjust x after each GS iteration and therefore recompute C(x) for each iteration of each constraint. This immediate update of C(x) is not possible if we directly invert the big K that governs the entire system. I suspect the immediate update of C(x) improves the convergence of NPGS (nonlinear PGS). In the field of optimization this technique is used for nonlinear conjugate gradients (NCG).
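As a toy illustration of that sequential relaxation with immediate C(x) updates (my own sketch, equality distance constraints only, in the spirit of Jakobsen's particle solver rather than the full inequality problem):

```python
import math

def solve_positions(points, inv_mass, constraints, iterations=20):
    # points: list of [x, y]; constraints: list of (i, j, rest_length).
    # Each pass relaxes one constraint at a time and writes the
    # mass-weighted correction straight back into the positions,
    # so later constraints see the updated C(x) (nonlinear PGS).
    for _ in range(iterations):
        for i, j, rest in constraints:
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            dist = math.hypot(dx, dy)
            if dist < 1e-12:
                continue
            c = dist - rest                      # constraint error C(x)
            w = inv_mass[i] + inv_mass[j]
            if w == 0.0:
                continue                         # both bodies static
            # mass-weighted split of the correction along the axis
            s = c / (dist * w)
            points[i][0] += inv_mass[i] * s * dx
            points[i][1] += inv_mass[i] * s * dy
            points[j][0] -= inv_mass[j] * s * dx
            points[j][1] -= inv_mass[j] * s * dy
    return points
```

A pinned body is simply given inv_mass = 0, so the anchor never moves while the free points are pulled onto the constraint manifold.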

NPGS is essentially what Thomas Jakobsen (and others before him) did with particle systems. The only difference is that a single particle distance constraint can be solved exactly. Many constraints cannot be solved exactly (while staying true to the mass weighting), so we use the linear approximation above. We have to iterate anyway because of the constraint graph, so linearization is not a huge loss.

We must be extra careful with inequality constraints like contacts and joint limits. The position projection can easily overshoot and lead to jitter, so we need to employ accumulated impulses, slop regions, and/or ramped Baumgarte factors. Yes, this is messy, heuristic, ad hoc, artificial, etc., but nonlinear inequality equations cannot be solved exactly in a reasonable amount of time. So we do what we've got to do.

So this just addressed position errors. Do we still need to consider velocity errors? I think so for a number of reasons:
- velocity constraints let us handle restitution and friction (nonholonomic constraints).
- used properly, velocity constraints can supply us with reaction forces.
- we can use velocity constraints to prevent position errors from growing.

In my interpretation, your solver works like this:
1) Use old velocity and gravity to update position.
2) Project the position errors using NPGS.
3) Use gravity to update velocity.
4) Project the velocity errors using PGS.

I am a little uncomfortable with this algorithm because:
- Step 1 makes the position errors larger than necessary: imagine a block sitting on a table. With a 1 second time step the block will fall 5 meters, and you have to resolve 5 meters' worth of nonlinear position error. Suppose you did not feed forces into the position update and the initial velocity obeyed the velocity constraint; then the initial velocity would be zero, and you would have zero position error to resolve. Velocity constraints are linear, so they are easier to solve than position errors.
- Correct me if I'm wrong, but it appears that you do not use a separate velocity state in Step 2. That means that the position projection affects the system's kinetic energy. This also means that you may see reaction impulses in step 4 that do not accurately reflect the correct reaction forces. This would affect the friction strength.

So I propose the following algorithm:
1) Update velocity according to external forces.
2) Project the velocity errors using PGS.
3) Use the new velocity to update positions.
4) Project the position errors using NPGS (nonlinear PGS). This step would not use the velocity state at all.

Notice that step 3 begins with an (approximately) admissible velocity, so the position error can be very small. Then step 4 can terminate iterations early if the position tolerance is not exceeded. Also, step 4 does not affect kinetic energy, so this will likely lead to more stable stacking, etc. Finally, as I have been harping on before, this is a symplectic integrator and will not generate energy at reasonable step sizes.
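The proposed four steps can be sketched for a single point mass on a unit rod (my own toy version, not production code): the velocity projection removes the radial velocity component, and the position projection at the end never touches the velocity state.

```python
import math

def step(p, v, h, length=1.0, g=(0.0, -9.81)):
    # 1) integrate external forces into the velocity
    v = (v[0] + h * g[0], v[1] + h * g[1])
    # 2) velocity projection: remove the component violating dC/dt = p . v = 0
    d = math.hypot(p[0], p[1])
    n = (p[0] / d, p[1] / d)
    vn = v[0] * n[0] + v[1] * n[1]
    v = (v[0] - vn * n[0], v[1] - vn * n[1])
    # 3) integrate positions with the (approximately) admissible velocity
    p = (p[0] + h * v[0], p[1] + h * v[1])
    # 4) position projection back onto |p| = length; the velocity
    #    state is left untouched, so kinetic energy is unaffected
    d = math.hypot(p[0], p[1])
    p = (p[0] * length / d, p[1] * length / d)
    return p, v
```

Because step 3 starts from a velocity that already satisfies the constraint, the residual position error in step 4 is small, which matches the argument above.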
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Post by Dirk Gregorius »

Erin,
what you describe is already implemented in Box2D SuperSplit, right? And what do you mean by ramped Baumgarte factors? Where is this used and how does it help?