Continuous Collision Detection and Interpenetration

Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Continuous Collision Detection and Interpenetration

Post by Dirk Gregorius »

Dear group,

When using discrete collision detection, the physics system needs to back out of the illegal state where the objects interpenetrate. Projection (à la Jakobsen) and penalty methods are AFAIK the most popular ones.

(1)
Can a continuous collision system like Bullet guarantee non-interpenetration, or does my dynamics system still need some fallback to handle interpenetration? IIRC K. Erleben mentions in his PhD thesis that there are always unavoidable interpenetrations.

(2)
If so, what are the best strategies? Is the penalty method really the best one, or is it just the most popular because of its ease of use? Are there any new strategies?

Regards,


-Dirk
Guest

Post by Guest »

1)
>>Can a continuous collision system like Bullet guarantee non-interpenetration?

A continuous collision detection system can give you a safe time of impact. In combination with some tolerance, this allows for zero-penetration physics. However, most solvers are not that accurate, and it is preferable from a performance perspective to allow some penetration.
Please note that you can control the exact amount of allowed penetration, which is very different from discrete systems, where you even run the risk of missing collisions completely. I recommend reading all of Redon's papers in this area.

I don't know about Kenny Erleben, but I think he still needs to be convinced about the feasibility of continuous physics :) He is on the forum, perhaps I can invite him to answer?

2) Bullet has a pair-wise penalty method solver, and the ODE quickstep solver, which is a successive overrelaxation (SOR) iterative LCP solver.
A pivoting LCP solver is stiffer, but usually slower. Then there is Lemke's method. Stephane Redon has another solving method, called Gauss' Least Constraint Principle; perhaps he can explain a bit.
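
For concreteness, here is a minimal sketch of the projected SOR/Gauss-Seidel iteration such a quickstep-style solver performs for a frictionless contact LCP. The names and the dense matrix layout are illustrative only, not ODE's or Bullet's actual code:

Code:

#include <cstddef>
#include <vector>
#include <algorithm>

// Projected SOR / Gauss-Seidel sketch for a frictionless contact LCP:
//   A * lambda + b >= 0,  lambda >= 0,  lambda^T (A * lambda + b) = 0
// where A is the effective-mass matrix and b holds the relative normal
// velocities at the contacts. Purely illustrative; not ODE or Bullet code.
void projectedSOR(const std::vector<std::vector<double> >& A,
                  const std::vector<double>& b,
                  std::vector<double>& lambda,   // contact impulses, in/out
                  int iterations,
                  double omega)                  // 1.0 recovers Gauss-Seidel
{
    const std::size_t n = b.size();
    for (int it = 0; it < iterations; ++it)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            double residual = b[i];
            for (std::size_t j = 0; j < n; ++j)
                residual += A[i][j] * lambda[j];
            // relaxed update, then projection onto the constraint lambda >= 0
            lambda[i] = std::max(0.0, lambda[i] - omega * residual / A[i][i]);
        }
    }
}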


There are a few links in http://www.continuousphysics.com/p that you might find useful.

Erwin
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

CCD

Post by Dirk Gregorius »

I invited Kenny. It would be great to discuss some of the iterative solvers and their pros and cons. Maybe we can move this into another thread. It would also be great if Stephane would introduce the Gauss method. In which of his papers does he introduce it?

-Dirk
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Continuous Collision Detection and Non-penetration

Post by kenny »

Here are my thoughts on the topics of continuous collision detection and non-penetration. In the following I will try to make things a little more black and white in order to sharpen some of my viewpoints; of course, in the real world things are grey-scale :-)

Let us first discuss things from a pure physics simulation viewpoint:

Firstly, doing physics-based simulation means that you have some approximation of the laws of physics. For instance, a contact manifold is often represented as a plane (due to a myopic view of the world) at the point of contact. Applying non-penetration constraints to planar contacts means that objects are prevented from moving beyond the plane at the point of contact. Thus, if the surface is of higher order than a plane, then penetrations can occur no matter how small a time step you take.
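
To make the planar-contact approximation concrete, here is a minimal sketch of a velocity-level non-penetration test at a single contact; the types and names are made up for illustration:

Code:

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Velocity-level non-penetration against a planar contact: the bodies may
// separate or slide along the plane, but must not approach along the normal.
// vA and vB are the velocities of the two contact points; n is the unit
// contact normal pointing from body A towards body B.
bool satisfiesNonPenetration(const Vec3& vA, const Vec3& vB, const Vec3& n)
{
    const Vec3 relVel = { vB.x - vA.x, vB.y - vA.y, vB.z - vA.z };
    return dot(relVel, n) >= 0.0;  // negative would mean further approach
}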

Secondly, if the order of your time-stepping scheme is lower than that of your spatial representation, you may overshoot. Fixed time-stepping schemes are often used in real-time interactive simulation in order to deliver ``predictable'' performance. This means you are using some variant of a first-order explicit/modified/implicit Euler scheme (possibly with second-order accuracy).
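
A minimal sketch of this kind of first-order scheme (semi-implicit/symplectic Euler for the linear part of the Newton-Euler equations); all names are illustrative:

Code:

struct Vec3 { double x, y, z; };  // same minimal vector as above

struct BodyState { Vec3 pos, vel; double invMass; };

// Semi-implicit (symplectic) Euler: update velocity from the force first,
// then position from the *new* velocity. First-order accurate; a fixed
// step dt gives the ``predictable'' performance mentioned above.
void stepSemiImplicitEuler(BodyState& body, const Vec3& force, double dt)
{
    body.vel.x += dt * force.x * body.invMass;
    body.vel.y += dt * force.y * body.invMass;
    body.vel.z += dt * force.z * body.invMass;

    body.pos.x += dt * body.vel.x;
    body.pos.y += dt * body.vel.y;
    body.pos.z += dt * body.vel.z;
}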

Thirdly, there is the problem of precision and round-off errors. One often deals with these using either thresholds or interval arithmetic.

To summarize:

1) Approximation of laws of physics
2) Approximation of surface representation
3) Discretization of time derivatives (integration of Newton-Euler equations)

If your motion or surface representation is of higher order than any of these approximations, then you can expect that time integration of your motion may lead to penetration. You can of course choose higher-order approximations, gaining accuracy at the cost of performance. However, this will not solve the problem; it will only reduce the number of cases where you get into trouble.

Finally there are two other sources that may lead to penetration, the obvious one being

4) Precision and round-off errors

and a more subtle one being

5) Future un-detected contacts

This has a lot to do with the time-stepping scheme; it is an effect easily seen for explicit fixed time-stepping schemes using large time steps (which is what is often used in game animation). It can lead to jittering/jumping if you correct errors by projection, or explosive behavior if you correct errors using stabilization. In extreme cases you may even get tunneling and overshooting artifacts.
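
For concreteness, here is a hedged sketch of the two error-correction styles for a penetration of depth d along a unit contact normal; the Baumgarte factor beta is an assumed tuning parameter, and the names are illustrative:

Code:

struct Vec3 { double x, y, z; };

// Projection moves the position straight out of penetration; done naively
// it can cause the jittering/jumping mentioned above.
void correctByProjection(Vec3& pos, const Vec3& n, double depth)
{
    pos.x += n.x * depth;
    pos.y += n.y * depth;
    pos.z += n.z * depth;
}

// Stabilization (Baumgarte-style) feeds the error back as extra velocity;
// too large a beta injects energy, giving the explosive behavior above.
void correctByStabilization(Vec3& vel, const Vec3& n, double depth,
                            double beta, double dt)  // beta e.g. 0.1 - 0.2
{
    const double bias = beta * depth / dt;  // error-feedback velocity
    vel.x += n.x * bias;
    vel.y += n.y * bias;
    vel.z += n.z * bias;
}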

All these artifacts are unattractive in real-time animation, where we do not have the luxury of decreasing the time-step size very much. Continuous collision detection is in my opinion a very interesting approach to solving these problems.

Continuous collision detection has, as far as I know, been around in graphics for fifteen years or so. Early work was done by people such as Cameron (Space-Time Bounds, '90) and Hubbard (Hyperbolic Approximations, '93), whose methods tried to compute a TOI based on a four-dimensional computation. Later Mirtich ('94-'96) introduced swept volumes and TOI computations based on a ballistic approximation of motion trajectories. Eberly ('00) extended OBB BVHs to handle first-point-of-intersection queries, using linear and angular velocities to predict object motion. Schmidl et al. ('02) solved for non-penetrating positions by minimizing the kinetic energy; their method required solving a large convex QP problem. Bridson et al. ('02) extended AABB BVHs for deformable objects to include proximity queries, using the linear velocity of particles to predict their trajectories. Redon and others ('00-'04) introduced screw motion and recently extended it to articulated figures. There has probably been much more work that I am not aware of. However, the methods divide into two groups:

A) Conservative Advancement
B) Interpolation Methods

Before discussing each of these groups of methods, I would like to make the observation that screw motion and using linear and angular velocities are strongly related. The major difference is that using angular velocities allows you to control the pitch of the screw motion. However, consider using these in animation, where large time steps are the name of the game. There is thus an effective upper bound on the angular velocity, beyond which you get temporal aliasing: it simply does not make sense to have an object rotate around itself more than once per frame, because you cannot see the difference anyway. So I would state that for animation purposes screw motion is better suited; it is cheaper, and the end user cannot see the difference from the more accurate velocity approach. However, if you are into accurate simulation, you probably do not want to use screw motion.
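
To make the screw-motion option concrete, here is a minimal sketch of evaluating a screw motion at an intermediate time t in [0,1]; the parameterization (an axis through a point, a total rotation angle, and a total slide along the axis) is one common choice for illustration, not Redon's exact formulation:

Code:

#include <cmath>

struct Vec3 { double x, y, z; };

// Rodrigues rotation of v about the unit axis u by angle a.
static Vec3 rotate(const Vec3& v, const Vec3& u, double a)
{
    const double c = std::cos(a), s = std::sin(a);
    const double d = (1.0 - c) * (u.x * v.x + u.y * v.y + u.z * v.z);
    Vec3 r = { c * v.x + s * (u.y * v.z - u.z * v.y) + d * u.x,
               c * v.y + s * (u.z * v.x - u.x * v.z) + d * u.y,
               c * v.z + s * (u.x * v.y - u.y * v.x) + d * u.z };
    return r;
}

// Evaluate a screw motion at parameter t in [0,1]: rotate the point p by
// totalAngle * t about an axis (unit direction 'axis' through 'origin')
// while sliding totalSlide * t along that same axis.
Vec3 screwInterpolate(const Vec3& p, const Vec3& origin, const Vec3& axis,
                      double totalAngle, double totalSlide, double t)
{
    const Vec3 local = { p.x - origin.x, p.y - origin.y, p.z - origin.z };
    const Vec3 r = rotate(local, axis, totalAngle * t);
    Vec3 out = { origin.x + r.x + axis.x * totalSlide * t,
                 origin.y + r.y + axis.y * totalSlide * t,
                 origin.z + r.z + axis.z * totalSlide * t };
    return out;
}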

Conservative methods compute the smallest TOI (time of impact), advance time to that TOI, perform collision resolving, and finally recompute TOIs before advancing once again.

1) High-speed spinning objects tend to result in small TOI values, thus making the simulator step ahead with small time steps. Consider Mirtich's method: if you have a long, oblong object, then r_max will be large, so even though the angular velocity is small, the ``maximum speed'' can become quite large, resulting in unrealistically small TOIs.

2) Objects settling down to rest cause the duration between TOIs to go to zero in the limit. At some point your simulator will never get around to actually stepping time ahead, but will keep on recomputing the same TOI value. This problem has been treated by Mirtich, who introduced collision envelopes and micro-collisions. As I recall, Redon had a similar strategy based on thresholding. However, this is an inherent problem with conservative advancement. The solutions do not really solve the problem; they only put a lower bound on the time-step size. A simulator based on these concepts can in my experience always be made to halt (just make a high enough stack :-).

3) Conservative advancement calls for adaptive time-stepping schemes in order to deal with the n-body problem correctly. For example, consider a ball moving towards a wall: the TOI between ball and wall ensures that no tunneling will occur, and a perfect bounce will happen. Now add to the complexity by dropping a box from the top of the wall, such that the box will collide with the ball after the ball bounces back from the wall. To handle this you need to compute a TOI between the box and the ball at the instant the ball bounces back. Using adaptive time-stepping means that you run the risk of stepping ahead with very small time steps, doing almost nothing. In the context of animation this is pretty much pointless: observers are not likely to see all the little details which occur at rates of, say, 1e-2 seconds or lower. Thus, you only need the behavior of objects to look right at a coarser scale (> 1e-2 seconds). I believe interpolation methods are great for this.

The problem with interpolation methods is that you need to decide what kind of interpolation fits your needs. As described above, I think screw motion is best suited for animation purposes.

One problem with interpolation methods, though, is that things get a bit strange if objects are penetrating at the starting and ending times. In that case, how would one devise an interpolation of the objects that results in non-penetration? The problem may sound contrived, but it is not: in game-like worlds users are allowed to add and manipulate objects in unphysical ways, so creating a box halfway inside another box is quite common.

But how do interpolation methods compare to what we do in ``dynamics''? Consider using an implicit time-stepping scheme. Such a scheme is capable of seeing future contacts; these appear as an extra term which is added to the right-hand side of a velocity-based complementarity formulation. In other words, they act as a kind of TOI estimate. Sauer and Schömer did something like this a few years ago. The implicit scheme will guarantee that none of these future contacts are violated (actually, the input to the implicit scheme is the object states at two discrete instants in time). In my opinion this is reminiscent of interpolation methods. In practice, objects appear to have a repel envelope. If non-zero restitution is used, objects will bounce at the future contacts, and their altered motion may lead to new penetrations at other undetected contacts. To remedy these problems, people either make their method fully implicit using a fixed-point time-stepping scheme, or they use adaptive time-stepping. In real-time interactive animation both choices are undesirable due to performance. In conclusion, continuous collision detection in terms of interpolation methods already exists in the world of dynamics, although the n-body problem is awkward. As a side comment, adding a TOI term to the right-hand side of a complementarity formulation increases the order of convergence, but it does not solve the problems.
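
For concreteness, the extra term can be written as follows. This is one common velocity-level form; the notation is illustrative, not necessarily Sauer and Schömer's. Here J_i is the contact Jacobian row, v^+ the end-of-step velocity, \lambda_i the contact impulse, h the time step, and d_i >= 0 the gap of a future contact measured at the start of the step:

Code:

% velocity-level complementarity condition at contact i,
% with the gap d_i of a future contact on the right-hand side
0 \le \lambda_i \;\perp\; J_i v^{+} + \frac{d_i}{h} \ge 0

The d_i/h term is precisely the TOI-like right-hand-side contribution: a contact may close its remaining gap during the step, but not go beyond it.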

To end this post I would like to make some comments on what is typically done in real-world engines. Here velocities and forces are typically clamped with an upper bound. The benefits are often twofold: you can control the maximum possible penetration, and you can suppress some of the sources of numerical problems. Often several layers of envelopes are applied: one small envelope where penetrations are dealt with using stabilization (i.e. penalty forces), and another, deeper and larger envelope where penetrations are dealt with using projection. It's like an onion, where each layer has its own method for dealing with penetrations; see the sketch below. Also, some engines run at a lower internal rate (typically 1e-2 seconds) than the frame rate, thus keeping the time-step size down and ensuring bodies do not move too much within a time step. The idea of continuous collision detection in the game animation world is as far as I know rather new. I only know of Havok and Novodex who do it right now. I have absolutely no idea what they do in Novodex; most of the demos seem tailored for 2-body TOIs, but I am making wild guesses here, ha ha :-)
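
A hedged sketch of the onion idea; the depth thresholds, stiffness, and the specific per-layer responses are illustrative tuning choices, not any particular engine's values:

Code:

struct Vec3 { double x, y, z; };

// Layered ("onion") penetration handling: a thin inner envelope handled
// with a penalty force, a deeper layer handled by projection. The depth
// thresholds and the stiffness are illustrative tuning values only.
void resolvePenetration(Vec3& pos, Vec3& force, const Vec3& n, double depth)
{
    const double penaltyLayer    = 0.005;   // up to 5 mm: penalty force
    const double projectionLayer = 0.05;    // up to 5 cm: hard projection
    const double stiffness       = 1.0e4;

    if (depth <= 0.0)
        return;                             // separated; nothing to do

    if (depth < penaltyLayer)
    {
        force.x += n.x * stiffness * depth; // spring-like push-out
        force.y += n.y * stiffness * depth;
        force.z += n.z * stiffness * depth;
    }
    else if (depth < projectionLayer)
    {
        pos.x += n.x * depth;               // project straight out
        pos.y += n.y * depth;
        pos.z += n.z * depth;
    }
    // deeper than that, something went badly wrong; an engine might
    // teleport the body to a safe state or flag it as unrecoverable.
}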

/Kenny
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

>>Conservative methods compute the smallest TOI (time of impact), advance time to that TOI, perform collision resolving, and finally recompute TOIs before advancing once again.

The conservative advancement method can do iterations internally, without performing collision resolving. So it can compute the time of impact exactly, within any epsilon. However, making the epsilon too small results in many iterations. Also, indeed, the angular term can make the number of iterations grow considerably.
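
A minimal sketch of that internal iteration for a single pair, under the usual bound on the approach speed (relative linear speed plus angular speed times r_max); closestDistance() is an assumed black box, e.g. a GJK query at interpolated poses:

Code:

// Conservative advancement for one pair over the step [0, dt]. The
// approach speed is bounded by the relative linear speed plus
// |omega| * r_max per body, so each sub-step is guaranteed not to tunnel.
double conservativeAdvancement(double dt, double epsilon,
                               double relLinearSpeed,
                               double angSpeedA, double rMaxA,
                               double angSpeedB, double rMaxB,
                               double (*closestDistance)(double t))
{
    const double maxApproachSpeed =
        relLinearSpeed + angSpeedA * rMaxA + angSpeedB * rMaxB;
    if (maxApproachSpeed <= 0.0)
        return dt;                         // not approaching at all

    double t = 0.0;
    while (t < dt)                         // a real engine would also cap
    {                                      // the number of iterations
        const double d = closestDistance(t);
        if (d < epsilon)
            return t;                      // time of impact within epsilon
        t += d / maxApproachSpeed;         // safe advance: cannot overshoot
    }
    return dt;                             // no impact inside this step
}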

Also I recommend Time Warp for time stepping, in case time of impact is used. It breaks the dependencies, and allows for parallel implementation.

As Kenny describes, introducing an envelope can help a lot for stacking objects. Either envelope or a maximum 'allowed' penetration depth.
Stephane Redon
Posts: 14
Joined: Mon Jun 27, 2005 12:10 pm
Location: INRIA

Re: Continuous Collision Detection and Non-penetration

Post by Stephane Redon »

kenny wrote: Continuous collision detection has, as far as I know, been around in graphics for fifteen years or so. Early work was done by people such as Cameron (Space-Time Bounds, '90) and Hubbard (Hyperbolic Approximations, '93), whose methods tried to compute a TOI based on a four-dimensional computation. Later Mirtich ('94-'96) introduced swept volumes and TOI computations based on a ballistic approximation of motion trajectories. Eberly ('00) extended OBB BVHs to handle first-point-of-intersection queries, using linear and angular velocities to predict object motion. Schmidl et al. ('02) solved for non-penetrating positions by minimizing the kinetic energy; their method required solving a large convex QP problem. Bridson et al. ('02) extended AABB BVHs for deformable objects to include proximity queries, using the linear velocity of particles to predict their trajectories. Redon and others ('00-'04) introduced screw motion and recently extended it to articulated figures.
Hey all,

my two cents on that :-). If "continuous collision detection" means ensuring that no collision will be missed and that the contact time will be computed, then some references above might not be exactly continuous: if I understand correctly, Dave Eberly's implementation uses an iterative method similar to using a fine discretization of the time interval. This can lead to missed collisions. Another problem (again, if I understand correctly; it would be nice to have Dave's reply :-)) is that a differential equation has to be solved with an iterative method, which can lead to differences between the motion used for collision detection and the motion used to update the objects' positions. Regarding Harald Schmidl's (very good) method, it indeed allows one to remove interpenetration thanks to a QP, but only once future contacts have been detected (as mentioned by Kenny), which might not make it a truly continuous method (again, maybe a definition problem?).

Also, although I agree that screw motions are probably a very efficient way to represent motions, since fewer parameters are required when the rotation axis and translation axis are aligned, it seems they might create problems for some types of fast motion. For example, say a shuriken in a game is moving fast, with a rotation axis orthogonal to its translational velocity. This cannot be represented by a screw motion, because the rotation and translation axes are not aligned. The screw motion which would have the same initial and final positions would make the shuriken deviate from its linear path (the center of gravity would not move on a straight line). Figure 7 in http://www.inrialpes.fr/i3d/people/redo ... se2004.pdf shows the problem. Of course, you might not care about the rotation of the shuriken in a game.
Stephane Redon
Posts: 14
Joined: Mon Jun 27, 2005 12:10 pm
Location: INRIA

Gauss' least constraint principle

Post by Stephane Redon »

Gauss' principle basically states that, in a frictionless system, the constrained accelerations of a system of rigid bodies are the closest possible to their unconstrained accelerations. "Closest" is meant in the sense of the kinetic distance, a distance between accelerations weighted by the masses and inertia tensors of the objects, and "possible" means "among the accelerations that satisfy the constraints". It is basically the projection of a (generalized) n-dimensional motion onto the set of admissible n-dimensional motions.
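
In symbols (illustrative notation, not necessarily Stephane's): with generalized mass matrix M, applied generalized forces f, and \mathcal{A} the set of accelerations admissible under the constraints, Gauss' principle selects

Code:

% Gauss' principle of least constraint: the constrained acceleration
% minimizes the kinetic distance to the unconstrained one, M^{-1} f
a^{*} = \arg\min_{a \in \mathcal{A}}
        \tfrac{1}{2}\,(a - M^{-1}f)^{\top} M\,(a - M^{-1}f)

i.e. the kinetic-metric projection of the unconstrained acceleration M^{-1}f onto \mathcal{A}, exactly as described above.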

This is an (in my opinion :-)) extremely elegant way to formulate frictionless constrained dynamics problems. It is mathematically equivalent to the LCP approach, which formulates the problem in the contact space (since the unknowns are the contact forces and accelerations), but might have some algorithmic advantages.

In my implementation, Gauss' principle is used to compute the constrained motion at the time of impact (or at the beginning of the timestep, when there has been no new impact, but there is a resting contact).
Erwin Coumans

Post by Erwin Coumans »

In answer to Kenny Erleben's three problems... in fact, I think the problem reduces to the same thing in each case: how to handle small TOIs.

>>1) High speed spinning objects [...] The ``maximum speed'' can become quite large, resulting in unrealistic small TOIs.

Not true in my opinion. The TOI can still be correctly calculated. However, more internal iterations are needed, so the calculation is more expensive.

>>2) A simulator based on these concepts can in my experience always be made to halt (just make a high enough stack:-).

I don't agree. Basically, the error of the solver has to be within the envelope/tolerance. A pivoting solver should bound the error properly. If your solver is an iterative solver, all you need to do for a higher stack is increase the number of iterations, or increase the quality another way (preconditioning). So it just becomes a bit slower, but hey, you wanted a higher stack. Just add a multiprocessor machine to the task. Another solution, which DOOM 3 uses, is to never integrate the position into penetration. This leads to visible delays, but doesn't cause a halt.

>>3) Conservative advancement calls for adaptive time-stepping schemes in order to deal with the n-body problem correctly. [...] very small time steps, doing almost nothing.

This can be solved: Time warp splits the groups, so it is not a global dependency problem. Then again, for the small time steps, there is the envelope/tolerance which can be tweaked.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

So Stephane, did you deal with stacking in combination with continuous collision detection (avoiding missed collisions)?
How did you deal with the inaccuracies of the solver? Kenny states that creating a higher stack will eventually create overlap bigger than the envelope. I recall you used a hard 'repositioning' when the overlap is too big?

Might this be similar to Kenny's shock propagation?
Stephane Redon
Posts: 14
Joined: Mon Jun 27, 2005 12:10 pm
Location: INRIA

Post by Stephane Redon »

I do not have a specific strategy with regard to stacking. However, there is indeed a repositioning strategy similar to the one proposed by Baraff, which might help. Although the conservativeness of interval arithmetic ensures (if correctly implemented) that there won't be interpenetrations, it might be a problem to let the objects get too close to each other, because the interpolatory motion used for collision detection might not satisfy the constraints over the timestep.

Thus, objects are maintained slightly distant from each other: whenever the distance between two objects becomes smaller than some threshold e_r (easy to track once a contact point has been detected), the algorithm enters a "repositioning cycle" which slightly moves the objects in order to (attempt to) reach a separation distance e_s > e_r. This is done with the algorithm used in the constrained-dynamics solver. Then, when the constrained accelerations are computed, this security distance is included in the constraints to help maintain the small distance between contacting objects. This way, collisions between two objects that are already in contact occur much less frequently (they would otherwise happen for several of the reasons mentioned by Kenny: linearization of the constraints, discrete integration of the dynamics equations...). I believe this might help for stacking: when an object collides with a stack of objects, the small security distance enforced between pairs of objects already in contact helps avoid new collisions which would have small TOIs.
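
A hedged sketch of the repositioning cycle for one contacting pair; the direct push along the normal is illustrative only (the actual implementation reuses the constrained-dynamics solver, as described above):

Code:

struct Vec3 { double x, y, z; };

// Illustrative repositioning cycle: whenever the separation drops below
// e_r, push the objects apart along the contact normal n (pointing from
// A to B) until they reach e_s > e_r.
void repositioningCycle(Vec3& posA, Vec3& posB, const Vec3& n,
                        double distance, double er, double es)
{
    if (distance >= er)
        return;                            // still safely separated
    const double push = 0.5 * (es - distance);
    posA.x -= n.x * push; posA.y -= n.y * push; posA.z -= n.z * push;
    posB.x += n.x * push; posB.y += n.y * push; posB.z += n.z * push;
}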
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Re: Continuous Collision Detection and Non-penetration

Post by kenny »

Stephane Redon wrote: If "continuous collision detection" means ensuring that no collision will be missed and that the contact time will be computed, then some references above might not be exactly continuous... snip ...again, maybe a definition problem?
Good point; I was working with the definition: given two instants in time (the starting and ending frames), what goes on in between? In that sense, the people cited in my previous post all did work related to that question.
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Re: Gauss' least constraint principle

Post by kenny »

Stephane Redon wrote: This is an (in my opinion :-)) extremely elegant way to formulate frictionless constrained dynamics problems. It is mathematically equivalent to the LCP approach, which formulates the problem in the contact space (since the unknowns are the contact forces and accelerations), but might have some algorithmic advantages.
I agree, a few comments though:

Interactive worlds need friction.

As I recall the formulation is done in object space, which may be advantageous since the number of objects is likely to be much smaller than the number of contact points. However you need to run collision detection to detect the contacts in the first place, so the overall performance (I believe) of such a simulator is going to be dominated by the number of contact points anyway.

On the other hand, in most configurations (in my experience) the number of contact points is often some constant times the number of objects (10n works great for me when I pre-allocate storage for contacts :-)
kenny
Posts: 32
Joined: Thu Jun 30, 2005 8:49 am
Location: Denmark

Post by kenny »

Erwin Coumans wrote: Not true in my opinion. The TOI can still be correctly calculated. However, more internal iterations are needed, so the calculation is more expensive.
Sure, you can do more internal iterations. However, you need to control how many iterations you are willing to take. You have the same problems and tradeoffs as in every iterative root-finding algorithm.

So yes, you can get an accurate TOI within some prescribed epsilon, but can you guarantee to always deliver the result within, say, 5 iterations?

Another problem with taking internal iterations is that this may become a source of parameter tuning. I am a strong believer in trying to create out-of-the-box simulators, in the sense that they work and deliver high performance no matter what you throw in their face. As an example, many physics game engines are known for requiring people to manually tweak mass ratios and external forces in order not to cause blow-up, and do I need to say anything about penalty methods? :-)
Erwin Coumans wrote: I don't agree. Basically, the error of the solver has to be within the envelope/tolerance. A pivoting solver should bound the error properly. If your solver is an iterative solver, all you need to do for a higher stack is increase the number of iterations, or increase the quality another way (preconditioning). So it just becomes a bit slower, but hey, you wanted a higher stack. Just add a multiprocessor machine to the task. Another solution, which DOOM 3 uses, is to never integrate the position into penetration. This leads to visible delays, but doesn't cause a halt.
It seems to me that you are assuming we are talking about constraint-based simulation. I am not; I was talking generally about how TOIs behave.

In a high stack, boxes at the bottom tend to be squeezed more closely together (smaller separation) than at the top. This means separation distances go to zero while the frequency of impulse trains goes up. Of course, in the limit such contacts are ``resting'' (static) contacts, and it makes sense to deal with these using constraint-based simulation.

So, I guess the question here is how to combine a constraint-based simulation and a conservative advancement scheme based on TOIs. I assume that you would ignore TOIs from static contacts and only consider dynamic (colliding) contacts.

The problem with this, as I see it, is how you determine which contacts are static. Of course you can exploit the results from your constraint solver to do this, but please elaborate :-)

Regarding your comments on performance and quality: iterative methods based on matrix splitting all have linear convergence. This sucks big time; moreover, you cannot predict the actual convergence speed, since it varies with your configuration. Besides, finding a preconditioner for a matrix-splitting scheme is difficult (I have tried for some time :-), especially since methods such as Gauss-Seidel and Jacobi are preconditioners themselves (look up residual error correction methods). To summarize, using something like 10e6 iterations will not be good enough, and no preconditioner really exists.

Direct methods (i.e. pivoting methods) tend to have quadratic convergence, and as such they are very attractive. However, the cost of each iteration is often similar to solving a linear system.
Erwin Coumans wrote: This can be solved: Time warp splits the groups, so it is not a global dependency problem. Then again, for the small time steps, there is the envelope/tolerance which can be tweaked.
Yeah, you can break dependencies between independent groups. And yes, time-warp seems to be a very good idea for conservative advancement. However, I want to do simulations with 10,000 objects all in mutually dependent contact. In such a context, time-warp will just give you a big book-keeping overhead.

To summarize:

Conservative advancement based on TOIs combined with constraint-based simulation may solve the problems Mirtich had with impulse-based simulation in this context. However, I would like some details on such a scheme :-)

I have a hard time seeing how accurate TOIs can be computed while at the same time keeping a bound on performance. If an epsilon solution is used, I have some difficulty seeing how this would not end up in endless parameter tuning.
Stephane Redon
Posts: 14
Joined: Mon Jun 27, 2005 12:10 pm
Location: INRIA

Re: Gauss' least constraint principle

Post by Stephane Redon »

kenny wrote: On the other hand, in most configurations (in my experience) the number of contact points is often some constant times the number of objects (10n works great for me when I pre-allocate storage for contacts :-)
Yes, plus using continuous collision detection typically reduces the number of contact points greatly, because you maintain the objects slightly separated, and because at most 6 contact points can be independent between any two objects (since there are six degrees of relative freedom) :-)
Mark Wayland
Posts: 5
Joined: Tue Jun 28, 2005 3:05 am
Location: Torus Games

Re: Continuous Collision Detection and Non-penetration

Post by Mark Wayland »

kenny wrote:The idea of continuous collision detection in the game animation world is as far as I know rather new. I only know of Havok and Novodex who do it right now.
Whilst I'm no physics expert, I can say that we used continuous collision in Carmageddon TDR2000, which was started in 1998; I'm not sure if that ranks as 'new' or not? Actually, we used it prior to that in a little-known Australian game called 'Dick Johnson's Touring Car Challenge'. I think Carmageddon 2 (not developed by us) used continuous collision too, but I can't be sure; if so, they pre-date our use of it.

In our implementation we didn't model rotation between timesteps, and the collision sweeping was mostly linear. The solver used Gaussian elimination and was impulse-based, and it certainly worked well enough for us at the time. In all honesty, I don't know much more than this, as I wasn't the physics/collision guy (I did the renderer).

FYI, I know that TrueAxis physics middleware uses continuous collision, and the guy that did our Carmageddon physics and collision is the brains behind it.

Cheers,
Mark