Minimum Delta for proceeding 1 step

cippyboy
Posts: 36
Joined: Fri Aug 25, 2006 1:00 am
Location: Bucharest

Minimum Delta for proceeding 1 step

Post by cippyboy »

At 1.0f/20.0f, or 20 frames per second, objects start to shiver. I see that it can be fixed by increasing the substeps, but that turns out to be really slow in the end. What is the best option, besides limiting the delta to something like 1/30?
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

It's best to just stick with a fixed timestep of 60 hertz.

You can decouple the graphics rendering from the physics timestep, by interpolating (based on velocity), to allow variable framerates.

It's probably better to optimize the physics so it runs fine at 60 hertz. How many active objects do you have?
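The interpolation idea can be sketched like this. `State` and `renderPosition` are illustrative names, not Bullet API; in Bullet the real values would come from calls like btRigidBody::getCenterOfMassPosition() and getLinearVelocity():

```cpp
#include <cassert>

// Hypothetical minimal state: in a real program this would be read back
// from the physics library after each fixed 60 Hz step.
struct State { float pos; float vel; };

// Extrapolate the last physics state forward by the time elapsed since the
// physics was last stepped, so rendering stays smooth at a variable frame
// rate even though the simulation ticks at a fixed rate.
float renderPosition(const State& s, float timeSinceLastPhysicsStep)
{
    return s.pos + s.vel * timeSinceLastPhysicsStep;
}
```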
cippyboy
Posts: 36
Joined: Fri Aug 25, 2006 1:00 am
Location: Bucharest

Post by cippyboy »

I only have 2 (for the moment, for testing), but I tried the CcdPhysicsDemo with something like 1/5 and the objects started shivering like little soldiers, as if they were in the Salvation Army :D

EDIT: It reminds me of my old collision detection algorithm: if I tried to push into a wall, I would jump back and forth until eventually I'd pass through the wall and fall over :lol: That was mostly because I didn't know exactly where to put the object so that it wouldn't collide with the wall the next time, heh.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

Are you using a 50/60 hertz fixed timestep?

Please modify a Bullet sample so that it demonstrates your problem, but don't use a bigger timestep than 0.02. How does CcdPhysicsDemo perform on your platform? It has 120 objects, and pressing 'd' disables sleeping (so they are all active).

Thanks,
Erwin
SteveBaker
Posts: 127
Joined: Sun Aug 13, 2006 4:41 pm
Location: Cedar Hill, Texas

Post by SteveBaker »

Erwin Coumans wrote:It's best to just stick with a fixed timestep of 60 hertz.

You can decouple the graphics rendering from the physics timestep, by interpolating (based on velocity), to allow variable framerates.

It's probably better to optimize the physics so it runs fine at 60 hertz. How many active objects do you have?
Presumably this advice assumes some 'normal' range of object sizes and masses. To take an extreme example, if you were simulating something like two galaxies colliding - then a timestep of a million years might be about right.

For objects of 'human scale', I think 1/60th is OK - but when you try to work with smaller, lighter objects, you need smaller steps and for larger, heavier objects a larger timestep should suffice.

...or is there something I misunderstand here?
cippyboy
Posts: 36
Joined: Fri Aug 25, 2006 1:00 am
Location: Bucharest

Post by cippyboy »

CcdPhysicsDemo works fine with a constant delta of 1/60; the problem appeared when I changed the delta to 1/5 (= 0.2, about 10 times the value you said was already bad :D).

I now just give it the time that passed since the last frame, running at about 200-300 FPS, but when something big happens and the rate drops to around 20 FPS, it gives a delta of 0.05 and the object (a sphere in my case) just flies into the ground. So I have to cap the delta at some maximum timestep, and I was asking what that cap should be; I use 1/30.

The alternative was to increase the substeps, but then the processor dies.
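The delta cap described here can be sketched as a small helper (a hypothetical function, not part of Bullet; note that Bullet's stepSimulation(timeStep, maxSubSteps, fixedTimeStep) effectively bounds the simulated time in a similar way):

```cpp
#include <algorithm>
#include <cassert>

// Cap the measured frame delta before feeding it to the physics. 1/30 s is
// the cap used in this thread; whenever the real frame time exceeds it, the
// simulation simply runs in slow motion instead of tunnelling into the ground.
float clampDelta(float frameDelta, float maxDelta = 1.0f / 30.0f)
{
    return std::min(frameDelta, maxDelta);
}
```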
SteveBaker
Posts: 127
Joined: Sun Aug 13, 2006 4:41 pm
Location: Cedar Hill, Texas

Post by SteveBaker »

Ideally, use the same step size every iteration - but with an interactive graphics display, your graphical frame rate may vary up and down, and if your program is only iterating at (say) 20Hz then everything will appear to run in slow motion. When your frame rate speeds up (because maybe there is very little to draw), everything will move faster than it should.

This effect is very upsetting in a game - and should be avoided.

There are several ways to address this:

1) Run the physics with a variable time step, so that if (say) 20ms elapsed since your last call to the physics code, you just pass that to the physics library. Theoretically this ought to be OK - but in practice you definitely get very bad results from updating the physics too infrequently, and with some physics packages the results also get flaky when you update it more frequently. So this is probably a bad idea.

2) You could decide on a fixed physics frame time (say 16ms), measure how long actually went by, subtract one from the other, and accumulate the amount of 'temporal error'. When that gets to be more than 16ms, call the physics code twice in that frame instead of once. If the error gets to be negative 16ms or more, then skip the physics in that iteration. This has two bad effects.

One is that objects in the graphics tend to 'jitter' when you either skip the physics or run it twice in one frame. This can be fixed by extrapolating the position of each object that comes out of the physics code forwards in time, to when the graphics are expected to finish rendering (typically using a simple velocity-based extrapolator).

The second problem is that you can get into the dreaded "Spiral of Death". When the main iteration rate is taking too long, you have to run the physics twice in each frame. However, that makes the frame take even longer - so next time around your frame is even longer still. That can force you to do physics THREE times on the following frame... and that can cause your frame time to spiral out of control, to the point where your CPU is only doing physics and there are no graphical updates at all! To fix this, simply limit the amount of time that you let the graphics get ahead of the physics, and just have things 'jump' when the delay gets to the point where you'd have to run the physics three or more times.

3) You could attempt to determine totally fixed timing for everything - and just tune to the worst case you ever find in your application. A lot of video console games do this because they know exactly what hardware they are running on and absolutely every player has the same exact hardware. However, this approach is generally hopeless when you can't test on every single hardware setup you'll ever need to support.

You can also do a mixture of (1) and (2) - where you use variable rates up to some limit - then call the physics more than once.

There is no elegant solution, sadly.
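The accumulator scheme in option (2), including the cap that defuses the Spiral of Death, can be sketched in C++. `FixedStepper` and its fields are hypothetical names, not Bullet API; a real stepper would call the physics library once per tick where the comment sits:

```cpp
#include <cassert>

// Sketch of the fixed-step accumulator described above: run the physics in
// fixed ~16 ms ticks, and cap the number of catch-up ticks per frame so a
// slow frame cannot spiral into ever more physics work.
struct FixedStepper {
    float accumulator = 0.0f;
    const float fixedDt = 0.016f;   // ~60 Hz physics tick
    const int   maxCatchUp = 2;     // beyond this, let time 'jump'

    // Returns how many physics steps were taken this frame.
    int advance(float frameDelta)
    {
        accumulator += frameDelta;
        int steps = 0;
        while (accumulator >= fixedDt && steps < maxCatchUp) {
            accumulator -= fixedDt;
            ++steps;                // the real physics step would go here
        }
        if (steps == maxCatchUp)
            accumulator = 0.0f;     // drop the backlog instead of spiralling
        return steps;
    }
};
```

Between steps, the leftover `accumulator` is exactly the extrapolation time the velocity-based smoothing above needs.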