The declaration of stepSimulation() looks like this:
Code:
virtual int stepSimulation(btScalar timeStep, int maxSubSteps = 1, btScalar fixedTimeStep = btScalar(1.) / btScalar(60.));
Note that the last two arguments have default values of 1 and 1/60 respectively. What you're supposed to do is measure the first argument (timeStep) rather than supply it as a constant 1/60 as your example code does, and then let Bullet take substeps as necessary.
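For example, a measured-timeStep loop might look like the sketch below. The world setup is omitted; world is assumed to be an initialised btDiscreteDynamicsWorld*, and appIsRunning and renderFrame() are placeholder names for your own loop, not part of Bullet.
Code:
#include <btBulletDynamicsCommon.h>
#include <chrono>

// Measure the real time elapsed since the previous frame and hand it to
// Bullet; Bullet then decides how many fixed 1/60 s substeps to take.
auto lastTime = std::chrono::steady_clock::now();
while (appIsRunning)
{
    auto now = std::chrono::steady_clock::now();
    btScalar timeStep = std::chrono::duration<btScalar>(now - lastTime).count();
    lastTime = now;

    world->stepSimulation(timeStep, 10);  // measured time, not a constant 1/60

    renderFrame();
}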
That is, suppose your application was running at 29 fps such that timeStep was 1/29. If you call it like this:
Code:
World->stepSimulation(timeStep, 10);
then Bullet would take two substeps (each of 1/60 second) and carry the small remainder forward. It would only use the maximum of 10 substeps if you supplied a timeStep of 1/6 second or more.
Hence, when you supply timeStep = 1/60, maxSubSteps = 10, and the default fixedTimeStep = 1/60, your simulation takes only one substep, since exactly one substep of 1/60 fits into a full step of 1/60. The reason your simulation falls behind is not the value of maxSubSteps -- that is a red herring -- but that your application's real frame time must be longer than 1/60 second (i.e. it runs at less than 60 fps) while you only ever advance the physics by 1/60 per frame.
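For reference, the bookkeeping inside btDiscreteDynamicsWorld boils down to something like the following paraphrase (a sketch of the logic, not Bullet's actual source):
Code:
#include <algorithm>

// Paraphrase of the substep bookkeeping: accumulate the supplied time,
// convert it into whole fixed-size substeps, clamp to maxSubSteps, and
// carry the remainder into the next call.
int computeSubSteps(float& localTime, float timeStep,
                    int maxSubSteps, float fixedTimeStep)
{
    localTime += timeStep;                            // accumulate real elapsed time
    int numSubSteps = int(localTime / fixedTimeStep); // whole fixed steps that fit
    localTime -= numSubSteps * fixedTimeStep;         // remainder carried forward
    return std::min(numSubSteps, maxSubSteps);        // clamped steps are lost time
}
// With localTime starting at 0: computeSubSteps(t, 1.f/29.f, 10, 1.f/60.f) == 2,
// matching the 29 fps example above.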
The mystery is why you experience a change in behavior when modifying maxSubSteps -- it shouldn't affect anything, since Bullet would only ever take one substep no matter what you set it to.
In short: measure the elapsed time, pass it as timeStep, and set maxSubSteps to 2.
Note that it is OK to supply a timeStep that is smaller than fixedTimeStep, because btDiscreteDynamicsWorld accumulates the supplied time over subsequent calls and only steps forward once a full fixedTimeStep has accumulated.
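For example, assuming world is an initialised btDiscreteDynamicsWorld* and the application runs at 120 fps (so each call supplies half a fixedTimeStep), the accumulation plays out like this; the int returned by stepSimulation() reports how many substeps were actually taken:
Code:
// Each call supplies 1/120 s against a fixedTimeStep of 1/60 s.
int n1 = world->stepSimulation(btScalar(1.) / btScalar(120.), 2);  // n1 == 0: 1/120 s banked
int n2 = world->stepSimulation(btScalar(1.) / btScalar(120.), 2);  // n2 == 1: a full 1/60 s accumulated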