TriplVing's Bullet Vs PhysX Vs ODE performance comparison
Posted: Mon Aug 09, 2010 11:23 pm
* ADMIN split topic, originally started here: http://bulletphysics.org/Bullet/phpBB3/ ... f=9&t=5477
One of my other Vs comparisons has been the processing time taken for each time-step of each engine.
I have tried to make each scenario as simple, and therefore as similar, as possible for each engine. For this test I just have a sphere falling, with nothing for it to contact, and a bit of timing code that starts before the simulate call for each engine and ends directly after. I have set up each engine using as many of its default values as possible, and all are set to step for 1/60th of a second.
What I am finding really surprises me. I was under the impression that ODE was considered the slowest of the engines, with Bullet and PhysX being on par, but in reality I have found that, for 1000 iterations, taken 3 times, all compiled for 64 bit by the same compiler and run on the same machine with the same background processes, the average time-steps are as follows:
Bullet: 0.047273ms
ODE: 0.002644ms
PhysX: 0.125971ms
That's really quite significant!
Why is ODE so much faster than both? I assumed that, with no contacts involved at all, the processing overhead would come primarily from the integration method and its implementation in each engine, and I was also under the impression that all three now use a second-order symplectic scheme of some kind by default? Certainly the error-propagation values I have seen from all three would support that.
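For reference, the commonly cited default for these engines is a semi-implicit (symplectic) Euler update; here is a minimal sketch of that scheme for a single unconstrained body (my own illustration, not any engine's actual code):
[code]
// Semi-implicit (symplectic) Euler for one unconstrained body.
// First-order, but symplectic: the velocity is updated first, and
// the *new* velocity is used to advance the position.
struct BodyState { double x, v; };

void stepSymplecticEuler(BodyState& b, double a, double dt)
{
    b.v += a * dt;   // v_{n+1} = v_n + a * dt
    b.x += b.v * dt; // x_{n+1} = x_n + v_{n+1} * dt
}
// e.g. free fall: a = -9.81, dt = 1.0 / 60.0
[/code]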
If anybody more enlightened as to the inner workings of all 3 engines could provide me with a bit of insight, that would be really good.
P.S.: I have tried to "turn off" as much processing as I can for each engine, paring them down to their very lowest form; in PhysX, for example, I have ensured that anything like contact point generation and reporting is turned off. I haven't messed around with multi-threading and have just left all engines at their default values, though I don't believe that ODE, Bullet, or PhysX multi-threads by default anyway.
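For what it's worth, my PhysX stepping code follows the asynchronous simulate/fetchResults pattern from the 2.8 SDK samples, with both calls inside the timed region (a sketch; gScene is my scene pointer):
[code]
#include <NxPhysics.h>

extern NxScene* gScene; // created during SDK initialisation

// PhysX 2.8.x steps asynchronously: simulate() kicks the step off and
// fetchResults() blocks until it has finished, so both calls need to
// sit inside the timed region to capture the whole step.
void stepPhysX()
{
    gScene->simulate(1.0f / 60.0f);
    gScene->flushStream();
    gScene->fetchResults(NX_RIGID_BODY_FINISHED, true); // block until done
}
[/code]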
Test setup is as follows:
Sphere of density 1 with a starting position of 0,100,0 (x,y,z) and a radius of 10. All engines are "initialised" in their default way, with code mainly pulled from each respective engine's "hello world" example app, though I imagine that, as there are no collisions, the choice of pruning tree, broadphase, etc. shouldn't be as important here? The Bullet side is sketched below.
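As a concrete example, the Bullet side of my setup boils down to something like this (a sketch condensed from the 2.76 HelloWorld demo; createWorldWithSphere is just an illustrative name):
[code]
#include <btBulletDynamicsCommon.h>

btDiscreteDynamicsWorld* createWorldWithSphere()
{
    // Default collision configuration, dbvt broadphase,
    // sequential-impulse solver -- as in the HelloWorld demo.
    btDefaultCollisionConfiguration* config =
        new btDefaultCollisionConfiguration();
    btCollisionDispatcher* dispatcher = new btCollisionDispatcher(config);
    btBroadphaseInterface* broadphase = new btDbvtBroadphase();
    btSequentialImpulseConstraintSolver* solver =
        new btSequentialImpulseConstraintSolver();
    btDiscreteDynamicsWorld* world =
        new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);
    world->setGravity(btVector3(0, -9.81, 0));

    // Sphere of radius 10 at (0, 100, 0); density 1 => mass = 4/3*pi*r^3.
    btCollisionShape* sphere = new btSphereShape(btScalar(10));
    btScalar mass = btScalar(4.0 / 3.0) * SIMD_PI * btScalar(1000.0);
    btVector3 inertia(0, 0, 0);
    sphere->calculateLocalInertia(mass, inertia);
    btDefaultMotionState* motion = new btDefaultMotionState(
        btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 100, 0)));
    btRigidBody* body = new btRigidBody(
        btRigidBody::btRigidBodyConstructionInfo(mass, motion, sphere, inertia));
    world->addRigidBody(body);
    return world;
}
[/code]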
Gravity is set to 0,-9.81,0. I then have a small bit of timing code that uses the Windows timing functions, totally standard stuff: a loop that iterates 1000 times, in which I start my timer, advance the simulation by 1/60, stop the timer, and finally write the recorded time out to a file stream.
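The loop itself is nothing fancier than this (Bullet variant shown as a sketch; the ODE and PhysX loops just swap the step call, and the file name is arbitrary):
[code]
#include <windows.h>
#include <fstream>
#include <btBulletDynamicsCommon.h>

// Time 1000 individual steps with QueryPerformanceCounter; one
// recorded time per iteration, written straight out to a file.
void timeSteps(btDiscreteDynamicsWorld* world)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    std::ofstream out("bullet_times.txt");

    for (int i = 0; i < 1000; ++i)
    {
        QueryPerformanceCounter(&t0);
        // maxSubSteps = 0: take exactly one 1/60 s step, no interpolation
        world->stepSimulation(btScalar(1.0 / 60.0), 0);
        QueryPerformanceCounter(&t1);

        out << 1000.0 * double(t1.QuadPart - t0.QuadPart)
                      / double(freq.QuadPart) << "\n"; // milliseconds
    }
}
[/code]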
The machine itself is a laptop: an Intel T7700 (dual-core 2.4 GHz) with 4 GB of 667 MHz RAM and an 800 MHz FSB.
The libraries have all been compiled for 64 bit; I am using Bullet 2.76, ODE 0.11.1, and PhysX system software 9.10.0224 (64 bit).
The test app is of course compiled for 64 bit, using VS 2010. Each test was run 3 times; the values were averaged for each iteration, and the numbers I have posted here are the average of those averages.
I mean, I have access to the PhysX dev code (the 50k license), but it doesn't give me access to the integration methods, and to be honest the general quality of the coding seems relatively high. So I'm really quite surprised to see all three engines producing identical positional values, yet with speed differences of one to two orders of magnitude!
Any thoughts would be great.