Great Physics Engine Comparison (PEEL)

Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Re: Great Physics Engine Comparison (PEEL)

Post by Dirk Gregorius »

I think this discussion is going into the wrong direction. It is not about whether engine X is better than engine Y. In my opinion it would help physics customers in their decision making if there would be a way to compare physics engines. Comparing engines makes only sense if there is a specific context. E.g. one game will heavily rely on some form of casts, the other wants some cheap destruction effects, etc. So in order for this to work it should be open source. If it is open source you can even compare the engines providing test scenes that match your game requirements the best.
User avatar
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: Great Physics Engine Comparison (PEEL)

Post by Erwin Coumans »

I agree Dirk. Such PEEL thing should be open source, so you can verify it is doing the right thing, and possibly even adding your own tests that are important to you.

The choice of tests should not be made by a single physics engine manufacturer :) It is easy to pick your own results to make a particular engine look better than the other.
For example if I had the task of showing that Bullet is faster than PhysX 2.8, I can do that: simply show a bunch of cylinders stacking (for a coin game). PhysX would have to revert to approximating a cylinder by a convex hull using SAT, which will likely be slower than Bullet's GJK+EPA+PCM. Another example would be a convex sweep test, using a complex convex shape against another convex shape. I suppose Pierre's PEEL framework doesn't include such tests.

By the way, I am not worried too much that the PhysX 3.3+ performance is better than Bullet 2.x or 3.x. I focus on open source physics engine development, with a permissive license. An engineer can take the source and improve the performance, replacing parts he doesn't like etc. That way, he becomes very familiar with the engine, and once enough parts are replaced or improved, he can call it his "own" in-house engine. And such engine might or might not outperform PhysX.

Having a physics engine comparison framework helps both customers of proprietary engines as well as customers/engineers who like to hack away on in-house or open source engines.
So thanks to Pierre for making the results etc available!
Pierre
Posts: 67
Joined: Mon Jul 25, 2005 8:56 am

Re: Great Physics Engine Comparison (PEEL)

Post by Pierre »

There was already an open source comparison engine though.... PAL. I'm not sure why my version of it changes anything. Every game developer should already have a similar tool. After all they need to choose one engine or another for their game based on some data, right? In fact, Dirk, I got motivated for writing PEEL after you sent me your own testbed for comparing physics engines :)
It is easy to pick your own results to make a particular engine look better than the other.
Certainly, that's called cherry-picking, and it plagues research papers and the pharmaceutical industry, among other things. This is not what happened here though.

I do have tests created specifically for features that PhysX did not support, for example Havok's phantom objects. This was done to evaluate how much of an edge it gives to the competition, and decide if we need to implement something similar or not.
For example if I had the task of showing that Bullet is faster than PhysX 2.8, I can do that
My own report shows that this is the case in some scenes. So I really don't understand why you wrote that, Erwin. I never denied this. It's clearly shown in the tests. In any case comparing the latest version of Bullet to the old version of PhysX may not be very useful, so I'm doubly puzzled by this comment.
I suppose Pierre's PEEL framework doesn't include such tests.
If you look at the Bullet plugin source code that I posted, you will notice that there is a "cap" for convex sweeps, yes. But the wrapper for this has not been implemented yet, neither in Bullet nor in PhysX. I fully expect Bullet to be better than 2.8 for convex sweeps, since 2.8 does not even support convex sweeps! :) I would bet good money that 3.x is faster though.

I do agree that there is little to be "worried" about here. The truth is, people in actual game companies working on actual games (like Dirk) already have their benchmarks, and already know these results. The posts were specifically targeted at clueless morons on the internet.

At the end of the day most people will choose the open source solution just because it's open source, period. And the others will base their decision on their own tests, not on mine. (Or because of the API, the docs, what support you get, what platforms are supported, internal politics, or who knows how many other reasons. My posts don't change any of that.)
Basroil
Posts: 463
Joined: Fri Nov 30, 2012 4:50 am

Re: Great Physics Engine Comparison (PEEL)

Post by Basroil »

Pierre wrote: I do agree that there is little to be "worried" about here. The truth is, people in actual game companies working on actual games (like Dirk) already have their benchmarks, and already know these results. The posts were specifically targeted at clueless morons on the internet.

At the end of the day most people will choose the open source solution just because it's open source, period. And the others will base their decision on their own tests, not on mine. (Or because of the API, the docs, what support you get, what platforms are supported, internal politics, or who knows how many other reasons. My posts don't change any of that.)
Some of us in the academic side (especially in robotics, where very expensive programs aren't significantly better than ODE with customizations, and in fact most just use ODE or Bullet) have neither the time nor resources to implement the same simulations in a dozen engines and releases. Open source vs closed source is nowhere near as important as being able to understand how the engine works and how to fix problems (hence why you'll never see havok used, just too many restrictions, PhysX isn't particularly great either, though it does offer some callback modifications now). I am sure that many outside my field will agree that proper benchmarks can minimize wasted time, and rather than experimenting with the surface of several engines we can get to work actually optimizing the simulations we run.

One of the things I noticed in your tests is that there is quite a significant difference in positions between the Bullet and PhysX in some tests, especially the convex stack and joint chains (both things I use regularly, perhaps even exclusively making simulations that have those). How physically accurate are each of the results? How fast must the simulation be run to achieve a specific accuracy?

Perhaps a motorized joint chain can be used to test accuracy, a simple three joint arm rotating 90 degrees until extended out using a PID controller with a known physical result. It would be interesting to see if the speed enhancements in PhysX (and Bullet) have come at a cost of physical accuracy. Some of the papers out there don't look too good for PhysX, but perhaps 3.X or better settings can minimize the errors.
User avatar
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: Great Physics Engine Comparison (PEEL)

Post by Erwin Coumans »

Pierre wrote: Certainly, that's called cherry-picking, and it plagues research papers and the pharmaceutical industry, among other things. This is not what happened here though.
Some results are likely genuine, but you cherry picked certain results of your benchmarking tests. Let me quote you:
Pierre wrote: The results are quite revealing. And yes, the numbers are correct.

For this single sweep test, PhysX 3.3 is:

60X faster than PhysX 3.2
270X faster than PhysX 2.8.4
317X faster than Bullet

Spectacular, isn’t it?
Indeed, it is a spectacular cherry pick :)

Whenever one engine is two orders of magnitude faster than another, there is likely something fishy going on. You know that Pierre.
For example, when you perform a sweep test through the world, and find out that some engine becomes really slow, there is likely some culling missing that can be trivially fixed. If you give me the chance to run your particular benchmark myself, I'm sure I can fix some of those issues for you. Until then, I can only guess the reason(s) for the performance difference. My guesses are:

1) one engine performs a convex sweep against every single object against the world, because some culling is not effective. This is likely easily fixed so it is not very spectacular or important to point out, unless you want to use your benchmark for advertising purposes.

2) one engine performs a sweep test that handles both translation and rotation, while the other engine only deals with translation. Obviously solving a simpler problem is going to be much faster. Again, if you are only interested in a specific case, you can optimize for that.

So while I believe that you did a good job in optimizing PhysX 3.x, making it a few times faster than PhysX 2.x and Bullet 2.x. But when you report that your engine is a 2 or more orders of magnitude faster than PhysX 2.x or Bullet 2.x, I want to see a proper analysis for the reasons.
Pierre
Posts: 67
Joined: Mon Jul 25, 2005 8:56 am

Re: Great Physics Engine Comparison (PEEL)

Post by Pierre »

Ok.... I'm not following.

The test you mention is not "cherry picked". Cherry picking would be publishing results for just the sweep tests where PhysX is faster, conveniently ignoring all the other ones. But this is not at all what happened. PhysX was faster in all the raycast or sweep tests that I tried. Why should I ignore the most dramatic results?

I really don't understand your point. It does not matter that X or Y can be "trivially fixed". Everything can. What's the point of benchmarking at all if you ignore all the problems that "can" be fixed in a future version? Of course it can be fixed. And maybe it will. It does not change the fact that contrary to what I read online several times, the current version of Bullet is not "faster on the CPU". This is exactly the reality check I wanted to provide, and spectacular results are a good way to drive the point home.

As for the "proper analysis", I'm sorry but... you can't be serious. I don't have time to do a "proper analysis" for all the issues in all the engines supported by PEEL. Are you really saying that the results are somehow questionable because I did not do a proper analysis of why it was slow in Bullet? This is a bit of an unbelievable statement. There are results. You are free to replicate the test and investigate the issue yourself, if you feel there is something fishy that should be fixed.
one engine performs a sweep test that handles both translation and rotation
Fair enough. Is Bullet supporting rotations when the given transforms are just for a regular linear cast? That could be one explanation, certainly. But even so, I still don't see why it invalidates the results. Most people use linear casts only, so it is fair to ask how much of a performance hit you get for supporting rotations. If it does make everything orders of magnitude slower, well, people should be aware of this. It would make the decision of supporting rotations very questionable to me, for example.


In any case I agree that this discussion is not going in the right direction. Going back to coding now.
User avatar
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA
Contact:

Re: Great Physics Engine Comparison (PEEL)

Post by Erwin Coumans »

All I am saying is that several of your most dramatic results can be fixed by the user, without changing the SDK.

I have showed you this for one test (the 255*255 fixed boxes), but you have been quiet about that. I am pretty sure that the wost-case sweep benchmark can also be fixed, without changing the SDK.

At best you have showed me that you cannot use the Bullet SDK properly :) You could complain that the documentation is bad or lacking, and I agree with that.
Thanks,
Erwin
Pierre
Posts: 67
Joined: Mon Jul 25, 2005 8:56 am

Re: Great Physics Engine Comparison (PEEL)

Post by Pierre »

I have showed you this for one test (the 255*255 fixed boxes), but you have been quiet about that.
At best you have showed me that you cannot use the Bullet SDK properly
I reported in this thread that your suggested fix did not work. People just can't even read - like the guy telling me to try Havok.

You have the code. Please tell me how to fix the sweep test scene "without changing the SDK". As I said, I would happily update the blog posts with new results, if it turns out I misused the SDK.
Pierre
Posts: 67
Joined: Mon Jul 25, 2005 8:56 am

Re: Great Physics Engine Comparison (PEEL)

Post by Pierre »

Ok, I found the issue. This line is guilty:

Code: Select all

body->setActivationState(DISABLE_DEACTIVATION);
As written in the posts, I disable sleeping (in all engines) to make sure it does not interfere with benchmarks. Well, if I do that, the setForceUpdateAllAabbs(false) call has no effect anymore. Is there a better way to disable sleeping for benchmark purpose?

If I do not call setActivationState, the scene does run faster. However it still takes about 5ms here at work, on a PC which is usually about 2X faster than what I have at home (where the 34ms were recorded). I will try at home later and report the new number (*). It will likely be better than 34ms. We probably agree though that 5ms is still "too much" for simulating an empty scene.

In any case I am not sure why the sleeping stuff has an impact on static objects. That sounds counter intuitive.

(*) EDIT: it takes 16ms now at home (instead of 34), with sleeping enabled and AABB updates disabled. This is certainly better, but the problem is basically still here.
Dirk Gregorius
Posts: 861
Joined: Sun Jul 03, 2005 4:06 pm
Location: Kirkland, WA

Re: Great Physics Engine Comparison (PEEL)

Post by Dirk Gregorius »

I am not sure. I remember there was some global constant you can set to false. I would check in the sleeping logic in btRigidBody and see if you can find it there. Also check the benchmark tests in Bullet. I think Erwin disables sleeping there as well and this might be the best way. The way I am describing here might be deprecated though.
Nathanael
Posts: 78
Joined: Mon Nov 13, 2006 1:44 am

Re: Great Physics Engine Comparison (PEEL)

Post by Nathanael »

I for one is really happy that Pierre published his results, regardless of supposed biases, there was a time not so long ago where posting on the Bullet forum lead to enlightening discussions instead of flame wars.

This should be a technical discussion, because the reality is, most real time physics users are happy with ten crates and two rag-dolls, in which case their concern is API / framework , stability, tool support, etc..., not performances.

In any event, take my word for it, PhysX is competitive and it is getting better to the point where more and more of my internal benchmarks includes a PhysX data point in order to judge performances, this was not the case few years ago (before Pierre's code kicked-in I would guess:)).

Nat.
RBD
Posts: 141
Joined: Tue Sep 16, 2008 11:31 am

Re: Great Physics Engine Comparison (PEEL)

Post by RBD »

Nathanael wrote:there was a time not so long ago where posting on the Bullet forum lead to enlightening discussions instead of flame wars.
Contributing knowledge to advance the field and help the community is one thing; creating a blog post to put down an open source engine in order to prop up your own closed source one because you are "pissed off" at the "clueless morons on the internet" is another. The later is bound to attract some flack.

I agree all should be concerned about advancing the field instead of flaming, yet when one starts, as Pierre did, because of their passion, it's normal to expect others to weight in; I hope you realize you also did just that now...
quantum
Posts: 6
Joined: Sun Sep 02, 2007 4:26 pm

Re: Great Physics Engine Comparison (PEEL)

Post by quantum »

I would have been more impressed with the benchmark had it been posted objectively and dispassionately. Instead the grandstanding lingo, "spectacular", "embarrassing", etc. really clouds the whole thing for me. Additionally, the author posts in this thread with more off-putting comments, which only adds to the negative impression.

On the positive side, the benchmark looks well engineered at first glance, and I hope it will be open sourced (or duplicated). For those of us concerned that Erwin and the other contributors might have run out of things to work on, a good benchmark can give them some more targets to work on :)

I feel compelled to add that I don't think the benchmark means that much. It reminds me of the various Javascript browser benchmarks which mean little because most people are not running javascript to calculate fibonacci numbers in the browser. For my physics library use, my own case does not involve running 1k simultaneous sweeps or raycasts or whatever. In fact, the point that Bullet is mostly in the ballpark in terms of performance, and fully functional, and open source with a liberal license, makes it an easy choice for me (and a lot of others apparently).
Basroil
Posts: 463
Joined: Fri Nov 30, 2012 4:50 am

Re: Great Physics Engine Comparison (PEEL)

Post by Basroil »

quantum wrote: On the positive side, the benchmark looks well engineered at first glance, and I hope it will be open sourced (or duplicated).
I can't agree 100% with that one, since you can see some very large deflections in the convex stack with physx that aren't in bullet, as well as joint results that vary drastically between tests. Unless they are willing to rerun simulations to a certain accuracy (i.e. meshes don't sink into each other when stacking, joint angles don't vary drastically between tests, etc) or open source it so others can do it for them, there isn't too much useful information (as you mentioned).
quantum wrote: For those of us concerned that Erwin and the other contributors might have run out of things to work on, a good benchmark can give them some more targets to work on :)
All for that! Hopefully they both throw the academic side a bone with featherstone or another more robust solving method, that's practically the only thing keeping bullet from overtaking ODE within ROS, and what's keeping physX inside the microsoft robot simulator box. I know a dozen labs that would love to have it, and probably a hundred times that would likely use it rather than making their own.
pildanovak
Posts: 50
Joined: Thu Jul 14, 2005 1:55 pm
Contact:

Re: Great Physics Engine Comparison (PEEL)

Post by pildanovak »

Not taking into account the emotional stuff in this thread,
I can mention that I struggle with the sweep test performance in Bullet still, and don't know how to optimize it. I did test in Blender(bullet) vs Unity(PhysX), testing a sphere against a highpoly mesh(unity supports max 65k meshes). Unity performed better, much better, although definitely not 300x faster, maybe about 5x-10x. I also noticed sweeping rotated primitives(e.g. cylinder)in blender is much slower than when the body is unrotated and swept just on z axis.

I know bullet has great performance, I've seen the results many times. Also, Physx(in Unity) doesn't support so many basic shapes, which I need. So I am still thinking what might be used to optimize the results for sweep test?
Post Reply