Hello,
I use Bullet through the Panda3D Python bindings, and I would like to know whether BulletTriangleMesh (https://docs.panda3d.org/1.10/python/re ... eMeshShape), which is equivalent to btTriangleMeshShape, has built-in optimization for large terrains. If a terrain has 50k+ vertices, will there be performance issues? I am asking in particular about hierarchical bounding-box optimization: will the triangles be split recursively into chunks for optimization? (Though I don't really know how that would be possible.)
I ask because I noticed no performance hit even with a (relatively) big mesh of 1200 triangles. If there is no built-in optimization and performance does become an issue, is it possible to apply a hierarchical split (such as an octree, or something simpler) to improve performance?
Thank you
BulletTriangleMesh optimization
- Posts: 849
- Joined: Tue Sep 30, 2014 6:03 pm
- Location: San Francisco
Re: BulletTriangleMesh optimization
Yes, the C++ btBvhTriangleMeshShape constructor builds the bounding volume hierarchy (BVH) by default, as per the documentation here. This has been true since at least 2018-09-23.
It looks like the Panda3D documentation at the link you provided shows the options compress (use quantization when building the AABB tree) and bvh (build a BVH around the triangles). Both should probably be true, except (a) for really large objects maybe use compress=False, or else the AABBs would get sloppy, and (b) for models with only a few triangles (100 or fewer? I don't know where the exact line would be) maybe use bvh=False.
From my experience, high-triangle-count (100k+ triangles) meshes can be performant when the triangles are spread out and the dynamic objects colliding against them only ever overlap a few triangles at a time. However, if you make a 100k+ triangle mesh and then take a large dynamic object of approximately equal size and collide it against the full mesh... well, that will trigger many narrow-phase collision checks (on the order of the number of triangles), and stepSimulation() will take a long time. Sweep tests across big portions of such a high-triangle mesh will also be costly.
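To answer the "how would a recursive split even be possible" part of the question: here is a toy, pure-Python sketch of the idea behind a triangle BVH (all names are made up for illustration, and it is 2D for brevity). Triangle AABBs are merged into a root box, then split recursively along the longest axis, so a query box only descends into branches it overlaps instead of testing every triangle:

```python
def aabb(tri):
    # Axis-aligned bounding box of a 2D triangle: (minx, miny, maxx, maxy).
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    return (min(xs), min(ys), max(xs), max(ys))

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def build(tris, leaf_size=4):
    # Recursively split the triangle set along the longest axis of its AABB.
    boxes = [aabb(t) for t in tris]
    box = boxes[0]
    for b in boxes[1:]:
        box = union(box, b)
    if len(tris) <= leaf_size:
        return {'box': box, 'tris': tris}
    axis = 0 if (box[2] - box[0]) >= (box[3] - box[1]) else 1
    tris = sorted(tris, key=lambda t: sum(p[axis] for p in t) / 3.0)
    mid = len(tris) // 2
    return {'box': box,
            'left': build(tris[:mid], leaf_size),
            'right': build(tris[mid:], leaf_size)}

def query(node, box, out):
    # Prune whole subtrees whose bounding box misses the query box.
    if not overlaps(node['box'], box):
        return
    if 'tris' in node:
        out.extend(t for t in node['tris'] if overlaps(aabb(t), box))
    else:
        query(node['left'], box, out)
        query(node['right'], box, out)

# Usage: a 20x20 grid of triangles, queried with a small box.
tris = [((x, y), (x + 1, y), (x, y + 1)) for x in range(20) for y in range(20)]
root = build(tris)
hits = []
query(root, (0.0, 0.0, 0.5, 0.5), hits)  # only nearby triangles are returned
```

This is essentially what btBvhTriangleMeshShape does for you (in 3D, with quantization when compress is on), which is why you normally would not need to build your own octree on top of it.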