My program runs on both Win32/x86 and iOS/ARM. I am getting different collision results from both - the Win32/x86 seems to be more reliable.
On iOS/ARM, sometimes my sphere will stop on the edge of a cliff on a btBvhTriangleMesh, and the wrong m_index for the contact point's polygon index will be returned.
Is there some reason ARM would be less accurate or produce occasional bogus results compared to x86?
Determinism: iOS/ARM vs. Win32/x86
Re: Determinism: iOS/ARM vs. Win32/x86
Are you testing with the same stepSimulation and collision configuration settings on both platforms? We have a game developed for Win32, iOS, Android, and Mac, and determinism mostly depends on the step simulation parameters.
Re: Determinism: iOS/ARM vs. Win32/x86
Yes, exactly the same timestep. A lot of the time, m_partId* and m_index* return 0 instead of an obviously uninitialized value like -8655302. Even if I change the Bullet code to initialize these variables to -1, it still returns bogus 0 values. I have had to hardcode my game to ignore index 0 and pad my meshes with a fake first triangle.