

Post by kmanuele »

I have successfully built Bullet3 in VS2019, and the Test_* executables seem to run OK in x64 Release mode, except for Test_LinearMath, where I get a "no SIMD enabled ..." message.
If I set BT_USE_SSE_IN_API in the preprocessor definitions, I get a long list of errors.

Any help on what is going on, how to get this test to run, the use of SIMD in Bullet3, etc.?



Re: Test_LinearMath

Post by paulweustink »

You probably need to steer this with the BT_USE_DOUBLE_PRECISION setting.
This is from btScalar.h in the LinearMath folder:

Code:

	//Do not turn SSE on for ARM (may want to turn on BT_USE_NEON however)
#elif (defined (_WIN32) && (_MSC_VER) && _MSC_VER >= 1400) && (!defined (BT_USE_DOUBLE_PRECISION))
	#if _MSC_VER > 1400
		#define BT_USE_SIMD_VECTOR3
	#endif

	#define BT_USE_SSE
	#ifdef BT_USE_SSE

	#if (_MSC_FULL_VER >= 170050727) //Visual Studio 2012 can compile SSE4/FMA3 (but SSE4/FMA3 is not enabled by default)
		#define BT_ALLOW_SSE4
	#endif //(_MSC_FULL_VER >= 170050727)

	//BT_USE_SSE_IN_API is disabled under Windows by default, because
	//it makes it harder to integrate Bullet into your application under Windows
	//(structures embedding Bullet structs/classes need to be 16-byte aligned)
	//with relatively little performance gain.
	//If you are not embedding Bullet data in your classes, or you make sure
	//those classes are aligned on 16-byte boundaries, you can manually enable
	//this line or set it in the build system for a small performance gain
	//(a few percent, dependent on usage).
	//#define BT_USE_SSE_IN_API
	#endif //BT_USE_SSE
	#include <emmintrin.h>
So if I read this correctly, defining BT_USE_DOUBLE_PRECISION (8-byte doubles) switches off the SIMD and SSE optimizations, which are meant for 4-byte floats?