Use doubles instead of single precision floats

gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

I've started to modify the code to allow btScalar to be a double, and so far it's been straightforward. Besides that and the associated fixes, are you aware of any other things I should watch out for?
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

gunner10 wrote:I've started to modify the code to allow btScalar to be a double, and so far it's been straightforward. Besides that and the associated fixes, are you aware of any other things I should watch out for?
Do you plan to contribute a patch, or are you making a private fork of Bullet? Will your code still allow switching between float and double, or just double? How do you define constants? Right now I use things like "1.f" all over the place. Do you change them into btScalar(1.0)?

Bullet is under quite active development, so perhaps we should synchronize efforts.

Erwin
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

Right now I was doing the code changes as an experiment to see how difficult it might be. We had been looking closely at PhysX but are dismayed at how short-sighted it appears to be regarding double precision and 64-bit support.

I've got the appBasicDemo up and running, with the base libraries building, by just replacing float with btScalar and adding a define that specifies whether it typedefs a float or double. I also changed calls to standard functions like atan2 to use the bt versions where that wasn't happening. Some of the GL support code will need to be made friendly to either type; I just converted it to the double versions for now. I noticed the constant issue you mentioned too, and that should be cleaned up as well.

If you point out all the things you feel should be changed if you were doing it yourself, I'll see what I can do to get them all in and contribute back. We still need to do a lot of evaluating as to whether Bullet meets our needs overall, but having you help guide the direction and point out where things might be issues would be very favorable.

If appropriate, PM me to discuss further details.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

I assume you are interested in double precision support, so you can handle huge coordinates?

We should create a demo/testcase to see if all works, in particular a small stack of objects very far from the origin.

There would be one other approach to support huge coordinates, instead of double support, but this requires some extra work/management:
A multiple-worlds approach, where each world has its own local coordinate system, plus a clever way to move/merge overlapping simulation islands, can solve some issues too. Basically, the calculations are done in some arbitrarily chosen local space. It would be best to do all collision and physics calculations in Bullet in relative coordinates instead of world coordinates. Such a 'relative coordinate' approach was recently added for Bullet's narrowphase GJK calculations, and it improved precision. But other calculations, like constraint/contact solving, are still done in world space. So unless we improve all the algorithms, it's probably safer/easier to just use brute-force double precision.

Some epsilons/tolerances might need to be adjusted for the double-precision version. I wonder how large the impact on the source code is, in particular modifying constants from 0.f into btScalar(0.). Did you already spend some time looking at how other software supports both double and float precision with just a switch?

Thanks,
Erwin
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

Yes, I want to be able to simulate a very large world without loss of precision.

Agreed about the test case you suggest.

I'm all about the "brute force" approach, if it gets me there faster, especially if I can put it on beefy hardware.

We'll probably have to come up with the ability to convert from the double-precision representation to a single-precision one, like what you are describing, for a "client" view into a part of the world, where each client has a local coordinate system derived from the double-precision representation - e.g. some offset to its location in the world, since a client only ever looks at a small subset of the entire world.

The only reference I have is ODE, which looks like it uses a compile-time define to select the appropriate type, and uses the same define to set epsilons and tolerances.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

gunner10 wrote:Yes, I want to be able to simulate a very large world without loss of precision.

Agreed about the test case you suggest.

I'm all about the "brute force" approach, if it gets me there faster, especially if I can put it on beefy hardware.

We'll probably have to come up with the ability to convert from the double-precision representation to a single-precision one, like what you are describing, for a "client" view into a part of the world, where each client has a local coordinate system derived from the double-precision representation - e.g. some offset to its location in the world, since a client only ever looks at a small subset of the entire world.

The only reference I have is ODE, which looks like it uses a compile-time define to select the appropriate type, and uses the same define to set epsilons and tolerances.
It looks like ODE and also Solid 3.5 (http://www.dtecta.com) take a similar approach for constants:

ODE: h *= REAL(0.5);
Solid: return *this *= Scalar(1.0) / s;

So in Bullet we should replace things like 0.5f by btScalar(0.5), ok?

And we need a define that can be switched on/off, like 'BT_USE_DOUBLE_PRECISION', preferably at the top of btScalar.h
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

Yep, that looks pretty straightforward.

What do you think about defining some well known constants like

const btScalar btScalarZERO = 0.f;
const btScalar btScalarONE = 1.f;

and using btScalarZERO and btScalarONE in code?
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

gunner10 wrote:Yep, that looks pretty straightforward.

What do you think about defining some well known constants like

const btScalar btScalarZERO = 0.f;
const btScalar btScalarONE = 1.f;

and using btScalarZERO and btScalarONE in code?
I'm tempted, but I prefer to go with btScalar(0.), which is more compatible with ODE and Solid. The compiler should resolve it at compile time, so it's not a performance issue.

Also, I have a bad memory and tend to forget which scalars are 'predefined' and which ones are not (0, 1, -1, 2, 0.5, 0.25?). Note that PI, half PI, and double PI are defined already.

Perhaps it would be good to get in touch. I sent you a PM, did you get it?
Thanks,
Erwin
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

How would you handle the various versions of the OpenGL calls?

I was thinking of doing something similar to what you have in btScalar.h currently, which calls the float or double version based on BT_USE_DOUBLE_PRECISION, but where would you put that?

I was thinking GLStuff.h, but perhaps you have a better suggestion?
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

gunner10 wrote:How would you handle the various versions of the OpenGL calls?

I was thinking of doing something similar to what you have in btScalar.h currently, which calls the float or double version based on BT_USE_DOUBLE_PRECISION, but where would you put that?

I was thinking GLStuff.h, but perhaps you have a better suggestion?
Bullet/Demos/OpenGL/GLStuff.h sounds like a good location; define some helpers that pick the f or d variant based on BT_USE_DOUBLE_PRECISION.
If you prefer, you can also just add it in each file; there aren't that many OpenGL calls, I think. Too many defines/macros/indirections might obfuscate the code too much.
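As a sketch of the helper idea (the bt-prefixed names are hypothetical; the f/d-suffixed entry points are the standard OpenGL ones), GLStuff.h could map a precision-neutral name onto the right variant:

```cpp
// Hypothetical fragment for Demos/OpenGL/GLStuff.h - a sketch, not the
// committed implementation.
#include <GL/gl.h>
#include "LinearMath/btScalar.h"

#ifdef BT_USE_DOUBLE_PRECISION
    #define btglVertex3    glVertex3d    // GLdouble variants
    #define btglTranslate  glTranslated
    #define btglMultMatrix glMultMatrixd
#else
    #define btglVertex3    glVertex3f    // GLfloat variants
    #define btglTranslate  glTranslatef
    #define btglMultMatrix glMultMatrixf
#endif
```

Demo code could then call btglVertex3(pos.x(), pos.y(), pos.z()) regardless of how btScalar is defined.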

I don't think we need to worry about fixing the optional Extras folder (yet) for double precision.

If you have some patches for me to look at that would be useful.
Thanks a lot,
Erwin
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

OK, I'll try to get some to you tomorrow (Thursday).

I have a question - in btScalar.h, there is the following code. How should this be handled when BT_USE_DOUBLE_PRECISION is defined? It reads as if, on these platforms, double is always used for these function calls, which seems strange since btScalar is currently a float.

#if defined (__sun) || defined (__sun__) || defined (__sparc) || defined (__APPLE__)
//use double float precision operation on those platforms for Blender

SIMD_FORCE_INLINE btScalar btSqrt(btScalar x) { return sqrt(x); }
...
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

Some developer (using Bullet inside Blender) told me that they are using an older gcc 3.x compiler on Sun and Apple PPC, and that this doesn't support fabsf, cosf etc.

So this is only for those few built-in math functions; the rest of the code is still single-precision floating point on all platforms at the moment. We could make it all consistent and just force those platforms to have everything double (i.e. define BT_USE_DOUBLE_PRECISION for them), but that would change behaviour. It can be quite cumbersome to deal with all those platform and compiler version issues.

Thanks,
Erwin

gunner10 wrote: I have a question - in btScalar.h, there is the following code. How should this be handled when BT_USE_DOUBLE_PRECISION is defined? It reads as if, on these platforms, double is always used for these function calls, which seems strange since btScalar is currently a float.

#if defined (__sun) || defined (__sun__) || defined (__sparc) || defined (__APPLE__)
//use double float precision operation on those platforms for Blender

SIMD_FORCE_INLINE btScalar btSqrt(btScalar x) { return sqrt(x); }
...
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

Sent a mail to your admin account with a zip of proposed changes.

Since the switch for precision is a define, it's going to be interesting how you want to manage the projects (at least for MSVC) - it seems like there need to be two sets of projects so that the float and double versions can be built separately. I know I'll want to have both versions of the libraries available.
gunner10
Posts: 13
Joined: Mon Dec 11, 2006 6:30 pm

Post by gunner10 »

You should have a patch sent to your gmail account now based on the latest in Subversion. Hopefully it works. :D

Let me know.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

gunner10 wrote:You should have a patch sent to your gmail account now based on the latest in Subversion. Hopefully it works. :D

Let me know.
It has been committed and is part of Bullet 2.40.

For now, there are no project-file changes yet, so developers who want to try it have to either #define BT_USE_DOUBLE_PRECISION in their project or at the top of LinearMath/btScalar.h.

Thanks a lot! Did you already make a small stack test - far away from the origin - to see if double precision gives the expected improvement?