Position in double precision?

Ronin
Posts: 15
Joined: Fri Oct 06, 2006 9:36 am

Position in double precision?

Post by Ronin »

Hello all,

I'm wondering whether there is any possibility of double precision for position values inside Bullet. I looked in the forum and in the wiki, but could not find any information about it.

The fact is, I need very large position values, and float simply does not provide enough precision for that.

Maybe I could change the sources myself, simply replacing every position variable with double, but I don't know whether that is possible at all. I imagine I would need a deeper understanding of what is going on inside the code to do that.

I hope someone can help me with this.

Ronin
SteveBaker
Posts: 127
Joined: Sun Aug 13, 2006 4:41 pm
Location: Cedar Hill, Texas

Post by SteveBaker »

Yeah - actually, I hadn't noticed that - but double precision positioning is very important for me too.

If you are working over large areas of operation, the precision of a float drops off alarmingly quickly. A positional accuracy of about one part in 4 million is the best you can hope for once roundoff and such are taken into account - so if your units are meters, then at 4 km from the origin you already have a millimeter of error, and at 40 km you are off by a centimeter, which is going to be pretty noticeable.
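
You can see the falloff for yourself by printing the positional resolution of a float at various distances - a minimal sketch (the distances are just illustrative):

Code:

#include <cfloat>
#include <cstdio>

int main()
{
    // FLT_EPSILON is the gap between 1.0f and the next representable
    // float, so d * FLT_EPSILON approximates the positional resolution
    // available at distance d from the origin.
    const float distances[] = { 1.0f, 4000.0f, 40000.0f, 400000.0f };
    for (int i = 0; i < 4; ++i)
    {
        float d = distances[i];
        std::printf("at %8.0f m the float resolution is about %g m\n",
                    d, d * FLT_EPSILON);
    }
    return 0;
}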

With doubles you can work at distances the size of the entire planet and still have sub-millimeter precision. With 64-bit CPUs rapidly becoming the norm, I doubt it would hurt PC performance to switch over to double precision... dunno about consoles though.

This problem doesn't crop up so much with OpenGL graphics because everything is drawn relative to the eyepoint, which is always at the origin. So you get high precision close to the eye, and it falls off rapidly with range - but then perspective reduces the size of any errors so you can't see them anymore. Physics isn't like that.
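
(Roughly, the graphics-side trick boils down to doing the one critical subtraction in double precision before narrowing to float - a sketch with made-up names:)

Code:

// Subtract the eye position from the object position in double
// precision, then narrow to float. Values near the eye keep full
// precision; the larger errors at long range are shrunk by perspective.
void eyeRelative(const double obj[3], const double eye[3], float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = (float)(obj[i] - eye[i]);
}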
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

We could refactor the library to consistently use btScalar, instead of float.

However, for the longer term I prefer a scalable solution that works in single precision. The problem is linked to simulation islands and multiple broadphases. I have enough information to do this, but it's going to take time, and I want to stabilize Bullet again after the current refactoring.

Shall we plan the scalable solution that works on single-precision architectures for Bullet 3.0, and in the meanwhile check whether we can get a workable solution by optionally defining btScalar as double?
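
In sketch form, the short-term option could look like this (BT_USE_DOUBLE_PRECISION is just a placeholder name for the compile flag):

Code:

// Hypothetical compile-time switch: build with -DBT_USE_DOUBLE_PRECISION
// to make every btScalar - and thus every position - double precision.
#ifdef BT_USE_DOUBLE_PRECISION
typedef double btScalar;
#else
typedef float btScalar;
#endif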

Thanks,
Erwin
Ronin
Posts: 15
Joined: Fri Oct 06, 2006 9:36 am

Post by Ronin »

That would be great! :D

The essential point for me is that positions in float are just way too imprecise, so an optional compile flag defining btScalar as double would fit my needs exactly.

I don't really understand what you mean by a scalable solution. Does that mean you become independent of the size of the simulation area? But then would rigid bodies still have a float position relative to the origin, or would they get a position relative to their island? Then again, I don't know how you would synchronize this with your graphics engine...

Maybe you could explain briefly how such a scalable solution would work. Thanks a lot...
SteveBaker
Posts: 127
Joined: Sun Aug 13, 2006 4:41 pm
Location: Cedar Hill, Texas

Post by SteveBaker »

The problem with having btScalar switchable between float and double is that you'd incur double precision costs throughout practically all of the math in the entire package. In truth, it's only distance and time that really need doubles. Velocities, accelerations, forces, masses, impulses, torques - all of those things can (and should) continue to be 'float'. Sizes and offset positions can be float too - it's just the absolute positions (and times) that are problematic.
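
As a rough sketch of that split (hypothetical names, not the Bullet API):

Code:

// Only the absolute world position needs double; every dynamic
// quantity can stay float.
struct BodyState
{
    double worldPosition[3];  // absolute position: the one thing needing double
    float  velocity[3];       // velocities, accelerations stay float
    float  force[3];          // forces, torques, impulses stay float
    float  halfExtents[3];    // sizes and local offsets stay float
};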
Ronin
Posts: 15
Joined: Fri Oct 06, 2006 9:36 am

Post by Ronin »

Yes, you are right - doing everything in double is an unnecessary waste of CPU cycles.

So would it be possible to introduce a separate data type for absolute positions and times, which could optionally be switched to double precision?
How much work would that be, and roughly how long would it take to implement such a solution?

I'm really looking forward to using this SDK; it seems to be pretty well done. I have been searching for some time now without finding any well-designed physics SDK with double-precision positions, and yours seems to be exactly what I need. Keep up the good work. :)
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Post by Erwin Coumans »

SteveBaker wrote:The problem with having btScalar switchable between float and double is that you'd incur double precision costs throughout practically all of the math in the entire package. In truth, it's only distance and time that really need doubles. Velocities, accelerations, forces, masses, impulses, torques - all of those things can (and should) continue to be 'float'. Sizes and offset positions can be float too - it's just the absolute positions (and times) that are problematic.
Bullet doesn't work with 'time', only with time differences (timesteps), which are small, so no need for doubles there. Using double precision does not simply affect the world transform: all algorithms and data that operate on the world transform are affected, so it propagates, unless you refactor the software and redesign the algorithms. It is all doable, but it is a matter of priorities.

In a lot of applications, you can group objects that are distant from what you call 'the origin' and shift the origin for that group, so you keep an additional offset for those objects. This bookkeeping can be done outside the SDK, as long as you don't have interaction over long distances.
Doing this automatically involves changes in the broadphase and island handling (which takes development time).
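
A minimal sketch of that bookkeeping, kept outside the SDK (names are made up):

Code:

// Keep a double-precision origin per group of distant objects, and hand
// the physics SDK only the small float-sized local coordinates.
struct GroupOrigin
{
    double x, y, z;  // offset of the group from the true world origin
};

void worldToLocal(const GroupOrigin& g,
                  double wx, double wy, double wz,
                  float& lx, float& ly, float& lz)
{
    lx = (float)(wx - g.x);
    ly = (float)(wy - g.y);
    lz = (float)(wz - g.z);
}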

So either shifting the origin for distant objects, or using btScalar everywhere (and optionally defining it as double), are the shorter-term workarounds.
SteveBaker
Posts: 127
Joined: Sun Aug 13, 2006 4:41 pm
Location: Cedar Hill, Texas

Post by SteveBaker »

Erwin Coumans wrote:Bullet doesn't work with 'time', only with time differences (timesteps), which are small, so no need for doubles there.
That's reasonable. We just need to be alert for absolute times that might creep into the API later. It's another one of those things where people debug their programs over a few hours of game-time - then when you turn it into a persistent world on an always-up server, things get unstable amazingly quickly! But if it's not a problem now, it's unlikely to become one in the future.
Using double precision does not simply affect the world transform: all algorithms and data that operate on the world transform are affected, so it propagates, unless you refactor the software and redesign the algorithms. It is all doable, but it is a matter of priorities.
Right - it's not a simple matter to fix - but for those doing outdoor applications, it's a major deal to work around.
In a lot of applications, you can group objects that are distant from what you call 'the origin' and shift the origin for that group, so you keep an additional offset for those objects. This bookkeeping can be done outside the SDK, as long as you don't have interaction over long distances.
Except that shifting origins tends to cause cached data to be invalidated - or to (potentially) cause subsystems to imagine that the object suddenly moved across the world at enormous speed. In asynchronous multi-processing applications (which is where we're heading, with fixed-timestep physics and variable-timestep graphics), there are all manner of really ugly race conditions that can bite you.

I've been down the 'shifting origins' route before with high-level graphics libraries (Silicon Graphics' Performer was the thorn in my side over this one). Dumping that for an approach where object origins were in doubles and the API switched to local origins (and therefore float precision) internally made for a vastly cleaner overall design.

You also have problems with objects moving between groups of local-coordinate objects. If I'm driving a simulated vehicle which is interacting with the ground, and I travel between distant groups of other objects, the vehicle has to switch between simulated groups and shift its origin all the time. For objects that are moving around in general, you may have to merge and split groups of objects that formerly shared a single origin. That's *nasty*.

It's generally a bad principle to have the API export internal problems out to the application.

But I certainly understand the effort this might entail - and the unlikelihood of it getting done anytime soon. It's important to separate 'what is right' from 'what we can do'. What is right is double-precision positional data; what we can do right now falls short of that. We need more developers to spread the load. Perhaps the increasing user base and connections with Blender will go some way toward attracting more people.
Ronin
Posts: 15
Joined: Fri Oct 06, 2006 9:36 am

Post by Ronin »

Defining btScalar as double would be a perfectly workable solution for quite some time; everything else is a subject for later optimization and not urgent...

My only request is double-precision positions, and the sooner it can be implemented, the better. I saw you already added this to the todo list, which is great. If you could give a rough estimate (based on the priority you give this task...) of how long it will take to implement, I'd be totally happy with the situation... :)