Way to Measure "Entropy" in a Simulation State

masoug
Posts: 5
Joined: Mon Apr 04, 2011 5:17 pm

Way to Measure "Entropy" in a Simulation State

Post by masoug »

Hi,
I'm trying to synchronize several Bullet physics simulations efficiently across a network (client-server; a simple networked game). My initial plan was to send the game state to all clients at a fixed interval: each client runs its own simulation and corrects itself every time the server's state arrives. But I realized that the client simulations will "diverge" from the server faster or slower depending on what's going on in the simulation. For example, if nothing is moving, the server can send the game state less often; conversely, if a lot of things are moving very quickly, the update rate should be higher (because I'm guessing the simulation instances across the clients will diverge more rapidly). So I was wondering if there is already a way to measure the game state's "entropy", so I can throttle the rate at which the server updates the clients?

One method I've been thinking about is to sum the absolute values of the velocities of all objects in the game. Does anyone else have suggestions or ideas?
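That idea could be sketched like this (purely illustrative, not Bullet API; the function names, the `scale` weight, and the interval bounds are all invented for the example):

```python
import math

def activity_score(bodies):
    """Sum of linear speeds over all bodies; 0 means the world is at rest.

    `bodies` is a list of (vx, vy, vz) linear-velocity tuples.
    """
    return sum(math.sqrt(vx*vx + vy*vy + vz*vz) for (vx, vy, vz) in bodies)

def update_interval(bodies, base=1.0, fastest=0.05, scale=0.1):
    """Map the activity score to a send interval: busier world -> shorter interval."""
    return max(fastest, base / (1.0 + scale * activity_score(bodies)))
```

With Bullet you would feed this the per-body linear (and possibly angular) velocities each server tick and sleep `update_interval(...)` seconds between broadcasts.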

Thanks!
-Masoug
STTrife
Posts: 109
Joined: Tue May 01, 2012 10:42 am

Re: Way to Measure "Entropy" in a Simulation State

Post by STTrife »

Hi masoug,

A long time ago, in a class about virtual reality, we had to implement a networked client-server physics simulation using ODE.
The way we did it was: the server runs the full simulation, the client runs a local extrapolation, and the server ALSO runs a local extrapolation for each client. (By extrapolation I mean simulating the physics world but without any collisions; this is very fast and can be done for each client separately on the server as well.)
That way the server knows, for each client, how far the extrapolated objects are diverging from their 'real' position, velocity and rotation. If the difference between the real state (as calculated by the physics engine) and the extrapolated object grows too large (you can define the threshold yourself; maybe base it on position/rotation only if you like), the server sends updates for those objects only. That way you are not sending unnecessary updates to any client.
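A minimal sketch of that scheme (names and data layout are made up for illustration): the server keeps, per client, a dead-reckoned copy of each object, advances it without collisions, and resends only the objects whose prediction has drifted past a threshold.

```python
import math

def extrapolate(state, dt):
    """Dead-reckon one object: advance position by velocity, ignoring collisions.

    `state` is ((px, py, pz), (vx, vy, vz)).
    """
    (px, py, pz), (vx, vy, vz) = state
    return ((px + vx*dt, py + vy*dt, pz + vz*dt), (vx, vy, vz))

def objects_to_resend(real, predicted, threshold):
    """IDs whose predicted position diverges from the real one by more than threshold.

    `real` and `predicted` map object id -> ((px, py, pz), (vx, vy, vz)).
    """
    stale = []
    for oid, (real_pos, _vel) in real.items():
        pred_pos, _ = predicted[oid]
        if math.dist(real_pos, pred_pos) > threshold:
            stale.append(oid)
    return stale
```

In a real setup you would also compare velocity and rotation, as described above, and reset the client's extrapolated copy whenever you send it a correction.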

As for the question you actually asked (if you want to do it your way): I suggest you think about which events cause the largest divergence... probably collisions. So you could look at the number of AABB intersections, because they indicate possible collisions. On the other hand, if there is no input from users, you might assume the simulations run the same. So maybe you could also count the number of objects changed by (other) users' input, because those changes cause discrepancy between the server and client simulations. The more other players are influencing the server world, the more updates you send?
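Folding those two signals into a broadcast rate could look something like this (a sketch only; the weights and bounds are invented, and in Bullet the overlap count would come from the broadphase pair cache):

```python
def updates_per_second(num_aabb_overlaps, num_input_events,
                       base_hz=2.0, max_hz=30.0):
    """More potential collisions and more user input -> faster broadcasts.

    The weights (1.0 per overlap, 5.0 per input event) are arbitrary
    starting points you would tune for your game.
    """
    hz = base_hz + 1.0 * num_aabb_overlaps + 5.0 * num_input_events
    return min(hz, max_hz)
```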
masoug
Posts: 5
Joined: Mon Apr 04, 2011 5:17 pm

Re: Way to Measure "Entropy" in a Simulation State

Post by masoug »

Thanks for your helpful suggestions!
STTrife wrote: As for the question you actually asked (if you want to do it your way): I suggest you think about which events cause the largest divergence... probably collisions. So you could look at the number of AABB intersections, because they indicate possible collisions. On the other hand, if there is no input from users, you might assume the simulations run the same. So maybe you could also count the number of objects changed by (other) users' input, because those changes cause discrepancy between the server and client simulations. The more other players are influencing the server world, the more updates you send?
Maybe I could combine several of those factors, such as the number of collisions plus user input? The technique you mentioned earlier, with multiple client extrapolations on the server, is great, but unfortunately I only have about two weeks left to finish this game, so it might be a little too time-intensive to implement.

I'll experiment (if I have time) with different combinations of the factors you mentioned and see which works the best.

-Masoug
STTrife
Posts: 109
Joined: Tue May 01, 2012 10:42 am

Re: Way to Measure "Entropy" in a Simulation State

Post by STTrife »

I suppose you could combine them in some way, but maybe also take a look at this:

http://www.bulletphysics.org/mediawiki- ... eterminism

If you set up your physics world deterministically and you have a steady framerate, there is no reason to assume the simulations will drift apart quickly, unless either the server or a client cannot keep up the framerate, or a user influences the simulation (usually by applying forces, or just by logging on or off).

So I would suggest: check the framerates on the server and the clients (if either one is not keeping up, you know they will go out of sync fast), and focus on user input. Even if you have only one client, any input that is given cannot be introduced on the server and the client at exactly the same 'moment' in the simulation, so you will always introduce some discrepancy.
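The "steady framerate" part is usually handled with a fixed-timestep loop, so every peer advances the simulation in identical increments regardless of render framerate (in Bullet, `stepSimulation`'s `fixedTimeStep`/`maxSubSteps` arguments do this internally). A generic sketch:

```python
# Fixed internal step; every peer must use the same value.
FIXED_DT = 1.0 / 60.0

def advance(accumulator, frame_time, step_fn, max_substeps=10):
    """Consume frame_time in fixed-size steps; return the leftover accumulator.

    `step_fn(dt)` advances the physics world by exactly dt. Capping the
    substeps keeps a slow machine from spiraling (at the cost of drifting,
    which is exactly the case where you'd resync over the network).
    """
    accumulator += frame_time
    substeps = 0
    while accumulator >= FIXED_DT and substeps < max_substeps:
        step_fn(FIXED_DT)   # identical step size on every peer
        accumulator -= FIXED_DT
        substeps += 1
    return accumulator
```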
masoug
Posts: 5
Joined: Mon Apr 04, 2011 5:17 pm

Re: Way to Measure "Entropy" in a Simulation State

Post by masoug »

Thanks STTrife! I really like the framerate idea, so I think I'll take a look at that first. User input will always cause desynchronization, like you mentioned, so I'll see if there is a way for the server to detect rapid user input and increase the broadcast rate of the official game state accordingly.
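Detecting "rapid user input" could be as simple as counting input events over a sliding window and boosting the broadcast rate while the window is busy (a hypothetical sketch; the class and its parameters are invented):

```python
from collections import deque

class InputRateMonitor:
    """Tracks input-event timestamps over a trailing window of `window` seconds."""

    def __init__(self, window=1.0):
        self.window = window
        self.events = deque()

    def record(self, t):
        """Call with the current time whenever a user input event arrives."""
        self.events.append(t)

    def rate(self, now):
        """Input events per second over the trailing window."""
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) / self.window
```

The server would then pick its broadcast interval as a decreasing function of `rate(now)`, falling back to a slow keep-alive rate when nobody is doing anything.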