Integration of FLUIDS v.2 (SPH Fluids)

rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

I've made a bit of progress on integrating FLUIDS v.2 into Bullet.

Currently, only basic one-way interaction is implemented. Fluid particles are run through
a discrete collision check and are accelerated out of btCollisionObjects if collisions are
detected. Since the detection is discrete, it is highly likely that fluid particles will tunnel
through shapes without volume, such as btHeightfieldTerrainShape or triangle meshes.
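
The basic idea, as a rough sketch (not the repository's actual code; the ContactResultCallback signature varies between Bullet versions, and the penalty constant and normal sign would need tuning):

Code: Select all

// Discrete one-way interaction: test each particle against the world using a
// small sphere proxy, and accumulate a penalty push-out for any penetration.
struct ParticlePenaltyCallback : public btCollisionWorld::ContactResultCallback
{
	btVector3 m_accumulatedPush;

	ParticlePenaltyCallback() : m_accumulatedPush(0, 0, 0) {}

	virtual btScalar addSingleResult(btManifoldPoint& cp,
		const btCollisionObject* colObj0, int partId0, int index0,
		const btCollisionObject* colObj1, int partId1, int index1)
	{
		const btScalar STIFFNESS = btScalar(100.0);	// penalty constant; needs tuning
		btScalar depth = -cp.getDistance();		// > 0 when penetrating
		if( depth > btScalar(0.0) )
			m_accumulatedPush += cp.m_normalWorldOnB * (STIFFNESS * depth);
		return btScalar(0.0);
	}
};

// Usage: wrap the particle in a btSphereShape, call btCollisionWorld::contactTest()
// with this callback, then add m_accumulatedPush to the particle's acceleration.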


Progress/demo at:
https://github.com/rtrius/Bullet-FLUIDS
(Requires bullet-2.80-rev2531 and Visual C++ 2005/2008/2010)

Changes as of 2012 May 20
(see the git commit log for more recent changes):

-Remove: demo code
-Remove: CUDA support
-Remove: OpenGL function calls, GLUT dependence (in FluidSystem)
-Remove: class GeomX (manager of several arbitrarily sized arrays)
-Remove: class PointSet (particle system)
-Remove: unused files (esp. files in fluids/common)
-Remove: unused variables

-Various bugfixes:
    Grid cell allocation (use m_Resolution)
    Stack overflow (on allocation of FluidSystem)

-Convert coding style towards Bullet
-Separate emitter from FluidSystem
-Reimplement Vector3DF as a subset of btVector3
(not yet replaced due to various issues)

-Add: (unoptimized) marching cubes rendering
-Add: OpenCL port (direct C++ port; not optimized for GPU)
-Add: FluidAbsorber (destroys fluid particles)
-Add: rudimentary Fluid-btCollisionObject interaction (no dynamics)
(collisions are somewhat unstable)


Issues in tracker:
Add SPH fluid interaction with rigid body / cloth simulation
http://code.google.com/p/bullet/issues/detail?id=296

Add SPH fluid iso-surface generation
http://code.google.com/p/bullet/issues/detail?id=297
majestik666
Posts: 66
Joined: Tue Mar 02, 2010 6:13 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by majestik666 »

Nice work!

Tested it; works with NVidia OpenCL & Intel OpenCL too.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by Erwin Coumans »

Interesting. Do you mind uploading some youtube video(s) of your work?

Will it become available under the zlib license as well?
rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

majestik666 wrote: Tested and works with NVidia OpenCL & Intel OpenCL too.
Thanks for testing; how does the Intel OpenCL implementation perform?
Erwin Coumans wrote: Interesting. Do you mind uploading some youtube video(s) of your work?
Will it become available under the zlib license as well?
Yes; all files in the Bullet-FLUIDS repository are licensed under zlib. Furthermore,
the only code in the repository that did not originate from Bullet 2.80,
FLUIDS v.2, or myself is the marching cubes tables from:

http://paulbourke.net/geometry/polygonise/, which notes that the tables are from:
http://paulbourke.net/geometry/polygoni ... source.cpp (public domain)

Video of the entire demo:
http://www.youtube.com/watch?v=jJ-rjffeFRY
(There were some recording issues at the very end.)
majestik666
Posts: 66
Joined: Tue Mar 02, 2010 6:13 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by majestik666 »

The NVidia one was faster, but I was getting pretty good
performance with both. Can't test right now, but I'll
post some numbers tonight.

Edit:
Just did some quick performance tests for the 3 demos.
The machine used is a Core i7 2.8GHz with an NVidia 285.

NVidia OpenCL
1 - 55ms for 11340 particles
2 - 70ms for 17496 particles
3 - 40ms for ~10000 particles

Intel OpenCL
1 - 60ms for 11340 particles
2 - 80ms for 17496 particles
3 - 50ms for ~10000 particles
majestik666
Posts: 66
Joined: Tue Mar 02, 2010 6:13 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by majestik666 »

Forgot to mention: the demo seems to be leaking memory a whole lot!

Not really a big deal since it's really just a test, but it's worth
keeping in mind.
ayoung
Posts: 1
Joined: Thu Sep 08, 2011 11:23 pm

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by ayoung »

Hi rtrius,

I have been working on a similar project myself: http://andrewfsu.blogspot.com. I currently have two-way interaction, but I haven't integrated with the Bullet library yet. I am currently working on integrating my SPH with the Bullet experiments repository, as I would like to exploit the new OpenCL-based rigid body simulator. I also have plans to make a more generalized particle system which allows for rigid body interaction. What plans do you have for your library?

All my code is available on github: https://github.com/ayoung200/EnjaParticles. The library currently only works correctly on Linux; I believe the main problem with the Windows build has to do with struct alignment.

I also face similar problems with boundary conditions. SPH is very sensitive to the spring/damper boundary conditions. I have read many papers and found that several other boundary conditions exist which are more involved (and expensive) but generally more stable. Tweaking your values for the spring and damping parameters can help, but then the simulation becomes more susceptible to "leaks". Some of the more accurate methods actually take into account the pressure of the fluid near the interface.
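
For concreteness, the basic spring/damper boundary force has roughly this form (a generic textbook sketch, not the exact EnjaParticles code; all names are illustrative):

Code: Select all

#include "LinearMath/btVector3.h"

// Penalty force for a particle penetrating a boundary by 'depth' along the
// inward normal n: a spring pushes it out, a damper resists motion along n.
//   F = stiffness * depth * n - damping * dot(v, n) * n
btVector3 springDamperBoundaryForce(const btVector3& n, btScalar depth,
									const btVector3& velocity,
									btScalar stiffness, btScalar damping)
{
	btScalar normalSpeed = velocity.dot(n);
	return n * (stiffness * depth) - n * (damping * normalSpeed);
}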

Here is a video with marching cubes enabled. https://vimeo.com/40670094

Regards,
Andrew Young
rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

majestik666 wrote: Forgot to mention: the demo seems to be leaking memory a whole lot!
Thanks for the report; fixed in the latest commit.
ayoung wrote: What plans do you have for your library?
I'm aiming to make SPH fluids usable with Bullet's main/C++ branch by implementing
and comparing the performance of various SPH-rigid body interaction methods.

Most of the work so far has focused on refactoring the SPH code; at this point
I have only conducted a shallow investigation of collision techniques.

Some of the interaction methods that I plan to look into:
-One-way interaction (penalty force), with and without CCD
-Representing rigid bodies as particles
-Combinations of the above two, such as using the penalty force method
for the interior of a cube and particles for the edges/faces
Rvo
Posts: 21
Joined: Fri Nov 01, 2013 1:44 pm

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by Rvo »

Dear rtrius (or someone else, of course),

Is it possible to reduce the SPH fluid scaling? At the moment, the grid size is 2.5 (meters?), but I am using your library with Ogre3D; my Ogre3D mesh is about 2 meters tall and I need small particles, so a much smaller grid. Normally the particles are 1 meter in radius. That I can fix, but the grid size seems dependent on the smoothing radius and simulation scale. I can adapt those, making for example the grid size 0.1 - however this slows down performance drastically, down to maybe 3-4 fps where usually I have 80-90. I can scale the mesh up 100 times to make it look OK, but then the particles can drop 200 meters (spawning from his hands), velocities and gravity have to scale by 100, and the animation/interpolation is a lot less smooth than it would be under normal scaling.

I did make the bounding box small, like 5x5x5, but that did not seem to make a difference.

I would be very thankful if you could help me out and give me some pointers on how to make this work.

Regards,
Rvo
rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

In the past, the fluid's bounding box/AABB determined the number of grid
cells, but that has been removed in the current implementation. It's only
really there for testing, or if you intend to use the full extent of the grid.
The grid size is by design dependent on the smoothing radius and simulation
scale, as the grid's main purpose is to accelerate the distance calculation
between SPH particles.
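
To illustrate why (a sketch with assumed names, not the actual btFluidSortingGrid code): if the world-space cell size matches the smoothing radius, all SPH neighbors of a particle are guaranteed to be in the 3x3x3 block of cells around it.

Code: Select all

#include "LinearMath/btVector3.h"

// The cell size tracks the smoothing radius; dividing by the simulation scale
// converts the radius from simulation units to world units.
template <typename CellCallback>
void forEachNeighborCell(const btVector3& p, const btVector3& gridMin,
						btScalar smoothingRadius, btScalar simulationScale,
						CellCallback& processCell)
{
	btScalar worldCellSize = smoothingRadius / simulationScale;

	int cx = static_cast<int>( (p.x() - gridMin.x()) / worldCellSize );
	int cy = static_cast<int>( (p.y() - gridMin.y()) / worldCellSize );
	int cz = static_cast<int>( (p.z() - gridMin.z()) / worldCellSize );

	// Every particle within the smoothing radius of p is in one of these 27 cells
	for(int z = cz - 1; z <= cz + 1; ++z)
		for(int y = cy - 1; y <= cy + 1; ++y)
			for(int x = cx - 1; x <= cx + 1; ++x)
				processCell(x, y, z);
}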

The first cause that comes to mind is the fluid-rigid interaction. The current
method tries to take advantage of both the fluid's grid and the rigid
body broadphase. It first converts the entire rigid body's AABB into grid cells
and then performs a binary search for each of those grid cells to find potentially
colliding particles. Next, if the rigid body is compound or concave, each fluid particle
is tested against the rigid body's midphase. The issue is that if the rigid body has a
very large AABB relative to the particles, then the algorithm performs a
very large number of binary searches.

In BulletFluids/Sph/btFluidSphRigidCollisionDetector.cpp,
in btFluidSphRigidCollisionDetector::performNarrowphase(),
try replacing

Code: Select all

	grid.forEachGridCell(rigidMin, rigidMax, particleRigidCollider);
with something like (not sure if the function names are correct, see btFluidSortingGrid.h)

Code: Select all

	// Visit every grid cell directly, skipping the grid-AABB query
	for(int i = 0; i < grid.getNumGridCells(); ++i)
	{
		const btFluidGridIterator& FI = grid.getGridCell(i);
		particleRigidCollider.processParticles(FI, btVector3(), btVector3());
	}
to skip the grid-AABB test. If there are other, small rigid bodies in the simulation,
it would probably be best to add a branch that checks whether the rigid body has
a relatively large AABB and to use the for loop above only in that case, as sketched below.
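
A sketch of that branch, reusing the names above (with the same caveat that the exact function names may differ; 'cellSize' here stands for the grid cell width at world scale):

Code: Select all

	btVector3 aabbExtent = rigidMax - rigidMin;
	btScalar cellsCovered = (aabbExtent.x() / cellSize)
						  * (aabbExtent.y() / cellSize)
						  * (aabbExtent.z() / cellSize);

	// If the AABB spans more cells than the sparse grid actually contains,
	// iterating over all occupied cells is cheaper than per-cell binary searches
	if( cellsCovered > btScalar( grid.getNumGridCells() ) )
	{
		for(int i = 0; i < grid.getNumGridCells(); ++i)
		{
			const btFluidGridIterator& FI = grid.getGridCell(i);
			particleRigidCollider.processParticles(FI, btVector3(), btVector3());
		}
	}
	else grid.forEachGridCell(rigidMin, rigidMax, particleRigidCollider);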

That may not solve the issue though, as the sphere-trimesh collision algorithm is
slow by nature. How many particles are in the simulation? How many triangles
does the mesh have? Even using, say, 4000 particles on a simple heightfield mesh
of 300 triangles would be fairly slow.

If the mesh's triangles are directly being used for collisions, it could greatly help
performance to use convex decomposition to approximate the mesh with a group
of convex hulls (see the HACD demo). With particles that are much smaller than
the mesh, the approximation would not be noticeable.

If you do not mind using an even more unstable codebase, there is also a
work-in-progress Bullet3 fork (https://github.com/rtrius/bullet3), which runs entirely on the
GPU, but it is far from complete. For example, it is currently not possible to add or
remove particles while the simulation is running.
Rvo
Posts: 21
Joined: Fri Nov 01, 2013 1:44 pm

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by Rvo »

Thanks for your answers. At the moment I seem to have satisfying performance, but I am not 100% sure what changed. I am using a BvhTriangleMesh and it seems to be working nicely with 250 FPS in Ogre, but the mesh consists of "only" a few hundred vertices, which is fine for me.

My main problem that still remains is that, if I make the particles 0.01 meter in radius, the simulation seems to run "too fast". When still using 1 meter radius particles, they fall down smoothly (from say 100 meters height), seemingly correctly (although a bit slowly; I assume this has to do with the height / acceleration?). When scaling them by 1/100th (and dropping them from 1 meter), their velocities are vastly increased and the particles are shooting all over the place. I have tried fiddling with the simulation scale, smoothing radius, etc., but I cannot seem to make it work like it should. I also adapt the particle mass accordingly. Am I missing something paramount, or was it never intended to make the particles this small, and is there some code that assumes a 1 meter particle radius?
rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

It should be possible to decrease the particle size. In theory, only
m_simulationScale needs to be adjusted in order to shrink the fluid
simulation. In practice, however, the simulation is sensitive to floating
point rounding, so it will probably be necessary to adjust other
parameters as the scale changes. I've tested a simulation scale as
high as 0.128 without having to change any other parameters. That's
not to say that there will necessarily be issues at higher scales;
it is just that I have not tested them.

To elaborate on the simulation scale: it determines how much the particles
are scaled up. The scale factor is 1 / m_simulationScale. So, by default, the
relative size of a particle is 1 / 0.004 == 250; that is, a particle 1m in
size is scaled to 250m compared to a rigid body. In order to make the
particles smaller, the simulation scale should be increased. For instance,
a simulation scale of 0.008 makes particles half as large, relatively, as the default (1 / 0.008 == 125).
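
As a quick worked example of the relation (parameter name from the post):

Code: Select all

btScalar simulationScale = btScalar(0.004);				// the default
btScalar relativeSize = btScalar(1.0) / simulationScale;	// == 250: a 1m particle acts like a 250m object

simulationScale = btScalar(0.008);						// doubling the scale...
relativeSize = btScalar(1.0) / simulationScale;			// == 125: ...halves the relative particle size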

Try starting with a simple scene, without any rigid bodies or collision
objects, and enable the AABB boundary. Then slowly increase the simulation
scale, say, from 0.004 to 0.008 and so on. No other parameters should be
changed at this stage, and the simulation should continue to be stable
as the scale is increased.

When the particles are small enough, then add rigid bodies and adjust the
'world scale' parameters, such as m_particleRadius and m_particleMargin,
in order to make them collide correctly.
Rvo wrote: When scaling them by 1/100th (and dropping from 1 meter) then their velocities are vastly increased and the particles are shooting all over the place.
When the particle size is reduced, and after the particles move all
over the place, does the simulation settle down, or do they keep moving?
If they only move violently at the start of the simulation, it may be
that the particles are deeply intersecting the AABB or a rigid body.
Another possibility is that the particles are spawned too closely together.

If you can create a similar situation in the FluidSphDemo, it would help
in figuring out the issue.

I've committed the grid cell search fix above and made a minor adjustment
for smaller scale simulations at the GitHub repository.

Thanks for the feedback.
Rvo
Posts: 21
Joined: Fri Nov 01, 2013 1:44 pm

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by Rvo »

Thank you for the detailed explanation; it makes a lot of sense. I actually thought reducing the simulation scale was the way to go: smaller particles, smaller scale. I have looked into the things you said, testing with a bare scene and going from there, adjusting the simulation scale first and then trying other things like the margins/radius.

Does the scaling also explain the following? When I calculate a position for an emitter (in Ogre3D, but in meter units) and its previous position, I get a certain velocity that I apply to a particle that spawns at the new emitter location. I'm fairly confident these numbers are correct, yet the velocity seems inaccurate: with slow arm movement, the particle gets flung into the air, so to speak. By scaling the velocity by 1/100th it seems more realistic. From what you tell me, the simscale of 0.004 means that maybe the velocity I give is 'applied' 250 times larger than it should be, compared to the real units from the rigid body? If that makes any sense?

It all seems pretty stable when setting the simulation scale *= 20, with everything settling down. I'm still not sure what a good simulation scale is, and why does it matter what the scale is? I have a mesh that is 2 meters high and I'd like to have a visually appealing SPH fluid that interacts correctly with rigid bodies, like my bvhTriangleMesh. So, when I spawn particles above the mesh, I want them to collide correctly so I can apply friction/restitution coefficients to mimic effects like blood (for example). I'd like the particles to be "kind of" real size; if Bullet does not really allow for particles < 0.025m radius (0.05m diameter collision shape?) then I can scale up my mesh a bit. Like I said, I am aiming for bleeding effects, so the particles are fairly small. What does the simulation scale matter (say 250m vs 12.5m) and what should I aim for in my scenario? Is 12.5 good or still unrealistic? Should I adapt by scaling the velocities passed to particles? I'm not sure...

As for performance, I cannot set the simulation scale to 20x+ at the moment; it works, but the performance is quite unsatisfying. This seems to happen when I add a rigid body as the floor, just like in the SphDemo app (BoxShape(50, 50, 50), translated in the Y-direction by -50). When I remove the floor, it's all fast and smooth. Kind of odd. When taking the standard simulation scale and 11000-ish particles and not modifying anything else, just a boundary and a floor rigid body (50, 50, 50), the particles really look like a fluid and performance is great. Setting the simulation scale to 0.128 like you mentioned works only when there are no rigid bodies; otherwise the speed drops drastically. Although the fluid seems pretty odd at that scale, I must say. Setting it to 0.032 (so, 8x larger) works okay; however, it almost looks like the fluid is slowed down, the fluid looks really compressed, and it does not settle down, which was not the case with the original simulation scale. Does the smoothing radius need to be adapted along with the simulation scale, by any chance?

I see you also adapted btFluidSphSolverPCISPH.h; is that solver any good for me, or should I stick to the default solver (on both GPU and CPU)?

One more thing: each frame of my mesh's animation I rebuild its bvhTriangleMesh to get the most accurate collision shape for that frame, even if that is mighty slow compared to approximate compound shapes (I need to for my approach). It seems to work well, at least when keeping the particle radius at 1m for now, but sometimes it fails to correctly detect collisions, although not very often. Is using a GImpact shape going to change anything, or would you say that it will not make a difference or will perform worse?

Thank you again; your help and explanations are much appreciated :)

Rvo.


P.S. The internal fluid timestep is 0.003 - what is the idea behind this fixed timestep? I tried setting it dynamically to the same timestep used in StepSimulation (it's around 0.0003 with a high framerate), but that seems to really "upset" the simulation?
rtrius
Posts: 43
Joined: Sat May 26, 2012 1:09 am

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by rtrius »

Rvo wrote: By scaling the velocity by 1/100th it seems more realistic. From what you tell me, the simscale of 0.004
means that maybe the velocity I give is 'applied' 250 times larger than it should compared to the real units from
the rigid body? If that makes any sense?
Correct: the velocity is at simulation scale, so the velocity at
rigid body scale should be multiplied by m_simulationScale before it is applied.
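
For the emitter case described earlier, that amounts to something like this (a sketch; the function and variable names are illustrative):

Code: Select all

// Estimate the emitter velocity in world (rigid body) units, then convert
// it to simulation scale before assigning it to a spawned particle.
btVector3 emitterParticleVelocity(const btVector3& emitterPosition,
								const btVector3& previousEmitterPosition,
								btScalar deltaTime, btScalar simulationScale)
{
	btVector3 worldVelocity = (emitterPosition - previousEmitterPosition) / deltaTime;
	return worldVelocity * simulationScale;	// e.g. * 0.004 at the default scale
}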

There is not really a "correct" simulation scale; just use what looks right. It should also
not be necessary to change the smoothing radius along with the simulation scale.

In order to improve the performance with rigid bodies, try reducing
const int MAX_CELL_THRESHOLD = 1000;
in BulletFluids/Sph/btFluidSphRigidCollisionDetector.cpp.
If that does not solve the issue, there is probably no solution with sufficient performance.
Rigid body libraries on CPU generally support no more than ~1000 colliding objects at 60fps,
and adding particle fluids greatly increases the number of colliding objects.

The PCISPH solver is for testing. In my experience, it makes the fluid less compressible but
suffers from considerably worse performance. PCISPH does not seem to be stable past a time
step of 0.0016s (about 1/2 of the default) and also requires iterating over all particles 6+ times per
frame, compared to 2 times for the default solver.

GImpact shapes can be used to simulate dynamic rigid bodies with triangle meshes. The main
difference is that GImpact implements the calculation of the inertia tensor; bvhTriangleMesh does not,
so rigid bodies using bvhTriangleMesh are unable to rotate. It might also perform worse, but I
have not verified that. It's probably not useful if the mesh is rebuilt every frame.
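
For reference, setting up a GImpact shape looks roughly like this (standard Bullet 2.x usage, written from memory; 'dispatcher', 'triangleMeshInterface', and 'mass' are assumed to exist):

Code: Select all

#include "BulletCollision/Gimpact/btGImpactShape.h"
#include "BulletCollision/Gimpact/btGImpactCollisionAlgorithm.h"

// GImpact's collision algorithm must be registered with the dispatcher once
btGImpactCollisionAlgorithm::registerAlgorithm(dispatcher);

btGImpactMeshShape* shape = new btGImpactMeshShape(triangleMeshInterface);
shape->updateBound();	// required after creation and whenever the mesh changes

// Unlike btBvhTriangleMeshShape, this yields a usable inertia tensor
btVector3 inertia(0, 0, 0);
shape->calculateLocalInertia(mass, inertia);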

On missing collisions: Bullet, as well as BulletFluids, does not compare the current and past
meshes, so there will be undetected collisions if a triangle moves past a particle or rigid body.
As far as I am aware, there is no way to simulate interactions with a dynamic triangle mesh
that is both accurate and fast. Bullet's soft bodies, for example, either group triangles into
convex clusters or use only the vertices of the mesh for collisions (which results in missed
collisions if the triangles are too large). Aside from reducing the speed of the animation, or
running several physics frames for each rendering frame, any solution would require substantial
changes to the code.

The fluid time step is different because, by default, the rigid body simulation runs at 16ms/frame and
the fluid is much more sensitive to large time steps. The highest stable time step in the literature
for this type of simulation is ~10ms, and that requires reducing the stiffness, which makes the fluid
look less realistic. There are also other methods, but many of them iterate over all particles several
times per frame.
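
One common way to bridge the two rates is fixed substepping: run several small fluid steps per rigid body step, keeping each fluid step at or below the stable limit. A sketch of the general idea (stepRigidBodyWorld/stepFluidWorld are hypothetical, not the library's actual stepping code):

Code: Select all

const btScalar RIGID_TIME_STEP = btScalar(1.0 / 60.0);	// ~16ms rigid body step
const btScalar MAX_FLUID_TIME_STEP = btScalar(0.003);	// stable fluid step

// Round up so each substep stays at or below 0.003
int numSubsteps = static_cast<int>(RIGID_TIME_STEP / MAX_FLUID_TIME_STEP) + 1;
btScalar fluidTimeStep = RIGID_TIME_STEP / btScalar(numSubsteps);

stepRigidBodyWorld(RIGID_TIME_STEP);	// hypothetical
for(int i = 0; i < numSubsteps; ++i)
	stepFluidWorld(fluidTimeStep);		// hypothetical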

Unfortunately, the library is not capable of simulations using dynamic triangle meshes with the
exact same physical and graphical representation. Even with major optimizations, the performance
is insufficient.
Erwin Coumans
Site Admin
Posts: 4221
Joined: Sun Jun 26, 2005 6:43 pm
Location: California, USA

Re: Integration of FLUIDS v.2 (SPH Fluids)

Post by Erwin Coumans »

It would be nice to see FLUIDS v.3 integrated with Bullet 3.x, using OpenCL. See http://fluids3.com and github.com/erwincoumans/bullet3.

Currently, FLUIDS v.3 is CPU and CUDA; an OpenCL port would be a good start.