
Ten Minute Physics is a great resource! Matthias Mueller makes concise demos and explains them really well. (Though I think it would take someone without existing familiarity with the subject matter a lot longer than 10 minutes to digest the results. Lots of rewatching!)

A really interesting result that's come out of Matthias and co.'s research in the last few years, and which is still percolating its way through the realtime physics community, is the so-called "small step" technique. Explained at a really high level: integrating a physical system requires solving large sparse systems of nonlinear equations. Traditionally, the equations would be solved once a frame with iterative methods (often with a fixed number of iterations, such as 10 or 20 per frame). It turns out it's massively more efficient, for accurate results, to run just a single iteration of the solver but increase the physics framerate by 10 or 20 times (or actually, since there's a tiny bit of overhead to process a physics frame, for equivalent computational load you can usually only increase the framerate 9-18 times). The intuition here is that even though 1 iteration of the solver is much less accurate than 20, by only taking a small step forward in time, the error drift in the physical system is even lower. It's only somewhat handwavy to say that solver iterations reduce error at a linear rate, while decreasing the physics frame time reduces error at a quadratic rate.
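A toy illustration of the effect (a sketch of my own, not the actual solver from the papers): a pendulum bob held at unit distance from the origin by a single position-based distance constraint. For the same work per frame, many small steps with one solver iteration each lose far less energy than one big step with many iterations.

```python
import math

def simulate(frames, substeps, iterations, dt_frame=1.0 / 60.0):
    """PBD-style pendulum: unit-mass bob constrained to distance 1 from the origin."""
    x, y = 1.0, 0.0          # start horizontal, at rest (total energy = 0)
    vx, vy = 0.0, 0.0
    g = -9.81
    dt = dt_frame / substeps
    for _ in range(frames):
        for _ in range(substeps):
            vy += g * dt                        # apply gravity
            px, py = x + vx * dt, y + vy * dt   # predict position
            for _ in range(iterations):         # project onto the constraint
                d = math.hypot(px, py)
                px, py = px / d, py / d
            vx, vy = (px - x) / dt, (py - y) / dt  # derive velocity from positions
            x, y = px, py
    return 0.5 * (vx * vx + vy * vy) - g * y    # total energy; exact answer is 0

# Equal cost per frame: 1 big step x 20 iterations vs 20 small steps x 1 iteration
e_big = simulate(300, substeps=1, iterations=20)
e_small = simulate(300, substeps=20, iterations=1)
```

With a single constraint, the projection is already exact after one iteration, so the extra 19 iterations in the big-step version buy nothing; the small-step version's energy stays much closer to the true value of zero.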

While almost all engines have added an option to use the small step technique, one issue slowing down adoption is that it is too accurate! The old way of solving the nonlinear equations once a frame introduced a huge amount of numerical damping. That greatly reduced accuracy, but it also served to stabilize simulations, in a way. With the small step technique, numerical damping is greatly reduced, so intentional, physically based damping models need to be added back in and tuned by hand. People aren't used to working this way, so designers are still toggling back to the "large step" method. It's likely that once people figure out how to properly use this new technique, you'll see a noticeable uptick in the fidelity of physics in games.
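For instance (a minimal sketch, not tied to any particular engine), writing explicit damping as an exponential decay makes the tuned damping rate independent of the substep size, so a coefficient tuned at one substep count behaves the same at another:

```python
import math

def damp_velocity(v, k, dt):
    # Exponential decay: v' = v * exp(-k * dt). Applied per substep, this gives
    # the same total decay per frame regardless of how many substeps there are,
    # unlike the common v *= (1 - k * dt), which is timestep-dependent.
    return v * math.exp(-k * dt)

# One frame of damping, split into 1 vs 20 substeps, gives the same result:
v1 = damp_velocity(3.0, k=2.0, dt=1.0 / 60.0)
v20 = 3.0
for _ in range(20):
    v20 = damp_velocity(v20, k=2.0, dt=1.0 / 1200.0)
```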



This is very coincidental timing. The physics simulation in the Source game engine, and Garry's Mod, absolutely cemented in me an interest in computational physics. I really wanted to see fully simulated fluid water in games, particularly GMod, but it never really happened; that was 15 years ago.

This never stopped bothering me, so I recently loaded up the game and programmed an entity that was simply a particle repelling other particles with inverse-square distance falloff. I was only able to get up to around 100 particles before the game slowed from 200+ fps to ~10. There seemed to be a huge amount of overhead in the Lua scripting system's connection to the game/physics engine. Something about getting the position of 100 particles, or asking for all the particles within a given range of each particle, even for only 100, was enough to send frame times into the 100ms-1000ms range. I wanted at least 500-1000 particles at 10+ fps, and I'm sure that, in terms of raw math, that is nothing for modern CPUs, even with the most naive implementation. I had to figure out exactly which Lua functions had the most overhead for such a small amount of simple math.

So I rewrote the script, chasing every rabbit hole that would let me increase the number of particles while maintaining over 10 fps.

1. Switch from a per-particle loop to a global loop that cached each particle's position per frame, so it was more pure math and fewer engine-binding calls/"context switches". Went from approx 50 particles to approx 150.

2. Limit the particle simulation to 10 Hz instead of 66+ Hz, and adjust the force scale to compensate. Raised the limit to approx 300-400 particles.

3. Add a dynamic scaling system, so the simulation never tries to update more than 100 particles per 10 Hz tick. Raised the limit to approx 700 particles.
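Step 3 might look roughly like this (a hypothetical sketch in Python rather than GMod Lua; the round-robin policy and the `budget` parameter are my guesses at what the dynamic scaling does): each tick updates at most `budget` particles and scales the timestep up so each particle still advances at the same effective rate.

```python
def step_budgeted(xs, vs, accel, start, budget, dt):
    """Advance at most `budget` of the particles this tick, round-robin.

    Each particle is only touched every ~n/budget ticks, so its effective
    timestep is scaled up to compensate. Returns the index where the next
    tick should resume.
    """
    n = len(xs)
    count = min(budget, n)
    eff_dt = dt * n / count  # compensate for the ticks this particle skips
    for k in range(count):
        i = (start + k) % n
        vs[i] += accel(i, xs) * eff_dt  # semi-implicit Euler
        xs[i] += vs[i] * eff_dt
    return (start + count) % n
```

With 700 particles and a budget of 100, every particle gets updated once every 7 ticks.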

At this point, there are other, non-CPU bottlenecks. Any method of drawing 700 entities, even as basic wireframes, is significant, even with a very high-end GPU on a nearly 20-year-old game, and there's no easy way for me to break past this.

I am sure that if I learned how the Source engine's plugin system worked and hooked into the physics engine using C++ instead of going through the Lua bindings, I could simulate a thousand particles at 100+ fps, but that's about where the scope of the project starts to contend with "real life", so I continue to wait for a 3D sandbox game with satisfying fluid physics.

Your optimization method of "only one iteration per frame" is in a sense the same as, and in a sense the opposite of, my method of simply not simulating every particle every frame... I just didn't have the spare performance for that. Then again, I'm only simulating a single force field with no proper collision detection or anything fancy.


GPUs can definitely handle more than 700 entities. Here's 100k in a web browser: http://david.li/fluid/

But even most offline rendered fluid simulations are not that convincing to my eyes, let alone real time ones. It will be a long, long time before we have truly realistic behavior of real time water in unconstrained settings for games.


Have you seen RealFlow? I was having fun with their fluid physics simulator a few years ago, it's really good.

https://realflow.com/


It looks cool, but I find that there's a characteristic "blobby" look to these simulations, especially visible in mostly flat parts or whenever thin sheets of liquid form and break apart. I imagine that increased resolution can mitigate it but the cost eventually becomes prohibitive even for offline simulation so people settle for "good enough". And I think foam/bubbles are usually kind of hacky (when present at all) rather than being simulated in a principled way.


Yeah, it must be something specific about the overhead of drawing entities in the Source engine; it doesn't matter whether I render the particles as 2D sprites or as 3D models, both have a similar performance impact over not rendering them at all once there are 700 of them, and there's no way it's a hardware limitation.

The GP shared some links below from other indie devs; there's a realtime water simulation that looks pretty dang good, though it seems to be on the scale of a human-sized tub.


It sounds like the problem is that you're doing O(N^2) calculations every tick, because each particle has to interact with every other particle. To make it faster, you should keep a spatial data structure so that you can query only the particles within a certain range of the current particle and interact with just those, ignoring the rest.

The usual implementation is an octree or something similar.

That said, N=100 should still be fast with the naive algorithm so I'd definitely move off the scripting language to something faster.


For particles of uniform size, a simple uniform grid will work best. If you set the grid cell size to the particle diameter, then you only need to search the adjacent voxels for each particle. The uniform grid method is also easy to port to the GPU, unlike other acceleration structures like BVHs or Octrees.


I do have a parameter so that each particle only considers other particles within 750 units (the free space is 3000x3000x3000 units), but that parameter did not seem too critical, and technically it's still calculating the distance between each pair twice, so I guess I could speed that up. Vector/"table"? operations seemed slower than plain loops; not sure if there's a plain interpreter speedup possible there.
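Since the repulsion is symmetric (Newton's third law), you can halve that pairwise work by visiting each pair once and applying equal-and-opposite forces. A sketch in Python (the 750-unit cutoff is from the comment above; everything else is illustrative):

```python
def pairwise_forces(pos, cutoff=750.0, strength=1.0):
    """Inverse-square repulsion; each pair (i, j) is visited once, with j > i."""
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    c2 = cutoff * cutoff
    for i in range(n):
        xi, yi, zi = pos[i]
        for j in range(i + 1, n):
            dx, dy, dz = pos[j][0] - xi, pos[j][1] - yi, pos[j][2] - zi
            d2 = dx * dx + dy * dy + dz * dz
            if d2 > c2 or d2 == 0.0:
                continue
            # magnitude ~ strength / d^2, directed along the pair axis:
            # (strength / d^3) * (dx, dy, dz) has length strength / d^2
            s = strength / (d2 * d2 ** 0.5)
            fx, fy, fz = dx * s, dy * s, dz * s
            forces[i][0] -= fx; forces[i][1] -= fy; forces[i][2] -= fz
            forces[j][0] += fx; forces[j][1] += fy; forces[j][2] += fz
    return forces
```

This only halves the constant factor; combining it with the grid lookup suggested above is what changes the complexity class.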


These references helped me a lot when building a spatial hashed fluid sim that runs on the GPU:

https://developer.download.nvidia.com/presentations/2008/GDC... https://wickedengine.net/2018/05/21/scalabe-gpu-fluid-simula...

(See from page 16 on the NVIDIA presentation)


Realtime fluid sim is making lots of progress, though full 3D sims still appear to be computationally out of reach, at least for games that aren't focused exclusively on them. Here are some fun fluid sim examples from members of the Ten Minute Physics discord:

https://twitter.com/Michael_Moroz_/status/159507609967093350...

https://twitter.com/MytinoGames/status/1600278137564184579


I might have to join the Discord; that first link is extremely satisfying, and the shading on the materials in the second one does a great job of giving the feeling of the third dimension without the computational complexity. Thank you for sharing.


> At this point, there are now other non-CPU bottlenecks. Any method of drawing 700 entities, even basic wireframe?, even with a very high end GPU on a nearly 20 year old game, is significant, and there's no easy way for me to break past this.

I've run into this before - it _can_ be fast if your engine supports some form of mesh instancing and exposes it to the user.


Would it still be slow if it recalculates everything?


If what recalculates what?


With mesh instancing, you are not in control of the calculation. For a simple model that's OK, but for a complex one you really want to optimize, e.g. by not calculating anything more than 3 meters away.


My comment only referred to rendering.



