Discussion
Eniko Fox
@eniko@mastodon.gamedev.place  ·  2 days ago

My blog post about how software-rendered, depth-based occlusion culling works in Block Game is out now!

https://enikofox.com/posts/software-rendered-occlusion-culling-in-block-game/

Note! Any public or quiet public/unlisted replies to this toot will be shown as comments on the blog. If you don't want that, please use followers-only or a private mention when replying.

#GameDev #IndieDev #ProcGen

Eniko does bad things to code

Software occlusion culling in Block Game

My GPU is the integrated Radeon Vega 8 that comes with my AMD Ryzen 7 5700G CPU. I tell you this so you know that my workstation is not a graphical computing powerhouse. It is, in fact, quite weak. To its credit, my integrated GPU shows up as 48% faster on UserBenchmark than the GPU in my low-end hardware target: a laptop I bought in 2012.

That, and the fact that I want my game to run well even on a potato, is why I recently decided to try my hand at writing a software-rendered occlusion culling solution for the Block Game (working title) I'm developing, as I've always been interested in the idea. Blocks and chunks are axis-aligned cubes, which makes things easier, and block games tend to have a ton of hidden geometry in the form of underground caves. There are other ways to cull these, but those algorithms tend to be fairly complex, and this seemed like a good way to avoid that complexity and stick with something very conceptually simple.

In this post I'll be explaining the development process and the solution that I eventually landed on. If you like, you can also read the development thread I posted on Mastodon and Bluesky.

Before I start, though, I'd like to say that this came out quite well, better than I expected. It runs in half a frame at 60 FPS or less (threaded, of course) and generally culls at least 50% of the chunks that survive frustum culling. Above ground, looking straight ahead at the horizon, it'll cull between 50 and 60% of chunks, but indoors and below ground in caves it can cull upwards of 95% of chunks, resulting in frame rates of 400+ FPS even on my weak system. All around a resounding success, though it has some cases where it breaks down, which I'll touch on at the very end of this post.

[Figure: comparison of depth occlusion culling off (left) vs. on (right).]
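The core idea described above can be sketched in a few lines: rasterize big occluders into a small CPU-side depth buffer, then cull a chunk only if every pixel of its screen-space bounding rectangle is covered by something strictly nearer. This is a minimal illustration under those assumptions, not Eniko's actual implementation; the buffer size, rectangle-only rasterization, and all names are made up:

```c
#include <float.h>

#define W 64            /* tiny software depth buffer, e.g. 64x32 */
#define H 32

static float depth_buf[H][W];

/* Clear every pixel to "infinitely far". */
static void clear_depth(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            depth_buf[y][x] = FLT_MAX;
}

/* Rasterize an occluder's screen-space rectangle: a standard depth
 * test that keeps the nearest value per pixel. */
static void draw_occluder(int x0, int y0, int x1, int y1, float z) {
    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++)
            if (z < depth_buf[y][x])
                depth_buf[y][x] = z;
}

/* A chunk is culled only if EVERY pixel of its screen-space bounding
 * rect is covered by geometry strictly nearer than the chunk's
 * nearest point; one uncovered pixel means it might be visible. */
static int chunk_is_occluded(int x0, int y0, int x1, int y1, float z_near) {
    for (int y = y0; y < y1; y++)
        for (int x = x0; x < x1; x++)
            if (depth_buf[y][x] >= z_near)
                return 0; /* hole or farther geometry: draw the chunk */
    return 1;
}
```

The conservative "every pixel" test is what makes this safe: a single hole in the occluder cover keeps the chunk alive, so false culls are avoided at the cost of some missed culls.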
Cat 🐈🥗 (D.Burch) :paw:⁠:paw:
@catsalad@infosec.exchange replied  ·  2 days ago

I like cats :3

Cat 🐈🥗 (D.Burch) :paw:⁠:paw:
@catsalad@infosec.exchange replied  ·  2 days ago

Yay! I'm on Eniko's blog :3

Nazo
@nazokiyoubinbou@urusai.social replied  ·  2 days ago

@catsalad What's this you say? It can't be true!

Joshua Barretto
@jsbarretto@social.coop replied  ·  2 days ago

@eniko > Realizing that nearby things are big and far away things are small (a truly revolutionary insight, I know, please applaud)

Tbf, it took humans a surprisingly long time to understand perspective projection well enough to start using it artistically, and there's a theory that ideas like "far = small" are an entirely socially constructed conclusion, and that pre-pinhole-camera societies mostly did not relate distance and scale to one another at all.

Great blog post, loads of fun details, thanks!

ianh
@ianh@mastodon.social replied  ·  2 days ago

@eniko I wonder if it's possible to shrink the cubes by 1 px only in the directions where there is no adjacent cube (to solve the chunk gap issue)

Joshua Barretto
@jsbarretto@social.coop replied  ·  2 days ago

@ianh @eniko Because it's software rendering, you could easily mark each pixel as 'interior' vs 'edge' and have the rule be that an edge pixel is only considered to be occluding if it's fully surrounded by other edge or interior pixels with at least the required occlusion depth. I think this would avoid the edge cases that result in overzealous culling at higher resolutions and remove the need for the 'inset' at all.

Eniko Fox
@eniko@mastodon.gamedev.place replied  ·  2 days ago

@jsbarretto @ianh only doing the inset where the subchunks aren't adjacent to another block is interesting

I don’t think merely marking edges as edges would work, though? You really need the appropriate depth value there to do the checks, unless you simply decide edge depth values always pass during the occlusion check, which would make things worse.

Joshua Barretto
@jsbarretto@social.coop replied  ·  yesterday

@eniko @ianh You keep the depth value; you just mark the pixel with an 'is edge' flag, and the burden of proof for "does this pixel occlude this thing?" is greater: it requires occlusion from all neighbouring pixels to return "true" too. This means that on the edges of objects you don't end up accidentally culling far objects that are just barely visible at the higher resolution, but it also means 'watertight' surfaces (like a flat region of terrain composed of several chunks) always occlude their background.
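The rule being proposed here can be sketched directly: keep the normal per-pixel depth comparison, but when a pixel carries an 'is edge' flag, require all eight neighbours to pass the same comparison before it counts as occluding. This is a toy illustration of that suggestion only, not code from the game; the buffer size and all names are made up:

```c
#define W 8
#define H 8

static float depth[H][W];        /* software depth buffer */
static unsigned char edge[H][W]; /* 1 if the pixel lies on a silhouette */

/* Plain per-pixel test: the pixel occludes anything farther than it. */
static int pixel_occludes(int x, int y, float z) {
    return depth[y][x] < z;
}

/* Suggested rule: an edge pixel only counts as occluding if all of
 * its neighbours occlude too, so geometry peeking just past a
 * silhouette is never culled. Interior pixels behave as usual. */
static int pixel_occludes_safe(int x, int y, float z) {
    if (!pixel_occludes(x, y, z))
        return 0;
    if (!edge[y][x])
        return 1;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= W || ny >= H)
                return 0; /* off-buffer neighbour: be conservative */
            if (!pixel_occludes(nx, ny, z))
                return 0;
        }
    return 1;
}
```

On a watertight surface every edge pixel's neighbours also occlude, so nothing is lost there; only at true silhouettes does the stricter test kick in and refuse to cull.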

