Fog of War Part 2
Welcome, Pioneers, to a brand new edition of Weekly Apart, the blog series that brings you the latest news on all things Dawn Apart (apologies for skipping last week's post, but we had our plates full with dev work). Today we want to revisit the Fog of War system we introduced a few weeks ago. We were finally able to put our heads in the clouds and wanted to share the details of how we got it (mostly) done!
Before we start, as always a quick reminder to join our Discord server (the good folks on there will get exclusive access to our demo before anyone else) and follow us on our socials. And of course tell your fellow base building/automation/colony sim enthusiasts to throw Dawn Apart onto their wishlists. In the last week we continued to climb the SteamDB charts well into the top 500 of most wishlisted games but any additional support is much appreciated!
[h3]Fog of War: A Tale As Old As 1896[/h3]
Previously we talked about implementing a fog of war, and over the last couple of weeks we finally got around to it. Our end goal was to have our pioneers and buildings clear away the clouds that block the player’s vision, revealing more of the world. It’s easily the biggest visual change we’ve made since we added water to our world gen system. Being procedurally generated also brought some challenges: the world is potentially infinite, and the fog of war has to work and persist across all of it. To the surprise of absolutely no one, we went with a chunk-based approach, just like our terrain: as the camera moves around we create new entities, each covering a small portion of the map. Currently each fog of war chunk contains 8x8 tiles of 2.5m x 2.5m, and each tile just stores its current state, which boils down to three possibilities:
- Undiscovered
- Discovered but not currently seen
- Active, i.e. currently being seen
For buildings and pioneers we grab all tiles within their vision range and mark them as currently seen by setting a specific bit, then clear those bits at the start of the next frame. Strictly speaking each tile only needs 2 bits for its state, although we use these three:
- Has this tile ever been seen
- Is this tile currently seen by a pioneer
- Is this tile currently seen by a building
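These per-tile bits could be sketched like this (a Python illustration only; the game itself runs on Unity/DOTS, and all names here are ours):

```python
from enum import IntFlag

class Tile(IntFlag):
    EVER_SEEN        = 1 << 0  # persists once set: the tile has been discovered
    SEEN_BY_PIONEER  = 1 << 1  # cleared at the start of every frame
    SEEN_BY_BUILDING = 1 << 2  # cleared at the start of every frame

# the two bits that get wiped each frame
TRANSIENT = Tile.SEEN_BY_PIONEER | Tile.SEEN_BY_BUILDING

def mark_seen(state, by_building):
    # a pioneer or building in range marks the tile as currently seen
    seen = Tile.SEEN_BY_BUILDING if by_building else Tile.SEEN_BY_PIONEER
    return state | Tile.EVER_SEEN | seen

def clear_frame(state):
    # start of the next frame: drop the per-frame bits, keep discovery
    return state & ~TRANSIENT
```

A tile with no bits set is undiscovered; only `EVER_SEEN` set means discovered but not currently seen; either of the other bits means actively seen.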
We’ll end up doing something a little smarter for static vision-granting entities like buildings, but for now it’s one of the very few todos we’ve got.
Once the fog of war data has been resolved for the frame, we get it ready for rendering. Earlier in the frame we already load in and disable fog of war entities based on the camera, so we know which ones are currently visible; we take all of those and format their data for our shader. In the end the fog of war data, which is spread across numerous chunks, needs to make its way into a single texture. We solved this by finding the entity with the smallest position among the currently rendered ones and using that position as an origin. From there we can figure out which indices in the buffer of pixel data each entity writes to. This runs as a fully Bursted parallel job, since each entity is guaranteed to write to a unique span of indices. Once that’s done we end up with a texture looking like this:
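The packing step above can be sketched in plain Python (sequential here for clarity; because every chunk writes a disjoint block of the texture, the real version can safely run as a parallel job):

```python
CHUNK = 8  # tiles per fog-of-war chunk side, from the post

def pack_chunks(chunks):
    """chunks: {(cx, cz): CHUNK x CHUNK rows of tile states} for the
    currently rendered fog chunks. Returns (origin, texture)."""
    # the chunk with the smallest coordinates becomes the origin
    ox = min(cx for cx, _ in chunks)
    oz = min(cz for _, cz in chunks)
    w = (max(cx for cx, _ in chunks) - ox + 1) * CHUNK
    h = (max(cz for _, cz in chunks) - oz + 1) * CHUNK
    tex = [[0] * w for _ in range(h)]
    for (cx, cz), tiles in chunks.items():
        # each chunk writes to a unique block of texels
        x0, z0 = (cx - ox) * CHUNK, (cz - oz) * CHUNK
        for z in range(CHUNK):
            for x in range(CHUNK):
                tex[z0 + z][x0 + x] = tiles[z][x]
    return (ox, oz), tex
```

Shader-side code then only needs the origin and the texture to locate any tile.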

Not very good. To smooth it out a bit, we first upscale the image and then run a blur over it. Riot Games has a great post with a bit more information on that process here: https://technology.riotgames.com/news/story-fog-and-war. After smoothing we end up with this:
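The post doesn't spell out the exact upscale factor or blur kernel, so as an illustrative sketch only: a 2x-per-axis nearest-neighbour upscale followed by a 3x3 box blur, on an image stored as a list of rows.

```python
def upscale2x(img):
    # duplicate every pixel horizontally and every row vertically
    out = []
    for row in img:
        r = [v for v in row for _ in range(2)]
        out.append(r)
        out.append(list(r))
    return out

def box_blur(img):
    # average each pixel with its 3x3 neighbourhood (clamped at edges)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

In a real renderer both passes would of course run on the GPU, typically as a separable blur.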

4x the resolution and much blurrier... It’s perfect!
To actually apply the fog of war we do a full-screen custom pass where we reconstruct each pixel’s world position from the depth buffer, which ends up looking like this beautiful mess of colors:

Not especially helpful for anyone with eyes who might be looking at it, but it does fill out this post quite nicely. With that information we can figure out which tile each pixel belongs to (again based on the lowest rendered fog of war entity’s position) and do a lookup into the texture. For pixels obscured by fog we then sample a cloud texture. Below is an initial test where we sampled the clouds directly using the world position. It’s a bit hard to see, but it has a major drawback:
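The world-position-to-texel lookup amounts to a simple offset-and-scale; a sketch under our own naming (the 2.5m tile size is from the post, everything else is illustrative):

```python
TILE_SIZE = 2.5  # metres per fog tile, from the post

def fog_uv(world_x, world_z, origin, tex_w, tex_h):
    """Map a reconstructed world position to normalized UVs in the
    packed fog texture. origin: world-space position of the lowest
    rendered fog chunk; tex_w/tex_h: texture size in tiles."""
    u = (world_x - origin[0]) / (TILE_SIZE * tex_w)
    v = (world_z - origin[1]) / (TILE_SIZE * tex_h)
    return u, v
```

The shader samples the fog texture at that UV and, where the tile is undiscovered, samples the cloud texture instead.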

The problem is that wherever there are hills you end up sampling the same UV coordinates in the texture, since the x and z position of the world doesn’t change with height. To solve this we project the world position onto an arbitrary plane between the camera and the world and use that position for the texture sampling. That ends up looking like this:
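One way to realise that projection is a ray-plane intersection: cast the ray from the camera through the reconstructed world position and sample where it hits the chosen plane, so the sample position varies even where x and z don't. A minimal sketch (our own names, not the game's code):

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_to_plane(camera, world, plane_point, plane_normal):
    # intersect the ray camera -> world with the plane; assumes the
    # ray is not parallel to the plane (denominator nonzero)
    d = sub(world, camera)
    t = dot(sub(plane_point, camera), plane_normal) / dot(d, plane_normal)
    return tuple(c + t * di for c, di in zip(camera, d))
```

Because the hit point slides along the plane as the view ray changes, hillsides no longer collapse onto a single UV coordinate.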

Here is another comparison shot that shows off the different sampling methods:


And here is a bird’s-eye view of the same scene that brings everything above together:

We will still be iterating on the visual aspect as polishing is sort of an endless task, but for now it’s a really cool system that was a ton of fun to design and implement. See you next week when we plan to introduce our workbench system!