
Hadoom Developer Log

<2014-12-15 Mon>

Spent most of the day continuing to implement the ideas in the paper “Exponential Soft Shadow Mapping”, though I didn’t reach the desired solution. I spent a lot of time trying to successfully render into an integer texture. The following checklist should help me avoid these problems in the future (a minimal shader-side sketch follows the list):

  1. Create the texture with GL_RG32UI (or another appropriate integer texture).
  2. Make sure to output using the correct type. Note that vectors have their own types too! vec2 is a vector of floats - the aforementioned texture should be used with a uvec2.
  3. Make sure to use the correct type of sampler. There are samplers for each type - GL_RG32UI should be sampled with usampler2D.
  4. Make sure to sample with the generic texture() function.
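
To make that checklist concrete, here is a minimal sketch of the shader side under the GL_RG32UI assumption; the names (outDepth, shadowMap, uv) are mine, not the actual Hadoom shaders.

  // Depth-pass fragment shader: the output type must match the integer
  // format, so a GL_RG32UI attachment wants a uvec2, not a vec2.
  #version 330 core

  out uvec2 outDepth;

  void main()
  {
    outDepth = uvec2(0u, 0u); // placeholder values
  }

  // Reading it back in the lighting shader (shown as comments so this file
  // stays a single valid shader):
  //   uniform usampler2D shadowMap;              // integer format => usampler2D
  //   uvec2 stored = texture(shadowMap, uv).rg;  // plain texture(), no special call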

To store floating point values in an integer texture, we can either

  1. Use floatBitsToInt – essentially a “reinterpreting” cast.
  2. Scale the float (assumed to be in the range [0, 1]) by 2^n, where n is the number of bits of precision required.

WARNING! The expression 2^n in GLSL does not mean what you might think it means. ^ is the operator for bitwise XOR. We should use exp2 to raise 2 to an arbitrary power. However, there was another problem - scaling by exp2(16) seemed to be ok, but scaling by exp2(17) was not. It seems that we have to use a double rather than a float, or Strange Things happen. I should revisit this at some point to form a better understanding of what’s going wrong.
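
A small sketch of the pitfall and the workaround that seems to behave, with made-up names; the switch to a double for the intermediate arithmetic is just what I observed helping, not something I can fully justify yet.

  #version 400 core            // doubles need OpenGL 4.0 / ARB_gpu_shader_fp64

  in float esmNormalized;      // assumed to already be in [0, 1]
  out uvec2 outDepth;

  void main()
  {
    // float bad = 2 ^ 17;     // WRONG: ^ is bitwise xor in GLSL
    double scale = exp2(17.0); // exp2 is "two to the power n"
    outDepth = uvec2(uint(round(esmNormalized * scale)));
  }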

Past that hurdle, I was able to successfully store various depths in my integer depth texture. I believe, from reading the paper, that I should be storing exp(cz) in the texture, but I am only storing my (linear) z value:

fragmentdepth = ivec4(uint(round(esmNormalized * scale)));

It’s unclear to me how you store exp(cz) in the integer texture and later actually do something with it, as you have to scale it to [0, 1]. The scaling could be reversed, but I think I’m missing something here.

I didn’t get to make a start with their idea of “tiled” summed area tables, as I spent most of the time trying to work out how the blocker depth estimate works. I found another paper with a better discussion of this - “Real-Time, All-Frequency Shadows in Dynamic Scenes”. That paper led me on to reading about “Convolution Shadow Maps”, but importantly it has a good discussion of estimating the blocker depth.

<2014-12-16 Tue>

Started the day by wanting to look at computing the average blocker distance. Reading the “Real-Time, All-Frequency Shadows” paper again last night has made the computation clearer - you essentially perform the binary shadow test function and multiply the result against the depth of the point you are testing. If you average all of these, you get the average depth of blockers at a point. The use of the shadow function multiplied by the depth means we can use our same approximation to the shadow function, which means we can move the convolution to pre-filtering, provided we multiply the bias (exp(cz)) by the depth (so we store exp(cz)z).
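
A brute-force version of that description, as I understand it; shadowMap, the kernel size and the fallback when nothing blocks are all my own choices.

  uniform sampler2D shadowMap;   // linear light-space depth, for now

  // Binary shadow test multiplied by the stored depth, averaged over the
  // kernel; dividing by the number of blockers is my reading of
  // "average all of these".
  float averageBlockerDepth(vec2 uv, float receiverDepth, float searchRadius)
  {
    float sum   = 0.0;
    float count = 0.0;
    for (int x = -3; x <= 3; ++x)
      for (int y = -3; y <= 3; ++y)
      {
        float d       = texture(shadowMap, uv + vec2(x, y) * searchRadius / 3.0).r;
        float blocked = d < receiverDepth ? 1.0 : 0.0;
        sum   += blocked * d;
        count += blocked;
      }
    return count > 0.0 ? sum / count : receiverDepth;   // no blockers found
  }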

However, this means that my depth shader should be outputting exp(cz) and scaling it, which it doesn’t currently do. I started by looking into that, but I haven’t managed to get anything satisfactory yet.

Another oddity came up during debugging - if I shade only the depth as seen by the light, for some reason the column appears below the column as seen by the camera. This doesn’t seem correct to me, as my understanding is that we are projecting the depth the light sees onto the scene, so the column depth should be projected directly onto the column, and not the floor below it. I’d like to verify whether this is a bug or correct behaviour before moving on.

<2014-12-16 Tue 11:07>

I solved the last problem mentioned above. While I thought that the projection was wrong, it turns out there isn’t a problem there at all. The problem was this one missing line from when I set up the light’s framebuffer object:

glFramebufferRenderbuffer GL_FRAMEBUFFER GL_DEPTH_ATTACHMENT GL_RENDERBUFFER rbo

That’s right - I built the framebuffer object that will be used to render the light, but I forgot to attach the render buffer that will be used to store depth information! Thus the weird projection was actually due to a badly drawn light-depth texture, as I wasn’t using a depth buffer while rendering it. Oops.

<2014-12-16 Tue 15:23>

I decided to just crack on with getting a variable penumbra width without doing any prefiltering at all, just to get an understanding of what it is I’m actually trying to do. Using brute force box filters, I finally have the results I’m looking for! I begin by finding the average blocker depth - sampling a fixed size kernel around the sample point to find the depth of other points that would cast shadows. Once I know the average depth of blockers around this point, I can use this to determine the size of the box filter to use during percentage closer filtering.
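
Roughly the shape of the brute-force pipeline, reusing an averageBlockerDepth helper like the sketch from the 16th; the penumbra formula and the constants are illustrative rather than my exact shader.

  float softShadow(vec2 uv, float receiverDepth, float lightSize)
  {
    // 1. Fixed-size search for the average depth of blockers around the point.
    float blockerDepth = averageBlockerDepth(uv, receiverDepth, lightSize);

    // 2. The further the receiver is behind its blockers, the wider the filter.
    float penumbra = lightSize * (receiverDepth - blockerDepth) / blockerDepth;

    // 3. Brute-force box-filtered PCF over the penumbra - the O(n^2) part that
    //    makes large area lights so expensive.
    float lit = 0.0;
    for (int x = -3; x <= 3; ++x)
      for (int y = -3; y <= 3; ++y)
      {
        float d = texture(shadowMap, uv + vec2(x, y) * penumbra / 3.0).r;
        lit += d < receiverDepth ? 0.0 : 1.0;
      }
    return lit / 49.0;
  }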

Currently I’ve got this working with fairly small area lights and I am getting the results I’d expect. As the size of the area light increases, naturally the framerate drops rapidly - but this is to be expected as we’re using O(n^2) filters.

In the next few days, I should have an understanding of how to store the exponential data in my shadow map, at which point I can start looking at building a summed area table and getting back to constant time blurs.

<2014-12-18 Thu>

I’ve got to a point where I can reliably get results from the “percentage-closer soft shadows” work. I found the original shader online [1], which has let me understand more about the parameters in the implementation. I found out I was many orders of magnitude out on my parameters, which explains why I was struggling to reconcile my results with the paper. I am convinced the code has a bug though. Where they estimate the blocker depth search filter size with

lightSize / receiverDepth

I am certain that the correct formula would be

lightSize * receiverDepth

By my reasoning, the potential area of blockers around a point is directly proportional to its distance from the light. As we move away from the (area) light source, the frustum formed by the point we are shading and the area light intersects with less of the fixed shadow map plane. Taking the limit of distance from the light, eventually the area light will be coincident with the shadow map plane, thus we would desire a search size of 1 - the entire plane. After coming back to this formula over and over again, I’m convinced it’s the right one to use - and it seems to give pleasing results.
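
For the record, my reading of that argument as code (receiverDepth assumed to be normalised light-space depth, lightSize in shadow-map UV units):

  // Blocker search region grows with the receiver's distance from the light -
  // my correction, not what the shader I found actually does.
  float blockerSearchSize(float lightSize, float receiverDepth)
  {
    return lightSize * receiverDepth;   // rather than lightSize / receiverDepth
  }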

I’ve understood more about what it means to filter shadow test results. Performing a bilinear filter on the shadow map tests has to be done after the shadow map test - so we sample 4 texels around our point, perform shadow map tests on each of them and then bilinearly interpolate. This smooths the edges, but with a big PCF kernel there is still very clear banding. I shouldn’t spend any more time on this though, because having a non-linear spatial blur is never the goal anyway.
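
A minimal sketch of that test-then-filter ordering, with hypothetical names and ignoring the half-texel offset to texel centres; texelSize would be one over the shadow map resolution.

  // Shadow-test the four surrounding texels first, then bilinearly blend the
  // binary results; filtering the depths and testing once would be wrong.
  float bilinearShadowTest(vec2 uv, float receiverDepth, vec2 texelSize)
  {
    vec2 base = floor(uv / texelSize) * texelSize;
    vec2 f    = (uv - base) / texelSize;   // fractional position between texels

    float s00 = texture(shadowMap, base).r                          < receiverDepth ? 0.0 : 1.0;
    float s10 = texture(shadowMap, base + vec2(texelSize.x, 0.0)).r < receiverDepth ? 0.0 : 1.0;
    float s01 = texture(shadowMap, base + vec2(0.0, texelSize.y)).r < receiverDepth ? 0.0 : 1.0;
    float s11 = texture(shadowMap, base + texelSize).r              < receiverDepth ? 0.0 : 1.0;

    return mix(mix(s00, s10, f.x), mix(s01, s11, f.x), f.y);
  }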

Next I want to move all of the shadow tests to use the exponential shadow mapping approximation. Then I’ll factor out the shadow map test from the convolution (as per “Convolution Shadow Mapping”). At this point, I might be able to start moving some of the calculation back to the light shader.

One other nicety - I finally implemented the ability to refresh shaders while Hadoom is running. This makes a huge difference to my productivity. I might even go all the way to using inotify to automatically refresh the shaders.

[1]: http://www.gamedev.net/topic/486151-soft-shadow-pcss/

<2014-12-19 Fri>

Following on from yesterday’s plan, I started to move the tests from a binary shadow test to the exponential approximation. It has been… a learning process. My initial attempt was to simply take the mean exp(c * blockerDepth) value, and then take the product with exp(-c * receiverDepth). Simple, right? Unfortunately, it glosses over the whole section on failure classification in the original research.

It turns out I hadn’t really given much thought to the assumption that the whole theory is built on – receiverDepth >= lightDepth. That is to say, light never travels further than the point that the view camera is seeing. With an infinite resolution for the shadow map and no filtering, this would make sense - casting a ray from the shading fragment to the light source would either intersect a blocker (so the assumption receiverDepth > lightDepth holds), or the light-to-fragment and light-to-blocker depths are equal. However, in practice we don’t have an infinite resolution to work with, and the paper clearly shows how this assumption will be violated.

A violation leads to the final shadow result tending to infinity. This gets even worse when you incorporate exp(c * blockerDepth) into a blur kernel, as a single bad test blows the whole result up to infinity. Clamping prevents over-brightening, but you then lose out on gradual blurs. Bummer.

The paper has two classifiers for identifying this problem. I’ve gone with their thresholding classification, and once you use this, the results are almost back to the “reference” results from brute force PCF with the binary shadow test.
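
Roughly what the classified test looks like per fragment; c, threshold and the sentinel are my own choices, and the caller decides what the fallback actually is.

  uniform float c;           // ESM sharpness constant
  uniform float threshold;   // tolerance before we stop trusting the approximation

  // filteredExpBlocker is the filtered mean of exp(c * blockerDepth) over the
  // kernel; receiverDepth is the fragment's light-space depth.
  float esmShadow(float filteredExpBlocker, float receiverDepth)
  {
    float s = filteredExpBlocker * exp(-c * receiverDepth);

    // The derivation assumes blockerDepth <= receiverDepth; when that is
    // violated, s shoots past 1, so classify the fragment and let the caller
    // fall back (to PCF, say) rather than just clamping the damage.
    if (s > 1.0 + threshold)
      return -1.0;            // sentinel: use the fallback path

    return clamp(s, 0.0, 1.0);
  }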

I’m currently battling with one oddity around the edge of the penumbra, where the shadow test fails. The paper suggests that one can fall back to PCF here, but at this point my results are very different from the ESM test, so there’s a clear discontinuity. Hopefully I can solve this soon.

I’m almost at a stage where I can move exp(c * blockerDepth) to the shadow map. The only place where I rely on the linear depth from the shadow map is in the blocker depth estimation, so that is my next focus.

<2014-12-20 Sat>

At long last I reached the moment I had been aiming for - percentage closer shadow mapping driven entirely by summed area tables! I spent a bit of time working with getting the exponential terms into the shadow map texture, mostly taking longer than necessary because I wasn’t being careful (exp(c * texture(...)) - for example!). Once I got that out of the way, I started to think about what to do with the SAT.

I convinced myself a while ago that an integer texture was the way to go, but I never actually put much work into the floating point texture. Once I let go of that fear, I pulled out the summed area table code I had written earlier, dropped it in and things almost entirely work!
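
The pay-off is that a box average becomes four taps whatever the filter size. A sketch of the lookup, assuming satTexture holds inclusive prefix sums of the floating point shadow map, and ignoring the border case where the rectangle touches texel 0:

  uniform sampler2D satTexture;   // summed area table of the shadow map

  // Mean over the axis-aligned texel rectangle [lo, hi], in constant time.
  float boxAverage(ivec2 lo, ivec2 hi)
  {
    float a = texelFetch(satTexture, hi, 0).r;
    float b = texelFetch(satTexture, ivec2(lo.x - 1, hi.y), 0).r;
    float c = texelFetch(satTexture, ivec2(hi.x, lo.y - 1), 0).r;
    float d = texelFetch(satTexture, ivec2(lo.x - 1, lo.y - 1), 0).r;

    float area = float((hi.x - lo.x + 1) * (hi.y - lo.y + 1));
    return (a - b - c + d) / area;
  }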

I’m now encountering noise and erroneous results in some parts of the final shadowing, but it looks solvable. The “Fast local tone-mapping” paper suggests building the summed area table around the mean, which might recover precision. Another option is to try the “distribute precision” hack from before, and to use two channels.

Anyway, I’m very happy that, at long last, I’m at the first real goal. Exciting.

<2014-12-22 Mon>

I’ve implemented the idea of subtracting the mean from the image first, with mostly better results. I spent a while trying to figure out the best way to do it, in the end using glGenerateMipmap and sampling from the highest mipmap level. The idea is simple, but it took me a while to learn how to achieve it in practice. I went down a small dead end while trying to perform a controlled experiment to sample the average texel value (with glGetTexImage), but failed to control the input texture.
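
For reference, the mip-chain trick as I’m using it; this assumes the driver’s mipmap generation is a plain box filter, so the 1x1 level really is the mean.

  uniform sampler2D depthMap;   // floating point shadow map, mipmaps generated

  // After glGenerateMipmap, the top (1x1) level holds the average texel value,
  // which gets subtracted from every texel before the SAT is built.
  float meanDepth()
  {
    ivec2 size     = textureSize(depthMap, 0);
    float maxLevel = log2(float(max(size.x, size.y)));   // index of the 1x1 level
    return textureLod(depthMap, vec2(0.5), maxLevel).r;
  }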

With that finished, I am correctly calculating the mean and offsetting by it, and the results are better, but they are still not really usable for large area lights. The moment the light size is increased, the scene rapidly becomes noise.

I think I’m probably going to have to go with summed area tiles, or some other derivative scheme.

<2015-01-01 Thu>

Lots of progress to report! I decided that shadow mapping was beginning to just get in the way of actual progress, so I’ve just stopped focussing on that entirely. I’m not sure what state it is in, but the knowledge is there to get it how I want, so I can always revisit that. Dropping this allowed me to consider what should come next.

First up, level representations. I revived my old `Level.hs` code, where I was experimenting with PHOAS and the work described in “Abstract Syntax Graphs for DSLs”. I liked this approach, and built a compiler from these world specifications to OpenGL resources. I’m not really doing any rewriting or anything else you could do with a DSL, but it’s been a fairly pleasant experience otherwise.

With the level now described by the DSL, I finally started working on lines between sectors. I now have a way to join two sectors with a line segment that has no main wall texture, but does have an upper/lower texture. This allows joining sectors of different floor or ceiling heights. I’ve also now got the ability to have the mid texture span less than the entire size of the wall, which is allowing me to build doors.

Once I got the DSL working and had the ability to join varying height sectors, I had some fun defining a new test level. I found a really nice high resolution Doom texture pack, so I threw something together that features walls and a Doom bigdoor. It’s quite exciting seeing this pet project actually look like Doom.

Today I decided that I really needed to get on with collision detection, so I rolled my sleeves up and got on with it. I refined the line-segment intersection test, can now build 2D BSP trees, can compile levels to BSP trees, and have integrated BSP tree collision testing into the FRP physics network. It still needs a proper collision response, as right now you are simply denied the ability to move if you touch a wall.

I’d like to build some more complex levels soon to do things like water rendering, but that’s really going to need my level editor to work. I think I’m going to give up on a scrollable widget in GTK and just use a mouse button to drag the view around.