Is there a fake depth buffer anti-aliasing algorithm?

I recently applied the FXAA algorithm in my OpenGL application. I don't fully understand the algorithm, but I know it uses the contrast of the final image to selectively apply blur. As a post-processing effect that makes sense. But since I am using deferred shading in my application, I already have a depth texture of the scene. Using that, it should be much easier and more accurate to find the edges to blur.

So, is there a known anti-aliasing algorithm that uses a depth texture instead of the final image to find the edges? By fake, I mean a pixel-based anti-aliasing algorithm, not a vertex-based one.


1 answer


After some research, it turned out that my idea is already widely used in deferred renderers. I decided to post this answer because I came up with my own implementation that I want to share with the community.

Blur is applied to a pixel based on changes in the depth gradient and on changes in the direction of the normals.

// GLSL fragment shader

#version 330

in vec2 coord;
out vec4 image;

uniform sampler2D image_tex;     // rendered scene color
uniform sampler2D position_tex;  // view-space positions from the G-buffer
uniform sampler2D normal_tex;    // surface normals from the G-buffer
uniform vec2 frameBufSize;

// Sample the depth at a one-pixel offset from the current fragment.
void depth(out float value, in vec2 offset)
{
    value = texture(position_tex, coord + offset / frameBufSize).z / 1000.0;
}

// Sample the normal at a one-pixel offset from the current fragment.
void normal(out vec3 value, in vec2 offset)
{
    value = texture(normal_tex, coord + offset / frameBufSize).xyz;
}

void main()
{
    // depth: compare the center depth against the average of its
    // neighbors; a flat or evenly sloped surface yields ~0, while a
    // depth discontinuity yields a large value

    float dc, dn, ds, de, dw;
    depth(dc, vec2( 0,  0));
    depth(dn, vec2( 0, +1));
    depth(ds, vec2( 0, -1));
    depth(de, vec2(+1,  0));
    depth(dw, vec2(-1,  0));

    float dvertical   = abs(dc - ((dn + ds) / 2.0));
    float dhorizontal = abs(dc - ((de + dw) / 2.0));
    float damount = 1000.0 * (dvertical + dhorizontal);

    // normals: the same test on the normal direction catches creases
    // where the depth alone is nearly continuous

    vec3 nc, nn, ns, ne, nw;
    normal(nc, vec2( 0,  0));
    normal(nn, vec2( 0, +1));
    normal(ns, vec2( 0, -1));
    normal(ne, vec2(+1,  0));
    normal(nw, vec2(-1,  0));

    float nvertical   = dot(vec3(1.0), abs(nc - ((nn + ns) / 2.0)));
    float nhorizontal = dot(vec3(1.0), abs(nc - ((ne + nw) / 2.0)));
    float namount = 50.0 * (nvertical + nhorizontal);

    // blur: simple 3x3 box filter around the current fragment

    const int radius = 1;
    vec3 blur = vec3(0.0);
    int n = 0;
    for (int u = -radius; u <= +radius; ++u)
    for (int v = -radius; v <= +radius; ++v)
    {
        blur += texture(image_tex, coord + vec2(u, v) / frameBufSize).rgb;
        n++;
    }
    blur /= float(n);

    // result: blend the sharp color toward the blurred color by the
    // averaged edge amount, capped so edges are never fully smeared

    float amount = mix(damount, namount, 0.5);
    vec3 color = texture(image_tex, coord).rgb;
    image = vec4(mix(color, blur, min(amount, 0.75)), 1.0);
}
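The depth test above is essentially a discrete second-derivative edge detector: on a flat or evenly sloped surface the center depth equals the average of its neighbors, so the amount is near zero, while a depth discontinuity produces a large amount. A minimal CPU sketch of the same metric, using a NumPy grid as a stand-in for the depth texture (the function name and toy data are mine, not part of the shader):

```python
import numpy as np

def edge_amount(depth, y, x, scale=1000.0):
    """Shader's depth metric: |dc - (dn+ds)/2| + |dc - (de+dw)/2|, scaled."""
    dc = depth[y, x]
    dvertical   = abs(dc - (depth[y - 1, x] + depth[y + 1, x]) / 2.0)
    dhorizontal = abs(dc - (depth[y, x - 1] + depth[y, x + 1]) / 2.0)
    return scale * (dvertical + dhorizontal)

# A plane viewed at an angle: depth changes linearly, so the metric
# stays near zero and no blur is applied.
plane = np.outer(np.ones(5), np.linspace(0.1, 0.5, 5))
print(edge_amount(plane, 2, 2))  # near zero

# A depth discontinuity between two surfaces: large metric, blur applied.
step = np.where(np.arange(5) < 3, 0.1, 0.5) * np.ones((5, 1))
print(edge_amount(step, 2, 2))   # large
```

This is why the shader also tests normals: along a crease between two faces the depth is continuous and this metric stays small, but the normal direction changes sharply.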

      

For comparison, this is a scene without any anti-aliasing.



scene without anti-aliasing

This is the result of applying anti-aliasing.

anti-aliasing scene

You may need to view the images at full resolution to appreciate the effect. In my opinion, the result is good enough for such a simple implementation. Best of all, there are almost no crawling jagged-edge artifacts when the camera moves.
