How do I write a "Texture Breathing" shader in GLSL?

I am working on a small 2D video game, and while searching the Internet for something completely unrelated, I found this video: http://vimeo.com/67886447. I really like it and want to recreate the effect.

The author describes the process:

The effect is created by calculating the vector derivative of the source, then applying iterative advection along the resulting axes. A secondary scalar field controls the strength and magnitude of the advection and allows for a range of interesting effects, including pulsing, waving and breathing.

As I understand it: the gradient field is calculated, then each pixel is moved in the direction, and by the magnitude, of the corresponding vector, with time acting as a scaling factor on those vectors.

So, I think I understand this process in my head, but since I'm kind of new to GLSL and shaders in general, I'm not sure how to write the code.

This is how I see the outline of the code so far:

Iterate through the image to fill a matrix with vectors produced by some sort of edge-detection algorithm, then iterate through the image once more and displace every pixel by vector magnitude * time.

A few specific questions:

Does this even work, and is performance reasonable (on an average PC, of course)?

Is it possible to use a simple edge-detection algorithm? (Check the 8 neighboring pixels, compare the differences between them, and store the largest difference as the magnitude; the vector's direction is determined by the angle toward the neighbor with the largest difference.)

How can I displace pixels? My guess is that moving a pixel would either leave a blank space where it was, or, if I cloned the pixel, there would be a lot of weird overlap and the image would come out looking bad.
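Regarding the edge-detection question: a cheaper alternative to comparing all 8 neighbors is a central-difference gradient, i.e. sample one texel left/right and up/down and subtract. A minimal GLSL sketch (the function name and the equal-weight luminance average are my own choices, not from the video):

```glsl
// Central-difference gradient of the texture's brightness.
// texel is expected to be 1.0 / resolution.
vec2 gradient(sampler2D tex, vec2 uv, vec2 texel) {
    float l = dot(texture2D(tex, uv - vec2(texel.x, 0.0)).rgb, vec3(1.0 / 3.0));
    float r = dot(texture2D(tex, uv + vec2(texel.x, 0.0)).rgb, vec3(1.0 / 3.0));
    float b = dot(texture2D(tex, uv - vec2(0.0, texel.y)).rgb, vec3(1.0 / 3.0));
    float t = dot(texture2D(tex, uv + vec2(0.0, texel.y)).rgb, vec3(1.0 / 3.0));
    // x: brightness change left-to-right; y: bottom-to-top.
    return vec2(r - l, t - b);
}
```

The resulting vector points toward increasing brightness and its length encodes edge strength, which is essentially what the 8-neighbor scheme would give, at four texture reads instead of eight.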

Edit:

I just realized that doing it in one pass would be better: calculate the vector and then immediately move the pixel. What do you think?

I guess I was wrong to think of it in terms of explicit iteration.

This is the code I got so far. It's simple and only works along the x-axis.

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
varying vec2 vTexCoord;
uniform sampler2D u_texture; // diffuse map

void main( void ) {
    // Forward difference: sample one texel to the right and subtract
    // the current texel to approximate the gradient along x.
    vec2 pos = vec2(1.0, 0.0);
    vec4 px1 = texture2D(u_texture, vTexCoord + (pos / resolution));
    vec4 dif = px1 - texture2D(u_texture, vTexCoord);

    // Displace the texture lookup along x by the summed color
    // difference, scaled by time.
    vec4 color = texture2D(u_texture,
        vTexCoord + (vec2((dif.r + dif.g + dif.b) * time, 0.0) / resolution));

    gl_FragColor = vec4(color.rgb, 1.0);
}

      

Now I just need to do the same for the other axis. The effect really looks the way it is supposed to.

I did it for the Y axis, and this is what I got. How do I fix the artifacts, so that the colors actually expand rather than just shift?

breathable thing http://rghost.ru/57475312/image.png

(The shader is applied only to the background image.)
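For reference, extending the same idea to both axes might look like the sketch below. It keeps the forward differences and the time scaling from the x-only version above; it is not the exact code behind the screenshot:

```glsl
uniform float time;
uniform vec2 resolution;
varying vec2 vTexCoord;
uniform sampler2D u_texture;

void main( void ) {
    vec2 texel = 1.0 / resolution;

    // Forward differences along x and y, as in the x-only version.
    vec4 dx = texture2D(u_texture, vTexCoord + vec2(texel.x, 0.0))
            - texture2D(u_texture, vTexCoord);
    vec4 dy = texture2D(u_texture, vTexCoord + vec2(0.0, texel.y))
            - texture2D(u_texture, vTexCoord);

    // Collapse each color difference to a scalar and build
    // a displacement vector, scaled by time.
    vec2 disp = vec2(dx.r + dx.g + dx.b, dy.r + dy.g + dy.b) * time;

    vec4 color = texture2D(u_texture, vTexCoord + disp * texel);
    gl_FragColor = vec4(color.rgb, 1.0);
}
```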



1 answer


You need to think more like a functional programmer in GLSL. You don't iterate over anything; you supply a function that, given the appropriate inputs, produces the output fragment. The GPU takes care of applying that function wherever it's needed. Barring vendor-specific extensions, the inputs must be completely disjoint from the outputs, so the inputs are effectively immutable. The GPU also dictates where you output.

So, no, you wouldn't iterate over anything. And you can't write a shader that chooses where it will output, only what it will output.



What you probably want to look into is Stam advection, which pretty much inverts the problem. For each output pixel P, you look backwards to find where the colors that contribute to it came from. Since you are free to read from any input (whereas you cannot write to arbitrary outputs), you can gather the appropriate colors.
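A sketch of that backward-looking (gather-style) advection as a fragment shader. The second texture `u_velocity` and the uniform names are assumptions for illustration, not the answerer's code:

```glsl
uniform sampler2D u_texture;   // color field being advected
uniform sampler2D u_velocity;  // velocity field, encoded in .xy
uniform vec2 resolution;
uniform float dt;              // time step
varying vec2 vTexCoord;

void main( void ) {
    // Read the velocity at this output pixel...
    vec2 vel = texture2D(u_velocity, vTexCoord).xy;
    // ...and trace backwards: the color arriving here came from
    // upstream, so gather it from there instead of scattering.
    vec2 src = vTexCoord - vel * dt / resolution;
    gl_FragColor = texture2D(u_texture, src);
}
```

Note that every fragment only reads; it never tries to write anywhere other than its own position, which is exactly the constraint described above. Bilinear texture filtering gives you interpolation at the traced-back position for free.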
