Multiple textures in one shader with ZBuffer and 2D alpha blending
I have multiple sprite sheets, and in the scene I have to display images from several different textures. Each object on a sprite sheet can have its own Z value (lower or higher than other objects). The Z values of objects from different textures often overlap, and I also use alpha blending for transparency.
All of these factors force me to sort every object submitted for rendering by its Z value (farthest objects are drawn first). And this is where the problem arises. With one draw call per texture, only the objects from a single texture can be sorted and drawn correctly at a time. Objects from the second texture may need to be drawn in front of some objects from the first, depending on their Z values. In that case I cannot sort the objects correctly before drawing, because the correct order would require interleaving objects from different textures, which a single per-texture draw call cannot do.
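The back-to-front sort described above can be sketched like this. The `Sprite` class, its fields, and the convention that a larger `z` means farther away are my assumptions, not from the question:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch: sort all sprites back-to-front regardless of which
// texture they come from, so alpha blending composites correctly.
// The Sprite class and its fields are hypothetical.
class Sprite {
    final float z;            // depth: larger = farther away (assumption)
    final int textureIndex;   // which sprite sheet this object samples from

    Sprite(float z, int textureIndex) {
        this.z = z;
        this.textureIndex = textureIndex;
    }
}

class RenderQueue {
    // Returns a new list ordered farthest-first.
    static List<Sprite> backToFront(List<Sprite> sprites) {
        List<Sprite> sorted = new ArrayList<>(sprites);
        sorted.sort(Comparator.comparingDouble((Sprite s) -> s.z).reversed());
        return sorted;
    }
}
```

Once the whole scene is in one sorted list, the draw order is correct, but consecutive sprites may reference different textures, which is exactly the problem described below.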
The only solution I have come up with so far is a fragment shader like this, which switches the sampler based on a per-vertex texture index:
public static final String fs_Image =
"precision mediump float;" +
"varying vec2 v_texCoord;" +
"varying float v_texInd;" +
"uniform sampler2D u_tex0;" +
"uniform sampler2D u_tex1;" +
"void main() {" +
" if(v_texInd == 0.0)" +
" gl_FragColor = texture2D( u_tex0, v_texCoord );" +
" else" +
" gl_FragColor = texture2D( u_tex1, v_texCoord );" +
" if(gl_FragColor.a == 0.0)" +
" discard;" +
"}";
I don't like this solution, but it lets me draw all objects in a single draw call. With more than 2 textures, though, I doubt it would remain efficient.
Another alternative would be to constrain the z values of the objects from each sprite sheet to separate ranges, so that z values from different textures never overlap; then I could sort and draw objects from multiple textures correctly across multiple draw calls.
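That range-partitioning idea could be sketched as follows: give each sprite sheet its own z band of width `BAND` and map every object's local depth into its sheet's band. The names, the band width, and the `[0, 1)` normalization are illustrative assumptions of mine:

```java
// Sketch: assign each texture a non-overlapping z band so objects from
// different sprite sheets can never interleave in depth.
// BAND and the [0, 1) localZ convention are illustrative assumptions.
class ZBands {
    static final float BAND = 10.0f;

    // localZ must lie in [0, 1); the result for texture i lies in
    // [i * BAND, (i + 1) * BAND), so bands never overlap.
    static float globalZ(int textureIndex, float localZ) {
        if (localZ < 0.0f || localZ >= 1.0f) {
            throw new IllegalArgumentException("localZ must be in [0, 1)");
        }
        return textureIndex * BAND + localZ * BAND;
    }
}
```

Note that this trades correctness constraints for flexibility: objects from different sheets can no longer overlap in depth at all, which may or may not be acceptable for the scene.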
I am wondering which of these approaches is better? Or maybe there is a better solution? I would be very grateful for any help!
As you did, first sort your objects by z-order. Then, assuming a pixel can come from up to 4 different textures and you want to render in a single pass:
- Add a vec4 uniform to your fragment shader and use its components to weight each texture lookup
- Attach a texture index to each object
- Before drawing an object, set the uniform so that all components are 0.0f except the one corresponding to the texture in use, which is set to 1.0f
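Building that selection vector on the CPU might look like this (a sketch; the class and method names are mine):

```java
// Sketch: build the value for the u_SelectedTexture uniform.
// Component i is 1.0f for the active texture, all others are 0.0f,
// so only one texture lookup survives the weighted sum in the shader.
class TextureSelect {
    static float[] selectionMask(int activeTexture, int textureCount) {
        if (activeTexture < 0 || activeTexture >= textureCount) {
            throw new IllegalArgumentException("activeTexture out of range");
        }
        float[] mask = new float[textureCount]; // zero-initialized in Java
        mask[activeTexture] = 1.0f;
        return mask;
    }
}
```

On Android the resulting array could then be uploaded with `GLES20.glUniform4fv(location, 1, mask, 0)` before the draw call, or with `glUniform4f` as shown further down.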
The fragment shader would look like this:
precision mediump float;
varying vec2 v_texCoord;
uniform vec4 u_SelectedTexture;
uniform sampler2D u_tex0;
uniform sampler2D u_tex1;
uniform sampler2D u_tex2;
uniform sampler2D u_tex3;
void main() {
    gl_FragColor = texture2D(u_tex0, v_texCoord) * u_SelectedTexture.x
                 + texture2D(u_tex1, v_texCoord) * u_SelectedTexture.y
                 + texture2D(u_tex2, v_texCoord) * u_SelectedTexture.z
                 + texture2D(u_tex3, v_texCoord) * u_SelectedTexture.w;
}
where u_SelectedTexture is the uniform I suggest you add.
In your render loop, if you want to use u_tex1 for example, set the uniform before drawing with something like:
glUniform4f(uniformLocation, 0.0f, 1.0f, 0.0f, 0.0f);
The extra texture reads are independent of each other and shouldn't have a big impact on render times.
I hope this answers your question.
Respectfully!