Does the glUniform / glVertexAttribPointer type have to match the declared type in the shader?
To what extent should the types match between GLSL code and the native code that passes the data?
For example, let's say I have this shader code:
uniform float uFloat;
uniform int uInt;
in float aFloat;
in int aInt;
and the corresponding client-side pseudocode (I'm omitting a lot of boilerplate):
glUniform1i(glUniformLocation("uFloat"), 10)
glUniform1f(glUniformLocation("uInt"), 1.414)
glBufferData(int[1, 2, 3, ...])
glVertexAttribPointer(glAttribLocation("aFloat"), 0, GL_INT)
glBufferData(float[1.1, 2.2, 3.3, ...])
glVertexAttribPointer(glAttribLocation("aInt"), 0, GL_FLOAT)
Thus, the types are consistent within the client code, but they do not match the types declared in the shader.
What I'm asking is: does the shader receive the values converted at a logical level, or reinterpreted at the bit level?
(I know I could just test this myself, but shaders are hard to debug and I've been seeing inconsistent behavior, so I'm looking for a definitive answer.)
The type parameter of glVertexAttribPointer specifies the type of the data inside the buffer. If it is an integer type, the values are automatically converted to floating point by the GPU when the vertex is fetched: either converted directly to float if normalized is GL_FALSE, or divided by the maximum value of the integer type if normalized is GL_TRUE.
However, glVertexAttribPointer only works with attributes declared as in float (even if you pass an integer buffer type), yet in your last line you use it with an in int. For that you must use glVertexAttribIPointer (note the I). Although it also has a type parameter, it does not accept floating-point formats such as GL_FLOAT; you cannot feed a floating-point buffer to an integer vertex attribute.
(There is also glVertexAttribLPointer for double attributes, but it is rarely used.)
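Putting that together, a corrected version of the question's pseudocode might look like the following sketch (keeping the question's shorthand glAttribLocation helper; the size, normalized, stride, and pointer arguments shown are assumptions for tightly packed, single-component data):

```
glBufferData(float[1.1, 2.2, 3.3, ...])
glVertexAttribPointer(glAttribLocation("aFloat"), 1, GL_FLOAT, GL_FALSE, 0, 0)

glBufferData(int[1, 2, 3, ...])
glVertexAttribIPointer(glAttribLocation("aInt"), 1, GL_INT, 0, 0)
```

Note that glVertexAttribIPointer has no normalized parameter: the integer data reaches the int attribute as-is, with no conversion.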
Uniforms are stricter: you must use the glUniform variant that matches the declared type (for example, glUniform1i for int and glUniform1f for float), otherwise you'll get a GL_INVALID_OPERATION error.
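In the question's pseudocode style, the matching uniform calls would be the following sketch; after a mismatched call, checking glGetError() should report GL_INVALID_OPERATION:

```
glUniform1f(glUniformLocation("uFloat"), 1.414)
glUniform1i(glUniformLocation("uInt"), 10)
```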