Scale-invariant geometry
I am writing a mesh editor in which manipulators are used to modify the vertices of the mesh. The goal is to display handles with a constant on-screen size that does not change with the camera and view parameters. The projection matrix is perspective. I would be grateful for ideas on how to implement scale-invariant geometry.
If I understand correctly, you want to display some markers (for example, for the editable vertices) at the same visual size regardless of the depth they are projected to.
There are two approaches for this:

- scale with depth

  Compute the perpendicular distance of the marker from the camera plane (a simple dot product) and scale the marker so it keeps the same visual size at that depth.

  So if P0 is the position of your camera and Z is the camera's viewing direction (a unit vector, usually the local z-axis), then for any marker position P compute the depth as:

  depth = dot(P-P0,Z)

  The scale now depends on the desired visual size size0 at some chosen reference depth depth0. From similar triangles we want:

  size/depth = size0/depth0
  size = size0*depth/depth0

  So make your marker size size, or scale it by depth/depth0. If you use scaling, you need to scale around your target position P, otherwise the marker will shift to the side (so translate, scale, translate back).
- compute the screen position and render without perspective

  Transform the marker position through the same chain as the graphics pipeline until you get its screen position x,y. Store that, and in the pass that renders your markers, use this screen position instead of the actual world position. For that render pass use some constant depth (distance from camera), or use a non-perspective (orthographic) projection matrix.

  See Understanding 4x4 homogeneous transformation matrices for details.
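The first approach can be sketched in Python like this (a minimal sketch; the helper names marker_scale and scale_about are my own, not from the original answer):

```python
import numpy as np

def marker_scale(P, P0, Z, depth0):
    """Scale factor that keeps a marker's visual size constant.

    P      -- marker world position
    P0     -- camera position
    Z      -- camera viewing direction (unit vector)
    depth0 -- reference depth at which the scale is 1
    """
    depth = np.dot(P - P0, Z)   # perpendicular distance to the camera plane
    return depth / depth0

def scale_about(P, s):
    """4x4 matrix that scales uniformly by s about point P.

    Equivalent to translate(-P), scale(s), translate(+P), so the
    marker stays centered on P instead of drifting sideways.
    """
    M = np.eye(4)
    M[:3, :3] *= s
    M[:3, 3] = P * (1.0 - s)
    return M

# Camera at the origin looking along +z, reference depth 5;
# a marker twice as far away gets scale 2:
P0 = np.zeros(3)
Z = np.array([0.0, 0.0, 1.0])
s = marker_scale(np.array([0.0, 0.0, 10.0]), P0, Z, 5.0)
```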
[Edit1] pixel size
For this you need the projection angles FOVx,FOVy and the screen/view resolution xs,ys. The idea is that a point at depth znear whose coordinate equals half the frustum extent projects exactly to the edge of the screen, so:

tan(FOVx/2) = (xs/2)*pixelx/znear
tan(FOVy/2) = (ys/2)*pixely/znear
---------------------------------
pixelx = 2*znear*tan(FOVx/2)/xs
pixely = 2*znear*tan(FOVy/2)/ys

where pixelx,pixely are the world-space sizes (per axis) that a single pixel covers visually at depth znear. If both sizes are the same (the pixel is square), you have everything you need. If they are not equal (the pixel is not square), you need to render the markers aligned to the screen axes, so approach #2 is better suited for that case.
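The pixel-size formulas above translate directly to code; a small sketch (the function name pixel_size is mine; it assumes FOVx, FOVy are the full angles in radians):

```python
import math

def pixel_size(znear, FOVx, FOVy, xs, ys):
    """World-space size (per axis) covered by one pixel at depth znear.

    FOVx, FOVy -- full horizontal/vertical projection angles (radians)
    xs, ys     -- viewport resolution in pixels
    """
    pixelx = 2.0 * znear * math.tan(FOVx / 2.0) / xs
    pixely = 2.0 * znear * math.tan(FOVy / 2.0) / ys
    return pixelx, pixely

# 90 degree FOV on both axes, 800x600 viewport, znear = 1;
# the pixel is not square here, so pixelx != pixely:
px, py = pixel_size(1.0, math.pi / 2, math.pi / 2, 800, 600)
```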
So if you choose depth0=znear, you can set size0 to n*pixelx and/or n*pixely to get a visual size of n pixels. Or use any depth0 and rewrite the computation as:

pixelx = 2*depth0*tan(FOVx/2)/xs
pixely = 2*depth0*tan(FOVy/2)/ys
Just to be complete:

size0x = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)
size0y = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)
-------------------------------------------------
sizex = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)*(depth/depth0)
sizey = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)*(depth/depth0)
---------------------------------------------------------------
sizex = size_in_pixels*(2*tan(FOVx/2)/xs)*(depth)
sizey = size_in_pixels*(2*tan(FOVy/2)/ys)*(depth)
---------------------------------------------------------------
sizex = size_in_pixels*2*depth*tan(FOVx/2)/xs
sizey = size_in_pixels*2*depth*tan(FOVy/2)/ys
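The final simplified formula can be wrapped up as a sketch (marker_world_size is an illustrative name, not from the original answer):

```python
import math

def marker_world_size(size_in_pixels, depth, FOVx, FOVy, xs, ys):
    """World-space extents that project to size_in_pixels on screen."""
    sizex = size_in_pixels * 2.0 * depth * math.tan(FOVx / 2.0) / xs
    sizey = size_in_pixels * 2.0 * depth * math.tan(FOVy / 2.0) / ys
    return sizex, sizey

# A 10 pixel marker at depth 2 with a 90 degree square view of 800x800:
sx, sy = marker_world_size(10.0, 2.0, math.pi / 2, math.pi / 2, 800, 800)
```

Note that the size grows linearly with depth, which exactly cancels the perspective division, so the marker keeps the same pixel size on screen.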