Efficiency of perspective projection versus ray casting / ray tracing

I have a fairly general question. I want to determine the outline of a series of objects, each consisting of 30-50 closed polygons, with each polygon made up of about 300 points (x, y, z). I am working with a fixed viewport; the view is rotated around the x, y and z axes (angles alpha, beta, gamma) about the origin of the polygons' coordinate system.

As I see it, there are two options: perspective projection or ray tracing. Perspective projection seems to require a large number of matrix operations for each point just to determine whether it lands inside or outside the viewport. Or, given the large number of points, would I be better off working per viewport pixel instead, i.e. casting a ray for each pixel and determining whether it intersects the object(s)? Either way, I will write the result as 0 (outside) or 1 (inside) into a 200x200 integer matrix representing the viewport.
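To make the second option concrete, here is a rough sketch of what I have in mind: one ray per viewport pixel, tested against every polygon, writing 0/1 into the 200x200 matrix. (The pinhole camera at the origin looking down -z, the 60-degree field of view, and names like `Vec3` and `Polygon` are just placeholders, not part of my actual setup.)

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3   vsub (Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3   vcross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
double vdot (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

using Polygon = std::vector<Vec3>;  // one closed, planar polygon (~300 points)

// 2D crossing-number test: is (px, py) inside the polygon outline?
bool inside2D(const std::vector<std::array<double, 2>>& poly, double px, double py) {
    bool in = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i][1] > py) != (poly[j][1] > py)) {
            double xc = poly[j][0] + (py - poly[j][1]) / (poly[i][1] - poly[j][1]) * (poly[i][0] - poly[j][0]);
            if (px < xc) in = !in;
        }
    }
    return in;
}

// Does a ray from the camera origin in direction `dir` hit this polygon?
bool rayHitsPolygon(Vec3 dir, const Polygon& poly) {
    Vec3 n = vcross(vsub(poly[1], poly[0]), vsub(poly[2], poly[0]));  // plane normal
    double denom = vdot(n, dir);
    if (std::fabs(denom) < 1e-12) return false;          // ray parallel to the polygon's plane
    double t = vdot(n, poly[0]) / denom;                 // plane equation: dot(n, p) = dot(n, poly[0])
    if (t <= 0.0) return false;                          // intersection behind the camera
    Vec3 hit = {dir.x * t, dir.y * t, dir.z * t};

    // Drop the axis where the normal is largest and do the inside test in 2D.
    int drop = (std::fabs(n.x) > std::fabs(n.y))
                   ? (std::fabs(n.x) > std::fabs(n.z) ? 0 : 2)
                   : (std::fabs(n.y) > std::fabs(n.z) ? 1 : 2);
    auto to2D = [drop](Vec3 p) -> std::array<double, 2> {
        if (drop == 0) return {p.y, p.z};
        if (drop == 1) return {p.x, p.z};
        return {p.x, p.y};
    };
    std::vector<std::array<double, 2>> poly2d;
    for (const Vec3& p : poly) poly2d.push_back(to2D(p));
    std::array<double, 2> h = to2D(hit);
    return inside2D(poly2d, h[0], h[1]);
}

// Fill the 200x200 viewport: 1 where the pixel's ray hits any polygon, 0 elsewhere.
void rayCastViewport(const std::vector<Polygon>& polygons, int grid[200][200]) {
    const double fov = 60.0 * 3.14159265358979 / 180.0;  // assumed field of view
    const double scale = std::tan(fov / 2.0);
    for (int y = 0; y < 200; ++y) {
        for (int x = 0; x < 200; ++x) {
            double ndcX = (2.0 * (x + 0.5) / 200.0 - 1.0) * scale;   // pixel centre ->
            double ndcY = (1.0 - 2.0 * (y + 0.5) / 200.0) * scale;   // camera-space direction
            Vec3 dir = {ndcX, ndcY, -1.0};
            grid[y][x] = 0;
            for (const Polygon& poly : polygons)
                if (rayHitsPolygon(dir, poly)) { grid[y][x] = 1; break; }
        }
    }
}
```

My concern is the cost of testing 200x200 rays against 30-50 polygons of ~300 points each, versus transforming all those points for projection.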

Thanks in advance.

+2




2 answers


Perspective projection (followed by scan-converting the polygons into image coordinates) will be much faster.



The matrix transformation required for perspective projection (mainly the world-to-camera matrix) is needed just the same with ray tracing. However, with perspective projection you only transform the polygon vertices, whereas with ray tracing you effectively have to process every pixel of the image.
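To illustrate the idea (a minimal sketch only, not code from the question: the rotation order, camera distance and focal length below are assumptions; only the 200x200 grid and the alpha/beta/gamma angles come from the question), you transform just the ~300 vertices per polygon, project them once, and then fill the projected 2D polygons pixel by pixel:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// World-to-camera step: rotate a point by alpha/beta/gamma about x, y, z (in that order).
Vec3 rotateXYZ(Vec3 p, double a, double b, double g) {
    Vec3 r1 = {p.x, p.y*std::cos(a) - p.z*std::sin(a), p.y*std::sin(a) + p.z*std::cos(a)};       // about x
    Vec3 r2 = {r1.x*std::cos(b) + r1.z*std::sin(b), r1.y, -r1.x*std::sin(b) + r1.z*std::cos(b)}; // about y
    return {r2.x*std::cos(g) - r2.y*std::sin(g), r2.x*std::sin(g) + r2.y*std::cos(g), r2.z};     // about z
}

// Perspective-project a camera-space point onto the 200x200 viewport.
// (No clipping of points behind the camera in this sketch.)
Vec2 project(Vec3 p, double camDist, double focal) {
    double z = camDist - p.z;                 // distance in front of the camera
    return { 100.0 + focal * p.x / z,         // viewport centre at (100, 100)
             100.0 - focal * p.y / z };
}

// Same crossing-number test used by any point-in-polygon / scan-conversion routine.
bool inside2D(const std::vector<Vec2>& poly, double px, double py) {
    bool in = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > py) != (poly[j].y > py)) {
            double xc = poly[j].x + (py - poly[j].y) / (poly[i].y - poly[j].y) * (poly[i].x - poly[j].x);
            if (px < xc) in = !in;
        }
    }
    return in;
}

// Transform ONLY the vertices, then fill each projected polygon into the grid.
// `grid` is assumed to be zero-initialized (0 = outside).
void rasterize(const std::vector<std::vector<Vec3>>& polygons,
               double alpha, double beta, double gamma, int grid[200][200]) {
    const double camDist = 10.0, focal = 400.0;          // assumed camera parameters
    for (const auto& poly : polygons) {
        std::vector<Vec2> proj;
        for (const Vec3& v : poly)
            proj.push_back(project(rotateXYZ(v, alpha, beta, gamma), camDist, focal));

        // Only scan the bounding box of the projected polygon.
        double minX = proj[0].x, maxX = proj[0].x, minY = proj[0].y, maxY = proj[0].y;
        for (const Vec2& p : proj) {
            minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
            minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        }
        for (int y = std::max(0, (int)minY); y <= std::min(199, (int)maxY); ++y)
            for (int x = std::max(0, (int)minX); x <= std::min(199, (int)maxX); ++x)
                if (inside2D(proj, x + 0.5, y + 0.5)) grid[y][x] = 1;
    }
}
```

Note how the per-point work (rotation and projection) is done once per vertex, while the per-pixel work is a cheap 2D inside test restricted to each polygon's bounding box. With ray tracing, every one of the 40,000 pixels would have to be tested against every polygon.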

+6




You should be able to use perspective projection and the perspective projection matrix to compute the positions of the vertices in screen space. It is difficult to know what you really want to do. If you want to render an image of this 3D scene, then with only a few polygons it will be hard to see any difference between ray tracing and rasterization if your code is optimized (you would still want to use an acceleration structure for the ray-tracing approach); however, yes, rasterization will most likely be faster overall.

Now, if you need to compute the distance between the eye (the camera origin) and the geometry seen through the camera, I don't see why you couldn't take the depth value of any sample at any pixel in the image and use the inverse of the perspective projection matrix to recover its distance in camera space.
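As an illustration (a minimal sketch only, assuming an OpenGL-style projection with near/far planes n and f, a vertical field of view fovY and an aspect ratio; none of these names come from the original post): the depth-buffer value d in [0, 1] plus the pixel coordinates can be unprojected back into camera space, and the distance to the eye is just the length of that camera-space position.

```cpp
#include <cmath>

struct CamPoint { double x, y, z; };

// Reconstruct the camera-space position of the sample at pixel (px, py) with depth d.
// Algebraically this is the same as multiplying the NDC point by the inverse of the
// projection matrix and dividing by w, just written out explicitly.
CamPoint unproject(double px, double py, double d,
                   int width, int height,
                   double fovY, double aspect, double n, double f) {
    // Pixel -> normalized device coordinates in [-1, 1].
    double xNdc = 2.0 * (px + 0.5) / width  - 1.0;
    double yNdc = 1.0 - 2.0 * (py + 0.5) / height;
    double zNdc = 2.0 * d - 1.0;

    // Invert the non-linear depth mapping of the projection matrix.
    double zCam = 2.0 * f * n / ((f - n) * zNdc - (f + n));  // negative: camera looks down -z

    // Invert the x/y scaling of the projection matrix.
    double tanHalf = std::tan(fovY / 2.0);
    double xCam = -zCam * xNdc * tanHalf * aspect;
    double yCam = -zCam * yNdc * tanHalf;
    return {xCam, yCam, zCam};
}

// Distance from the eye (camera origin) to that sample.
double distanceToEye(const CamPoint& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
}
```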



Is speed even an issue for your problem? If not, by all means use ray tracing.

Most of this information can be found at www.scratchapixel.com

0








