Streaming a very large 3D scene

I am working on a 3D engine that has to handle very large scenes. Apart from the rendering itself (frustum culling, occlusion culling, etc.), I'm wondering what the best solution is for managing the scene data.

The data is given as a huge list of 3D meshes with no relationships between them, so I can't create portals, I guess ...

The main goal is to run this engine on systems with little RAM (500MB-1GB), while the scenes loaded into it are very large and can contain millions of triangles, which makes memory use very intensive. I'm currently using a loose octree built at load time; it works well on small to medium scenes, but many scenes are simply too huge to fit completely into memory. So here's my question:

How would you handle loading and unloading chunks of the scene dynamically (and ideally smoothly), and on what basis would you decide whether a chunk should be loaded or unloaded? I can create a custom file format if needed, as the scenes are exported with a custom exporter from well-known 3D authoring tools.

Important information: many scenes cannot be occluded effectively because of how they are built. Example: a very large network of pipes, so there is not much occlusion, but a very large number of elements.



2 answers


I think the best solution would be a "package" of solutions, i.e. a combination of different techniques:



  • Level of Detail (LOD) can reduce the memory footprint if unused levels are not loaded. LOD switches can be smoothed fairly easily by alpha blending between the old and new level. The simplest controller uses the distance from the mesh to the camera (see the first sketch after this list).
  • Free host memory (RAM) once an object has been uploaded to the GPU (device), and obviously free everything that is no longer used (OpenGL resources too). Valgrind can help you track what is still held (see the second sketch after this list).
  • Use low-quality meshes and use tessellation to improve image quality.
  • Use indexed VBOs; this should reduce VRAM usage and improve performance.
  • Avoid meshes where possible: terrain can be rendered from height maps, and some things can be generated procedurally.
  • Use bump and/or normal maps. They improve quality, so you can reduce the number of vertices.
  • Divide those "pipes" into different cells.
  • Fake 3D meshes with 2D images: impostors, skydomes ...
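
A minimal sketch of the distance-based LOD controller mentioned above, with an alpha-blended transition zone. The `LodSelection` struct, the threshold table and the fade range are assumptions for illustration, not part of any particular engine (C++17):

```cpp
// Sketch: pick a LOD level from camera distance and cross-fade near the
// switch point. 'thresholds' holds the distance at which each level starts,
// e.g. {0, 50, 150, 400}; 'fadeRange' is the width of the cross-fade zone.
#include <algorithm>
#include <cstddef>
#include <vector>

struct LodSelection {
    std::size_t level;      // LOD level to draw
    std::size_t nextLevel;  // level we are blending towards
    float       blend;      // 0 = only 'level', 1 = only 'nextLevel'
};

LodSelection selectLod(float distanceToCamera,
                       const std::vector<float>& thresholds,
                       float fadeRange)
{
    std::size_t level = 0;
    for (std::size_t i = 0; i < thresholds.size(); ++i)
        if (distanceToCamera >= thresholds[i])
            level = i;

    LodSelection sel{level, level, 0.0f};

    // Start blending towards the next (coarser) level just before its threshold.
    if (level + 1 < thresholds.size()) {
        float next = thresholds[level + 1];
        if (distanceToCamera > next - fadeRange) {
            sel.nextLevel = level + 1;
            sel.blend = std::clamp(
                (distanceToCamera - (next - fadeRange)) / fadeRange, 0.0f, 1.0f);
        }
    }
    return sel;
}
```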
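
And a sketch of the "upload to GPU, then drop the RAM copy" idea combined with indexed VBOs, assuming plain OpenGL 3.x with a loader such as GLEW; `CpuMesh` / `GpuMesh` are hypothetical and error handling is omitted:

```cpp
// Sketch: upload an indexed mesh to VRAM, then release the host-side copy
// so the CPU memory footprint stays low.
#include <GL/glew.h>
#include <cstdint>
#include <vector>

struct CpuMesh {
    std::vector<float>         vertices;  // interleaved positions/normals/uvs
    std::vector<std::uint32_t> indices;
};

struct GpuMesh {
    GLuint  vbo = 0, ibo = 0;
    GLsizei indexCount = 0;
};

GpuMesh uploadAndRelease(CpuMesh& cpu)
{
    GpuMesh gpu;
    gpu.indexCount = static_cast<GLsizei>(cpu.indices.size());

    glGenBuffers(1, &gpu.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, gpu.vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 cpu.vertices.size() * sizeof(float),
                 cpu.vertices.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &gpu.ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gpu.ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 cpu.indices.size() * sizeof(std::uint32_t),
                 cpu.indices.data(), GL_STATIC_DRAW);

    // The data now lives in VRAM; swap with empty vectors to actually free
    // the host-side capacity.
    std::vector<float>().swap(cpu.vertices);
    std::vector<std::uint32_t>().swap(cpu.indices);

    return gpu;  // draw later with glDrawElements(GL_TRIANGLES, indexCount, ...)
}
```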


If a large part of the detail is going to come from textures, there are commercial packages like GraniteSDK that offer seamless LOD-based texture streaming using a virtual texture cache. See http://graphinesoftware.com/granite . Alternatively you can look at http://ir-ltd.net/

You can actually use the same technique to build geometry on the fly from texture data in a shader, but this is a little more complicated.
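
To illustrate the idea behind a virtual texture cache (this is not GraniteSDK's API, just a hedged sketch of the underlying concept): keep a fixed budget of resident tiles and evict the least-recently-used one on a miss. `TileKey`, `Tile` and `loadTile` are placeholders for your own tile addressing and disk I/O:

```cpp
// Sketch: LRU cache of texture tiles with a fixed residency budget.
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

struct TileKey {
    std::uint32_t textureId, mipLevel, x, y;
    bool operator==(const TileKey& o) const {
        return textureId == o.textureId && mipLevel == o.mipLevel &&
               x == o.x && y == o.y;
    }
};

struct TileKeyHash {
    std::size_t operator()(const TileKey& k) const {
        return (std::size_t(k.textureId) * 73856093u) ^
               (std::size_t(k.mipLevel) * 19349663u) ^
               (std::size_t(k.x) * 83492791u) ^ std::size_t(k.y);
    }
};

struct Tile { /* decoded texels, or a slot index into a physical texture atlas */ };

class TileCache {
public:
    explicit TileCache(std::size_t maxTiles) : maxTiles_(maxTiles) {}

    // Returns the tile, loading it (and possibly evicting the LRU tile) on a miss.
    Tile& request(const TileKey& key) {
        auto it = map_.find(key);
        if (it != map_.end()) {                  // hit: move to the front
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        if (lru_.size() >= maxTiles_) {          // miss: evict least-recently-used
            map_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(key, loadTile(key));  // stream the page in
        map_[key] = lru_.begin();
        return lru_.front().second;
    }

private:
    Tile loadTile(const TileKey&) { return Tile{}; }  // placeholder for disk I/O

    std::size_t maxTiles_;
    std::list<std::pair<TileKey, Tile>> lru_;
    std::unordered_map<TileKey, std::list<std::pair<TileKey, Tile>>::iterator,
                       TileKeyHash> map_;
};
```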



For voxels, there are methods for building octrees entirely in GPU memory and streaming in/out only the parts that you really need. Rendering can then be done by raycasting. See this post: Use octree to organize 3D volume data in GPU, http://www.icare3d.org/research/GTC2012_Voxelization_public.pdf and http://www.cse.chalmers.se/~kampe/highResolutionSparseVoxelDAGs.pdf
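
The node layouts in those papers differ in the details, but they typically come down to something like the compact node below: a child pointer plus bit masks, stored in one flat pool that can be uploaded as a GPU buffer and traversed in a raycasting shader. This is only a rough host-side sketch (C++20 for `std::popcount`), not the exact encoding from any of the linked papers:

```cpp
// Sketch: compact sparse-voxel-octree node with children packed contiguously.
#include <bit>
#include <cstdint>
#include <vector>

struct SvoNode {
    // Index of the first child in the node pool; children are stored
    // contiguously, so child i lives at firstChild + rank of bit i.
    std::uint32_t firstChild;
    std::uint8_t  childMask;   // bit i set => octant i exists
    std::uint8_t  leafMask;    // bit i set => octant i is a leaf (voxel brick)
};

using SvoPool = std::vector<SvoNode>;  // flat pool, uploadable as a GPU buffer

// Returns the pool index of child 'octant' of 'node', or ~0u if absent.
inline std::uint32_t childIndex(const SvoNode& node, unsigned octant)
{
    if (!(node.childMask & (1u << octant)))
        return ~0u;
    // Count existing children in octants below 'octant' to find the offset.
    unsigned rank = std::popcount(
        static_cast<unsigned>(node.childMask & ((1u << octant) - 1u)));
    return node.firstChild + rank;
}
```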

It all depends on how static the scene is going to be and, following from that, how much of the data you can pre-bake to suit your rendering needs. It already helps if you can determine visibility constraints up front (Google "Potentially Visible Sets") and organize the data so you can stream it on demand. Since the renderer has a hard memory limit, you always need a strategy for getting the right chunk of data into GPU memory as quickly and accurately as possible.
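
As a rough sketch of such an on-demand strategy for the original question: each frame, score the chunks (here simply by camera distance; a real engine would also weight by screen-space size or a precomputed PVS) and load/unload against a fixed memory budget. `Chunk`, `sizeInBytes` and the commented-out I/O calls are assumptions, not an existing API:

```cpp
// Sketch: budget-driven chunk streaming decision, run once per frame.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Chunk {
    float       distanceToCamera = 0.0f;
    std::size_t sizeInBytes = 0;
    bool        resident = false;
};

void updateStreaming(std::vector<Chunk*>& chunks, std::size_t budgetBytes)
{
    // Nearest chunks get priority for the available budget.
    std::sort(chunks.begin(), chunks.end(),
              [](const Chunk* a, const Chunk* b) {
                  return a->distanceToCamera < b->distanceToCamera;
              });

    std::size_t used = 0;
    for (Chunk* c : chunks) {
        bool wanted = used + c->sizeInBytes <= budgetBytes;
        if (wanted) {
            used += c->sizeInBytes;
            if (!c->resident) {
                // requestLoadAsync(c);  // stream from the custom file format
                c->resident = true;      // placeholder: mark as loading
            }
        } else if (c->resident) {
            // requestUnload(c);         // free host and GPU memory
            c->resident = false;
        }
    }
}
```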
