Resampling in Cesium

I know Cesium offers several interpolation methods, including linear (bilinear in 2D), Hermite, and Lagrange. These can be used to resample points and/or create curves that approximate selected points, and so on.

My question, however, is: what method does Cesium use internally when it renders a 3D scene while the user is zooming and panning all over the place? This is not a case where the programmer has access to the raster; you can't step into the middle of it all and call the interpolation functions directly. Cesium does its job as quickly as it can in response to user input.

My guess is that the default is bilinear, but I don't know, and I can't find any documentation that explicitly states what is used. Also, is there a way to make Cesium use a specific resampling technique during these operations, such as Lagrange resampling? That is what I actually need: to force Cesium to use Lagrange resampling during scene rendering. Any suggestions would be appreciated.

EDIT: here's a more detailed description of the problem ...

Suppose I am using Cesium to build a 3D model of the Earth, drape a grayscale image chip at its correct location on the surface of the Earth model, and display the result in a Cesium window. If the viewpoint is far enough from the Earth's surface, the number of screen pixels covered by the image chip will be less than the number of pixels in the source image, so some downsampling occurs. Likewise, if the user zooms in far enough, there is a point where the image chip covers more screen pixels than there are pixels in the source image, so some upsampling occurs. In general, every time Cesium draws a frame that includes the image, resampling happens. It could be nearest-neighbor (doubtful), linear (possibly), cubic, Lagrange, Hermite, or any of several other resampling methods. At my company, we use Cesium for a major government program that requires Lagrange resampling to ensure image quality. (NGA found this to be the best for their software and analysis tools, and they made it a compliance requirement, so we have no choice.)

Thus, the problem is: during user interaction with the model, for example zooming, the drawing process is not under the programmer's control. Resampling happens either inside Cesium itself (hopefully) or in even lower layers (such as the WebGL facilities Cesium relies on). So I don't know what method is used for this resampling. Worse, if that method is not Lagrange, I have no idea how to change it.

So the questions are: Does Cesium do resampling explicitly? If so, what method does it use? If not, what drawing packages and functions does Cesium rely on to render the image onto the globe? (I can then dig in and determine what methods those layers use and/or offer.)
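For reference, here is what I mean by Lagrange resampling, sketched in 1D. The function names are mine, not Cesium API; a real 2D resampler would apply this separably along rows, then columns:

```javascript
// 1D Lagrange interpolation through the points (xs[i], ys[i]).
// With a 4-point neighborhood this is cubic Lagrange resampling.
function lagrangeInterpolate(xs, ys, x) {
  let sum = 0;
  for (let i = 0; i < xs.length; i++) {
    let basis = 1;
    for (let j = 0; j < xs.length; j++) {
      if (j !== i) {
        basis *= (x - xs[j]) / (xs[i] - xs[j]);
      }
    }
    sum += ys[i] * basis;
  }
  return sum;
}

// Resample a value at fractional position x from equally spaced samples,
// using the 4 nearest sample points (clamped at the array edges).
function lagrangeResample1D(samples, x) {
  const i = Math.min(Math.max(Math.floor(x) - 1, 0), samples.length - 4);
  const xs = [i, i + 1, i + 2, i + 3];
  const ys = xs.map((k) => samples[k]);
  return lagrangeInterpolate(xs, ys, x);
}
```

Cubic Lagrange interpolation reproduces any cubic polynomial exactly, which is one reason it is preferred over bilinear for imagery.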



1 answer


UPDATE: Wow, my original answer was a complete misunderstanding of your question, so I've rewritten it from scratch.

With your edits, it is clear that your question is about how the images are resampled to the screen during rendering. These images are texture maps in WebGL, and the process of getting them onto the screen quickly is implemented in hardware, on the video card itself. Software on the CPU is not fast enough to place individual pixels on the screen one at a time, which is why we have hardware-accelerated 3D graphics in the first place.

Now for the bad news: that hardware supports nearest-neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier kind of interpolation, because rendering must be done in a split second to keep the frame rate as high as possible.

Mipmapping is well described by @gman in his article WebGL 3D Textures. It is a long article, so search for the word "mipmap" and read from there. Basically, the image is downscaled into a chain of smaller images ahead of time, so an appropriately sized source can be selected at render time. But the final resize to the screen is always either NEAREST or LINEAR.
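To make that concrete, a mip chain can be sketched as repeated 2×2 box-filter downscales of a grayscale image. This is a simplification of what the driver does when you call `gl.generateMipmap`, and the helper names are mine:

```javascript
// Build one mip level by averaging each 2x2 block of the parent level.
// `image` is a row-major grayscale array; width and height are assumed
// to be powers of two, as WebGL 1 requires for mipmapped textures.
function downscale2x(image, width, height) {
  const w = width / 2, h = height / 2;
  const out = new Float32Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const a = image[(2 * y) * width + 2 * x];
      const b = image[(2 * y) * width + 2 * x + 1];
      const c = image[(2 * y + 1) * width + 2 * x];
      const d = image[(2 * y + 1) * width + 2 * x + 1];
      out[y * w + x] = (a + b + c + d) / 4;
    }
  }
  return out;
}

// Full mip chain: level 0 is the original; each level halves both dimensions.
function buildMipChain(image, width, height) {
  const levels = [{ data: image, width, height }];
  while (width > 1 && height > 1) {
    image = downscale2x(image, width, height);
    width /= 2;
    height /= 2;
    levels.push({ data: image, width, height });
  }
  return levels;
}
```

At render time the GPU picks the level (or blend of two levels) whose resolution best matches the on-screen size, then applies NEAREST or LINEAR filtering within it.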



Quoting @gman here:

You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes:

  • NEAREST = choose 1 pixel from the biggest mip
  • LINEAR = choose 4 pixels from the biggest mip and blend them
  • NEAREST_MIPMAP_NEAREST = choose the best mip, then pick 1 pixel from that mip
  • LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
  • NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
  • LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them

I think the best news I can give you is that Cesium uses the best of these, LINEAR_MIPMAP_LINEAR, for its rendering. If you have a hard requirement for more expensive image interpolation, that effectively means a requirement not to use a hardware-accelerated real-time 3D graphics card, because there is no way to do Lagrange interpolation during real-time hardware rendering.
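For what it's worth, here is how those modes are set on a texture in raw WebGL. This is a sketch of GL texture state configuration, not Cesium API; it assumes `gl` is an existing WebGLRenderingContext and `texture` has already been uploaded with `gl.texImage2D`:

```javascript
// Plain WebGL texture-filtering setup; Cesium configures its imagery
// textures internally along these lines.
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.generateMipmap(gl.TEXTURE_2D);

// Minification (zoomed out): trilinear filtering, the best the hardware offers.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);

// Magnification (zoomed in): only NEAREST or LINEAR are valid here.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
```

Note that the MIPMAP modes apply only to minification; when magnifying, the hardware offers nothing beyond LINEAR, which is exactly the limitation described above.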







