Processing.js with big data

My goal is to create an interactive web data visualization from motion tracking experiments.

The paths of moving objects are displayed as points connected by lines, and the visualization allows the user to pan and zoom the data.

My prototype uses Processing.js because I am familiar with Processing, but I ran into performance issues when drawing data with more than 10,000 vertices or lines. I've pursued a couple of strategies for implementing panning and zooming, but the one I currently think is best is to store the data as an SVG image and use the PShape data type in Processing.js to load, draw, scale, and translate the data. Here is a cleaned-up version of the code:

/* @pjs preload="nanoparticle_trajs.svg"; */

PShape trajs;

// pan/zoom state, updated by the mouse-event functions below
float centerX, centerY;
float imgW, imgH;

void setup()
{
  size(900, 600);

  trajs = loadShape("nanoparticle_trajs.svg");

  // start with the shape at its native size, anchored at the origin
  centerX = 0;
  centerY = 0;
  imgW = trajs.width;
  imgH = trajs.height;
}

// function that repeats and draws elements to the canvas
void draw()
{
  background(255);
  shape(trajs, centerX, centerY, imgW, imgH);
}

// ...additional functions that handle mouse events and update
// centerX, centerY, imgW, imgH for panning and zooming

      

Perhaps I shouldn't expect this to be fast with so many data points, but are there general strategies for optimizing the display of complex SVG elements with Processing.js? What would I do if I wanted to display 100,000 vertices and lines? Should I abandon Processing altogether?

Thanks!

EDIT

After reading the answer below, I thought an image would help convey the essence of the rendering:
[Screen capture of the currently implemented visualization]

It is essentially a scatter plot of >10,000 points with connecting lines. The user can pan and zoom the data, and the scale bar in the upper-left corner is dynamically updated to match the current zoom level.
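For context, the scale bar is just a fixed-length line whose label reports how many data units that length spans at the current zoom. A minimal sketch of that idea in Processing syntax, assuming a hypothetical zoomFactor variable (screen pixels per data unit) maintained by the zoom handler:

float zoomFactor = 1.0;  // assumed: screen pixels per data unit, set by the zoom handler

void drawScaleBar()
{
  float barPixels = 100;                     // fixed on-screen length of the bar
  float dataUnits = barPixels / zoomFactor;  // how many data units it currently spans
  stroke(0);
  line(20, 30, 20 + barPixels, 30);
  fill(0);
  text(nf(dataUnits, 0, 1) + " units", 20, 25);
}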

1 answer


Here's my take:

Group data by zoom level and break it apart as your users zoom in

I would suggest you aggregate some of the data and represent each group as a single, simple node.

When the user zooms in toward a specific node, you can break that node apart and release the group, thereby showing its details.

This way you limit the amount of data you need to display in the zoomed-out views (where all the nodes are shown), and you add detail as the user zooms in to a region, in which case not all nodes need to be shown since the zoom focuses on only one area of your graph.
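A minimal sketch of what that zoom-dependent level of detail could look like in Processing syntax, assuming the trajectories are available as raw coordinate arrays rather than a single SVG PShape (the array names, decimation step, and zoom threshold below are all hypothetical):

float[] fullX, fullY;      // all vertices of one trajectory
float[] coarseX, coarseY;  // e.g. every 10th vertex, built once at load time
float zoomFactor = 1.0;    // assumed: current zoom, updated by mouse events

void drawTrajectory()
{
  // Zoomed out: draw the decimated copy. Zoomed in: draw full detail.
  float[] xs = (zoomFactor < 4.0) ? coarseX : fullX;
  float[] ys = (zoomFactor < 4.0) ? coarseY : fullY;

  noFill();
  beginShape();
  for (int i = 0; i < xs.length; i++) {
    vertex(xs[i] * zoomFactor, ys[i] * zoomFactor);
  }
  endShape();
}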

Limit drawing to the viewport

Determine what lies inside the current view area and draw only that. Avoid drawing the whole node structure unless your user can actually see it in the viewport. Show only what is needed. I suspect Processing.js may already do some of this, but I don't know whether it still applies when you scale a PShape.
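A rough illustration of that kind of manual culling, under the same assumption of raw coordinate arrays; viewLeft, viewTop, viewW, and viewH are hypothetical names for the current pan/zoom window expressed in data coordinates:

float viewLeft, viewTop, viewW, viewH;  // assumed: current window in data coordinates

boolean inView(float x, float y)
{
  return x >= viewLeft && x <= viewLeft + viewW &&
         y >= viewTop  && y <= viewTop  + viewH;
}

void drawVisiblePoints(float[] xs, float[] ys)
{
  for (int i = 0; i < xs.length; i++) {
    if (inView(xs[i], ys[i])) {
      // map data coordinates to screen coordinates before drawing
      point(map(xs[i], viewLeft, viewLeft + viewW, 0, width),
            map(ys[i], viewTop,  viewTop  + viewH, 0, height));
    }
  }
}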

Consider bitmap caching if your nodes are live / clickable



If your elements are interactive or clickable, you may want to group the data and display it as bitmaps (large groups of data rendered as a single image) until the user clicks on a bitmap, at which point the bitmap is removed and the original shapes are redrawn in its place. This minimizes the number of points/lines the engine has to process on each redraw cycle.

For bitmap caching, see this link (it is for Fabric.js, a canvas and SVG library, but the concept/idea is the same), and also this answer I posted to one of my own questions about interactive vector/bitmap caching.
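A sketch of the bitmap-caching idea in Processing terms: render the heavy PShape once into an off-screen PGraphics buffer, blit that image every frame, and only switch back to the live vector shape when the user clicks. The showVector flag and buffer size are assumptions, and depending on the Processing/Processing.js version, createGraphics() may need an explicit renderer argument:

PShape trajs;
PGraphics cache;
boolean showVector = false;

void buildCache()
{
  cache = createGraphics(900, 600);
  cache.beginDraw();
  cache.background(255);
  cache.shape(trajs, 0, 0, 900, 600);  // expensive vector draw, done once
  cache.endDraw();
}

void draw()
{
  background(255);
  if (showVector) {
    shape(trajs, 0, 0, 900, 600);  // full vector detail while interacting
  } else {
    image(cache, 0, 0);            // cheap bitmap blit otherwise
  }
}

void mousePressed()
{
  showVector = true;  // swap the bitmap for the real shape on click
}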


As a side note:

Do you really need to use Processing?

If there is no interaction or animation and you just want to put pixels on the canvas (draw them once), consider dropping the vector library altogether. A plain canvas simply paints the pixels and that's it. The initial drawing of the data may lag a bit, but since there is no internal reference to the points/shapes/lines after they have been drawn, there is nothing left to waste your resources or clog your memory.

If that is your case, consider switching to a plain canvas. However, data visualizations are usually all about animation and interactivity, so I doubt you'd want to give those up.
