PointCloud with Multiple Kinects
I am trying to render a point cloud display with multiple Kinects in Processing. I capture the front and back of a subject with 2 Kinects on opposite sides and generate both point clouds.
The problem is that the point clouds' X/Y/Z coordinates are not aligned: each cloud is just drawn in its own sensor's coordinate system, so on screen they overlap in a mess. Is there a way to calculate or compare the two clouds so I can translate the second one to "join" the first? I could translate it into position manually, but if I move the sensors it goes out of alignment again.
Assuming all Kinects are stationary, I think you will have to go in this order:
- decide which Kinect to use as the global reference frame,
- get the 3D transform parameters for each of the other Kinects (I would try using PMatrix3D and applyMatrix(), although it might be slow),
- apply the transformation to each of the other Kinects' point clouds and draw all the clouds
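The last step above, applying a known rigid transform to the second Kinect's points before drawing them, can be sketched as plain Java (in an actual Processing sketch you would build a PMatrix3D and call mult() or applyMatrix() instead, but the math is the same; the matrix values here are purely illustrative, not measured from a real setup):

```java
public class CloudTransform {
    // Multiply a 4x4 row-major transform matrix by a 3D point (w assumed 1).
    static float[] apply(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[1] * y + m[2]  * z + m[3],
            m[4] * x + m[5] * y + m[6]  * z + m[7],
            m[8] * x + m[9] * y + m[10] * z + m[11]
        };
    }

    public static void main(String[] args) {
        // Illustrative example: second Kinect 2 m away, rotated 180 degrees
        // around Y (cos 180 = -1, sin 180 = 0), a plausible "opposite sides"
        // setup. In practice these values come from your calibration step.
        float[] kinect2ToKinect1 = {
            -1, 0,  0, 0,
             0, 1,  0, 0,
             0, 0, -1, 2.0f,
             0, 0,  0, 1
        };
        // A point 0.5 m in front of the second Kinect maps to 1.5 m in
        // front of the first one.
        float[] p = apply(kinect2ToKinect1, 0, 0, 0.5f);
        System.out.printf("%.2f %.2f %.2f%n", p[0], p[1], p[2]); // 0.00 0.00 1.50
    }
}
```

You would run every point of the second cloud through this transform (or wrap the drawing calls in pushMatrix()/applyMatrix()/popMatrix()) so both clouds end up in the reference Kinect's coordinate system.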
I don't know offhand how to get the transform parameters for the Procrustes transform, but assuming they don't change, you could set up multiple anchor points: display the point clouds from each pair of Kinects and record points that you know are the same in both clouds. Once you've got enough of them, build a PMatrix3D from them and apply it inside pushMatrix()/popMatrix(). This is the approach this guy uses: http://www.youtube.com/watch?v=ujUNj1RDL4I
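A heavily reduced sketch of the anchor-point idea, in plain Java with made-up example coordinates: given pairs of corresponding points picked from both clouds, the translation mapping cloud B onto cloud A falls out of the centroid difference. Note this only recovers translation; solving for rotation as well is the full Procrustes/Kabsch problem and needs an SVD from a linear-algebra library.

```java
public class AnchorAlign {
    // Mean position of a set of 3D points.
    static float[] centroid(float[][] pts) {
        float[] c = new float[3];
        for (float[] p : pts)
            for (int i = 0; i < 3; i++) c[i] += p[i];
        for (int i = 0; i < 3; i++) c[i] /= pts.length;
        return c;
    }

    // Translation t such that b + t lands on a for each correspondence pair
    // (exact only when the two sensors share the same orientation).
    static float[] estimateTranslation(float[][] a, float[][] b) {
        float[] ca = centroid(a), cb = centroid(b);
        return new float[] { ca[0] - cb[0], ca[1] - cb[1], ca[2] - cb[2] };
    }

    public static void main(String[] args) {
        // Hypothetical anchor points clicked in both clouds; cloud B is
        // cloud A shifted by (2, 0, 1).
        float[][] cloudA = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
        float[][] cloudB = { {2, 0, 1}, {3, 0, 1}, {2, 1, 1} };
        float[] t = estimateTranslation(cloudA, cloudB);
        System.out.printf("%.1f %.1f %.1f%n", t[0], t[1], t[2]); // -2.0 0.0 -1.0
    }
}
```

In Processing you would then feed the recovered translation (and rotation, once you have it) into a PMatrix3D and apply it before drawing the second cloud.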
An alternative approach would be to use the Iterative Closest Point (ICP) algorithm and build the 3D transform from its output. I would really love an ICP or PCL library for Processing if anyone knows a good one.
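To show the shape of the algorithm, here is a toy translation-only ICP loop in plain Java: repeatedly match each point of the moving cloud to its nearest neighbor in the reference cloud, then shift by the mean offset. A real implementation (e.g. in PCL) also solves for rotation each iteration and uses a spatial index instead of brute-force search; the data below is synthetic.

```java
public class TinyIcp {
    // Brute-force nearest neighbor of p among the reference points.
    static float[] nearest(float[] p, float[][] ref) {
        float best = Float.MAX_VALUE;
        float[] bestPt = ref[0];
        for (float[] q : ref) {
            float dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
            float d = dx * dx + dy * dy + dz * dz;
            if (d < best) { best = d; bestPt = q; }
        }
        return bestPt;
    }

    // Iteratively shift the moving cloud toward the reference cloud
    // (translation only; no rotation estimate).
    static void align(float[][] moving, float[][] ref, int iters) {
        for (int it = 0; it < iters; it++) {
            float tx = 0, ty = 0, tz = 0;
            for (float[] p : moving) {
                float[] q = nearest(p, ref);
                tx += q[0] - p[0]; ty += q[1] - p[1]; tz += q[2] - p[2];
            }
            tx /= moving.length; ty /= moving.length; tz /= moving.length;
            for (float[] p : moving) { p[0] += tx; p[1] += ty; p[2] += tz; }
        }
    }

    public static void main(String[] args) {
        // Synthetic data: the moving cloud is the reference cloud shifted
        // by (0.25, 0, 0.25); ICP should pull it back onto the reference.
        float[][] ref    = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
        float[][] moving = { {0.25f, 0, 0.25f}, {1.25f, 0, 0.25f}, {0.25f, 1, 0.25f} };
        align(moving, ref, 10);
        System.out.printf("%.1f %.1f %.1f%n",
            moving[0][0], moving[0][1], moving[0][2]); // 0.0 0.0 0.0
    }
}
```

ICP only converges when the clouds start reasonably close, so with Kinects on opposite sides you would still need a rough initial transform (e.g. from the anchor points above) before refining with ICP.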