Create a 3D facial mesh for the Kinect's depth field

The Kinect SDK comes with an example of creating a color-stream facial mesh. It looks like this:

http://imgur.com/TV6dHBC

I want to create a 3D mesh from the depth stream.

My code looks like this:

private EnumIndexableCollection<FeaturePoint, PointF> facePoints;
private EnumIndexableCollection<FeaturePoint, Vector3DF> depthPoints;

public void DrawFaceModel(DrawingContext drawingContext)
{
    if (!this.lastFaceTrackSucceeded || this.skeletonTrackingState != SkeletonTrackingState.Tracked)
        return;

    var faceModelPts = new List<Point>();
    var faceModelPts3D = new List<Point3D>();
    var faceModel = new List<FaceModelTriangle>();
    var faceModel3D = new List<FaceModelTriangle3D>();

    for (int i = 0; i < this.facePoints.Count; i++)
    {
        faceModelPts3D.Add(new Point3D(this.depthPoints[i].X + 0.5f, this.depthPoints[i].Y + 0.5f, this.depthPoints[i].Z + 0.5f));
    }

    FaceDataPoints.Number_of_Points = this.facePoints.Count;

    foreach (var t in ImageData.faceTriangles)
    {
        var triangle = new FaceModelTriangle3D();
        triangle.Point1_3D = faceModelPts3D[t.First];
        triangle.Point2_3D = faceModelPts3D[t.Second];
        triangle.Point3_3D = faceModelPts3D[t.Third];
        faceModel3D.Add(triangle);
    }

    var faceModelGroup = new GeometryGroup();
    for (int i = 0; i < faceModel.Count; i++)
    {
        var faceTriangle = new GeometryGroup();  
        faceTriangle.Children.Add(new LineGeometry(faceModel3D[i].Point1_3D, faceModel3D[i].Point2_3D)); 
        faceTriangle.Children.Add(new LineGeometry(faceModel3D[i].Point2_3D, faceModel3D[i].Point3_3D));
        faceTriangle.Children.Add(new LineGeometry(faceModel3D[i].Point3_3D, faceModel3D[i].Point1_3D));
        faceModelGroup.Children.Add(faceTriangle); //Add lines to image
    }

    drawingContext.DrawGeometry(Brushes.LightYellow, new Pen(Brushes.LightYellow, 1.0), faceModelGroup);
}

private struct FaceModelTriangle3D
{
     public Point3D Point1_3D;
     public Point3D Point2_3D;
     public Point3D Point3_3D;
}


I am currently getting the error: Error 2 Argument 1: cannot convert from 'System.Windows.Media.Media3D.Point3D' to 'System.Windows.Point' F:\Work\Uni\4th Year\Final Year Project\Project\Project 3.0\Project 3.0\FaceTrackingViewer.xaml.cs 275 68 Project 3.0

This is caused by:

(new LineGeometry(faceModel3D[i].Point2_3D, faceModel3D[i].Point3_3D));


What do I need to use instead of a LineGeometry to get this working, or is there a much more efficient way to do this?

Also, once I've created the face mesh, I want to be able to store this information so that I can calculate the distance between points on the face. How can I store the triangle information?



3 answers


The Face Tracking Basics-WPF sample uses 2D Point values, but you're using Point3D.

This means that you are passing a 3D coordinate to the LineGeometry constructor, which is for 2D drawing. You need to convert the 3D world coordinate to a 2D screen coordinate, a process known as projection.

WPF includes a perspective camera class that is suited to the job:



System.Windows.Media.Media3D.PerspectiveCamera


Take a look at creating a 3D scene in WPF - MSDN
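To give a concrete picture, here is a minimal sketch (the helper name and camera settings are illustrative assumptions, not part of the SDK) of feeding the 3D face points into a Viewport3D with a PerspectiveCamera so that WPF handles the projection:

using System.Collections.Generic;
using System.Windows.Controls;   // Viewport3D
using System.Windows.Media;      // Brushes, Colors
using System.Windows.Media.Media3D;

public static class FaceMesh3DBuilder
{
    // Builds a Viewport3D showing the face mesh; add the returned control to your layout.
    public static Viewport3D BuildViewport(IList<Point3D> vertices, IList<int> triangleIndices)
    {
        var mesh = new MeshGeometry3D();
        foreach (Point3D v in vertices)
            mesh.Positions.Add(v);
        foreach (int i in triangleIndices)
            mesh.TriangleIndices.Add(i);

        var model = new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.LightYellow));

        var viewport = new Viewport3D();
        // The Kinect sits at the origin of camera space and the face is in front of it (positive Z),
        // so point the camera down the +Z axis; 57 degrees roughly matches the depth camera's FOV.
        viewport.Camera = new PerspectiveCamera(new Point3D(0, 0, 0), new Vector3D(0, 0, 1),
                                                new Vector3D(0, 1, 0), 57);
        viewport.Children.Add(new ModelVisual3D { Content = new AmbientLight(Colors.White) });
        viewport.Children.Add(new ModelVisual3D { Content = model });
        return viewport;
    }
}

Note that a Viewport3D has to be hosted in the visual tree rather than drawn through a DrawingContext, so this approach replaces the DrawGeometry call instead of plugging into it.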



It is true that you need to project from 3D to 2D. However, instead of rolling your own projection, you should use the Kinect's calibrated projection method:

Microsoft.Kinect.CoordinateMapper.MapSkeletonPointToDepthPoint()

See http://msdn.microsoft.com/en-us/library/jj883696.aspx
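For example, a minimal sketch (the helper name is illustrative; it assumes a Kinect v1 KinectSensor and the 640x480 depth format) of mapping the tracked 3D shape points into depth-image pixels, which can then be passed to LineGeometry as ordinary 2D Points:

using System.Collections.Generic;
using System.Windows;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.FaceTracking;

public static class FaceProjection
{
    // Maps the face tracker's 3D shape points (skeleton space, in meters) to 2D depth-image pixels.
    public static List<Point> ProjectTo2D(KinectSensor sensor, IEnumerable<Vector3DF> facePoints3D)
    {
        var projected = new List<Point>();
        foreach (Vector3DF p in facePoints3D)
        {
            // The tracker reports points in the same coordinate space as the skeleton stream.
            var skeletonPoint = new SkeletonPoint { X = p.X, Y = p.Y, Z = p.Z };

            // Use the sensor's calibrated mapping instead of a hand-rolled projection.
            DepthImagePoint depthPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
                skeletonPoint, DepthImageFormat.Resolution640x480Fps30);

            projected.Add(new Point(depthPoint.X, depthPoint.Y));
        }
        return projected;
    }
}

In DrawFaceModel you could then build each triangle's three LineGeometry edges from the returned 2D points, exactly as the existing loop does, and the Point3D/Point mismatch goes away.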



How to get a mesh from Kinect FaceTrack?

And for the second question:
Triangle information gives you faces, so if you want to save them to, for example, a .obj file, then:

  • Vertices: v x y z
    (x, y, z should be replaced with the numbers for that vertex's coordinates, while v means you are specifying a vertex)

  • Faces from a triangle: f i j k
    (i, j, k should be replaced with the indices of the triangle's three vertices, while f indicates that you are specifying a face)

and you have your .obj file.
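As a minimal sketch (the class name is illustrative), writing such a file from the vertex and triangle-index lists could look like this; note that .obj face indices are 1-based:

using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Windows.Media.Media3D;

public static class ObjWriter
{
    public static void Write(string path, IList<Point3D> vertices, IList<int> triangleIndices)
    {
        using (var writer = new StreamWriter(path))
        {
            // One "v x y z" line per vertex.
            foreach (Point3D v in vertices)
                writer.WriteLine(string.Format(CultureInfo.InvariantCulture, "v {0} {1} {2}", v.X, v.Y, v.Z));

            // One "f i j k" line per triangle; .obj indices start at 1, not 0.
            for (int i = 0; i + 2 < triangleIndices.Count; i += 3)
                writer.WriteLine("f {0} {1} {2}",
                    triangleIndices[i] + 1, triangleIndices[i + 1] + 1, triangleIndices[i + 2] + 1);
        }
    }
}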

If you keep it in the program, I would suggest two containers (an array or a list, depending on what you need), one for the vertices and one for the faces, or a single class to hold both.
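A minimal sketch of that last option (the class and member names are just placeholders), which also answers the distance question directly:

using System.Collections.Generic;
using System.Windows.Media.Media3D;

public class FaceMeshData
{
    // Three entries in TriangleIndices describe one face.
    public readonly List<Point3D> Vertices = new List<Point3D>();
    public readonly List<int> TriangleIndices = new List<int>();

    // Euclidean distance between two stored vertices, e.g. two FeaturePoint indices.
    public double Distance(int indexA, int indexB)
    {
        Vector3D d = Vertices[indexA] - Vertices[indexB];
        return d.Length;
    }
}

If you fill Vertices from the tracker's 3D shape points (which are in meters), Distance gives you the metric distance between any two feature points on the face.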
