Using Kinect to track a hand without a full-body image

I am just starting to learn about the Kinect. I know that its algorithm uses a probability distribution over individual pixels to decide which body part a region belongs to.

My question is: could I access the arm position without the Kinect seeing the whole body? What about when the body is too close to the device, or mostly occluded? I am using Microsoft's Kinect SDK.

+3




4 answers


I'm not sure whether this is possible with the MS Kinect SDK, but with the new "beta" OpenNI SDK you can track users without fully viewing/calibrating them and access hand points. There are also other vendors that provide body-tracking SDKs you could try (OMEK is one).



+3




You can - I'm using SimpleOpenNI with Processing, but the MS SDK should let you use a similar method.

First you need to enable hand tracking:

  kinect.enableGesture();
  kinect.enableHands();
  kinect.addGesture("RaiseHand"); // available gestures: Wave, Swipe, RaiseHand and Click



then use the callbacks below. Note that you do not need convertRealWorldToProjective() if you only want the raw 3D (real-world) coordinates; it maps them to 2D screen (projective) coordinates.

  // thisHand and currHand are sketch-level PVector globals.
  void onCreateHands(int handId, PVector position, float time) {
    PVector convHand = new PVector();
    thisHand = position;
    kinect.convertRealWorldToProjective(position, convHand);
    currHand = convHand;
  }

  void onUpdateHands(int handId, PVector position, float time) {
    PVector convHand = new PVector();
    thisHand = position;
    kinect.convertRealWorldToProjective(position, convHand);
    currHand = convHand;
  }

  void onDestroyHands(int handId, PVector position, float time) {
    // Tracking lost: reset the on-screen hand position.
    currHand = new PVector(0, 0, 0);
  }

  void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
    // Start tracking a hand where the "RaiseHand" gesture ended,
    // then stop listening for further gestures.
    kinect.startTrackingHands(endPosition);
    kinect.removeGesture("RaiseHand");
  }

      

Hope it helps!

+1




I don't know if this is what you want, but you can try raising your hand in front of the Kinect so that it becomes the closest object. Then read the depth map, keep only the nearest pixels, and set every other pixel value to black.
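A minimal sketch of that idea, using a plain Java array in place of a real Kinect depth frame (the frame values, the 100 mm band, and the method names are illustrative assumptions, not Kinect SDK calls):

```java
import java.util.Arrays;

public class NearestObjectMask {
    // Keep only pixels within `bandMm` of the nearest measured depth;
    // everything else is "painted black" (0). A depth of 0 means no reading.
    static int[] maskNearest(int[] depthMm, int bandMm) {
        int nearest = Integer.MAX_VALUE;
        for (int d : depthMm) {
            if (d > 0 && d < nearest) nearest = d;
        }
        int[] mask = new int[depthMm.length];
        for (int i = 0; i < depthMm.length; i++) {
            int d = depthMm[i];
            mask[i] = (d > 0 && d <= nearest + bandMm) ? d : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        // Hand at ~600 mm, body at ~1500 mm, 0 = invalid pixel.
        int[] frame = {1500, 620, 600, 1480, 0, 650};
        System.out.println(Arrays.toString(maskNearest(frame, 100)));
        // prints [0, 620, 600, 0, 0, 650] - only the "hand" pixels survive
    }
}
```

With a real depth frame you would run the same loop over the Kinect's depth array each frame and treat the surviving pixel blob as the hand.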

0




You can segment the pixels of a specific area of your body (such as arms, thighs, etc.) using depth frames from the Kinect and comparing their pixel positions to skeleton segments. A depth pixel can be assigned to a specific skeleton segment if its distance to it is the smallest compared to the other skeleton segments. It is important that you calculate the distance from line segments and not from joints (i.e. points).
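The key computation is the distance from a depth point to a bone segment rather than to a joint. A self-contained sketch of that distance, with hypothetical 3D points standing in for Kinect skeleton joints:

```java
public class BoneDistance {
    // Distance from point p to the line segment a-b, all given as {x, y, z}.
    // Clamping t to [0, 1] is what distinguishes segment distance from
    // (infinite) line distance and from plain joint (point) distance.
    static double pointToSegment(double[] p, double[] a, double[] b) {
        double[] ab = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
        double[] ap = {p[0] - a[0], p[1] - a[1], p[2] - a[2]};
        double abLen2 = ab[0] * ab[0] + ab[1] * ab[1] + ab[2] * ab[2];
        double t = abLen2 == 0 ? 0
                 : (ap[0] * ab[0] + ap[1] * ab[1] + ap[2] * ab[2]) / abLen2;
        t = Math.max(0, Math.min(1, t)); // clamp the projection onto the segment
        double dx = p[0] - (a[0] + t * ab[0]);
        double dy = p[1] - (a[1] + t * ab[1]);
        double dz = p[2] - (a[2] + t * ab[2]);
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        double[] shoulder = {0, 0, 0}, elbow = {1, 0, 0};
        // Point beside the middle of the bone: distance 0.5.
        System.out.println(pointToSegment(new double[]{0.5, 0.5, 0}, shoulder, elbow));
        // Point past the elbow: measured to the endpoint, not to the infinite line.
        System.out.println(pointToSegment(new double[]{2, 0, 0}, shoulder, elbow));
    }
}
```

Assigning each depth pixel to the bone with the smallest such distance gives the per-limb segmentation described above.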

This example shows the segmentation of the torso, head, arms, forearms, thighs, and legs in real time (image omitted).

You can read more technical details on how to implement it in this Kinect Avatars article.

0








