Outline of how the program currently works:
The downloadable GUI lets the user set an image of themselves as a "password" and can then be switched to "live" mode, which captures a picture every 30 frames and compares it against the user-defined password. The password is set by taking a series of images of the user in a chosen pose (we suggest 10-20). Between captures, the user should move their body, without changing their distance from the camera, before returning to the pose for the next image. This variation is critical: it lets us calculate an allowable threshold for future login attempts, since it is impossible for the user to recreate the password exactly. The password itself becomes the average (X, Y, Z) value of each feature point across the training set.
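The averaging step can be sketched as follows. This is a minimal illustration, not the project's actual code: the function name `compute_password` and the array layout (one row of (X, Y, Z) skeleton points per captured image) are assumptions for the example.

```python
import numpy as np

def compute_password(training_set):
    """Average each feature point's (X, Y, Z) coordinates (in mm)
    across all training images to form the stored "password".

    training_set: array-like of shape (n_images, n_points, 3).
    Returns an array of shape (n_points, 3).
    """
    return np.asarray(training_set, dtype=float).mean(axis=0)

# Two training images, each with two feature points:
training = [[[0, 0, 0], [2, 2, 2]],
            [[2, 2, 2], [4, 4, 4]]]
password = compute_password(training)  # [[1, 1, 1], [3, 3, 3]]
```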
As alluded to above, a user cannot exactly replicate their password, due to both human error and hardware error. The Kinect measures distances in mm and does not always assign the same exact point in space as a feature point (this occurs frequently with the head, which covers a large region from which to pick a single feature point). The training set of 10-20 images, together with predefined constants, is therefore used to calculate an allowable threshold within which a login attempt is considered valid. For each feature point we go through the training set and calculate the largest X, Y, and Z distances (separately) between all instances of that point. This largest distance represents the "worst" the user did at recreating their password, and we use it as the allowable X, Y, Z threshold for that point. Without allowing some variability between the password and future login attempts, it would be impossible for the user to recreate his/her password.
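The threshold computation above can be sketched like this. Note that the largest per-axis distance between any two instances of a point equals the max minus the min along that axis, so no pairwise loop is needed. The function names and the `is_valid_login` check are illustrative assumptions; the actual program may also fold in the predefined constants mentioned above, which are omitted here.

```python
import numpy as np

def compute_thresholds(training_set):
    """For each feature point, take the largest spread along each
    axis over the training images (max minus min, per axis) as the
    allowable X, Y, Z thresholds.

    training_set: array-like of shape (n_images, n_points, 3).
    Returns an array of shape (n_points, 3).
    """
    pts = np.asarray(training_set, dtype=float)
    return pts.max(axis=0) - pts.min(axis=0)

def is_valid_login(attempt, password, thresholds):
    """Accept the attempt only if every feature point is within the
    per-axis threshold of the stored password."""
    diff = np.abs(np.asarray(attempt, dtype=float) - password)
    return bool(np.all(diff <= thresholds))

# Three training images of a single feature point:
training = [[[0, 0, 0]], [[4, 2, 6]], [[2, 1, 3]]]
thresholds = compute_thresholds(training)  # [[4, 2, 6]]
```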