This project focuses on morphing (and blending) between the faces of our class. Using manually input 'feature points' (or 'warp points'), I implemented an algorithm that creates a morphed blend of two images for a given set of input parameters.
Algorithm
The facemorph algorithm takes 2 images as input and morphs them into a single composite image. Using the 26 manually input feature points as 'known' points, the algorithm applies a Poisson fill to generate the offsets for all remaining 'unknown' pixels.
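One way to picture this interpolation step is as a discrete Laplace equation over the image grid, with the known feature-point offsets held fixed as Dirichlet constraints. The sketch below is a minimal, hypothetical formulation of such a Poisson fill (the function name and exact setup are my assumptions, not taken from the project code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_fill_offsets(shape, points, offsets):
    """Interpolate a dense offset field from sparse control points.

    Solves a discrete Laplace equation over the (H, W) grid, with the
    known offsets at `points` held fixed as Dirichlet constraints.
    Run once per offset component (e.g. once for dx, once for dy).
    """
    H, W = shape
    n = H * W
    idx = lambda r, c: r * W + c

    known = np.full(n, np.nan)
    for (r, c), off in zip(points, offsets):
        known[idx(int(r), int(c))] = off

    A = sp.lil_matrix((n, n))
    b = np.zeros(n)
    for r in range(H):
        for c in range(W):
            i = idx(r, c)
            if not np.isnan(known[i]):
                # 'Known' pixel: keep its offset fixed.
                A[i, i] = 1.0
                b[i] = known[i]
                continue
            # 'Unknown' pixel: offset equals the average of its neighbours.
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(rr, cc) for rr, cc in nbrs if 0 <= rr < H and 0 <= cc < W]
            A[i, i] = len(nbrs)
            for rr, cc in nbrs:
                A[i, idx(rr, cc)] = -1.0

    x = spla.spsolve(A.tocsr(), b)
    return x.reshape(H, W)
```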
The input parameters of the 'facemorph' method are listed here (a sketch of how they fit together follows the list):
2 source images (of faces)
2 sets of points (one for each source image), where each point is the location of one of the 26 warp points I used
warp_ratio: The amount that imageA is warped towards imageB
cross_dissolve: The value of the alpha-mask for imageA in the composite image.
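Putting these parameters together, a hedged sketch of how the morph could be assembled is shown below. It reuses the poisson_fill_offsets sketch above; warp_image is a hypothetical helper introduced here for illustration, not the project's actual function:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, dy, dx):
    """Inverse warp: output pixel (r, c) samples img at (r + dy, c + dx)."""
    H, W = img.shape[:2]
    rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = [rr + dy, cc + dx]
    if img.ndim == 2:
        return map_coordinates(img, coords, order=1, mode="nearest")
    return np.dstack([map_coordinates(img[..., ch], coords, order=1, mode="nearest")
                      for ch in range(img.shape[2])])

def facemorph(imgA, imgB, ptsA, ptsB, warp_ratio, cross_dissolve):
    """Morph sketch: warp both faces toward an intermediate point set,
    then cross-dissolve. ptsA/ptsB are (26, 2) arrays of (row, col)
    warp-point locations; warp_ratio and cross_dissolve are in [0, 1]."""
    H, W = imgA.shape[:2]

    # Intermediate warp-point locations, warp_ratio of the way from A to B.
    pts_mid = (1 - warp_ratio) * ptsA + warp_ratio * ptsB

    # Known offsets at the intermediate points, pointing back into each image;
    # the Poisson fill spreads them to every other pixel.
    offA, offB = ptsA - pts_mid, ptsB - pts_mid
    dyA = poisson_fill_offsets((H, W), pts_mid, offA[:, 0])
    dxA = poisson_fill_offsets((H, W), pts_mid, offA[:, 1])
    dyB = poisson_fill_offsets((H, W), pts_mid, offB[:, 0])
    dxB = poisson_fill_offsets((H, W), pts_mid, offB[:, 1])

    warpedA = warp_image(imgA, dyA, dxA)
    warpedB = warp_image(imgB, dyB, dxB)

    # cross_dissolve is imageA's alpha in the final composite.
    return cross_dissolve * warpedA + (1 - cross_dissolve) * warpedB
```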
Here we see 2 source images, and the 27 selected warp points for each image
Here is a video of all the face morphs in the data set. Each morph uses 20 frames, and the video framerate is 10fps.
Mean Face
To calculate the 'mean face' (average face) for the class data set, I computed the average offset for each warp point across all faces and used this mean set of offsets as the warp target for every face. By setting the warp_ratio to 1 and the cross_dissolve to 1, I warped each image completely onto this mean set of warp point offsets. The result is shown below. Because the subjects' head positions vary significantly between photos, there is a lot of blurring, though the outline of a face is noticeable in the middle of the image.
Here we see the computed mean face for the class data set
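A minimal sketch of this averaging step, reusing the poisson_fill_offsets and warp_image sketches above (again with illustrative names, not the project's actual functions):

```python
def mean_face(images, point_sets):
    """Mean-face sketch: warp every face fully onto the mean warp-point
    geometry (warp_ratio = 1), then average the warped images."""
    mean_pts = np.mean(point_sets, axis=0)   # mean location of each warp point
    H, W = images[0].shape[:2]
    acc = np.zeros(images[0].shape, dtype=float)
    for img, pts in zip(images, point_sets):
        off = pts - mean_pts                 # offsets pointing back into this face
        dy = poisson_fill_offsets((H, W), mean_pts, off[:, 0])
        dx = poisson_fill_offsets((H, W), mean_pts, off[:, 1])
        acc += warp_image(img, dy, dx)
    return acc / len(images)
```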