"It is better to be blind than to see things from only one point of view."
-Sabrina Jeffries, Romance Novelist
To track those interest points through the video, we cannot simply re-detect interest points on every frame, because the detections would not correspond pair-wise across frames as we would like. Instead, we compute optical flow between each pair of consecutive frames and use the flow values to move the interest points through the video. I implemented the Kanade-Lucas-Tomasi tracker as described in the handout. For each pixel, it uses a first-order Taylor approximation of the change in image intensity in a region around that pixel and minimizes the difference between that approximation and the actual change in intensity from one frame to the next over the same region. The solution to this least-squares minimization is the direction that the pixel moves from one frame to the next.
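The per-pixel least-squares step described above can be sketched roughly as follows. This is not the exact implementation from the handout, just a minimal illustration of the Lucas-Kanade idea: stack the spatial gradients over a window into a system A[u v]^T = -It and solve it in the least-squares sense. The function name, window size, and gradient scheme are all assumptions for illustration.

```python
import numpy as np

def lucas_kanade_flow(I1, I2, x, y, window=7):
    """Estimate the flow (u, v) at pixel (x, y) between frames I1 and I2.

    Minimizes sum over the window of (Ix*u + Iy*v + It)^2, the squared
    first-order Taylor residual of the brightness-constancy assumption.
    (Illustrative sketch; names and window size are assumptions.)
    """
    # Central-difference spatial gradients and temporal difference
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1
    r = window // 2
    ys, xs = slice(y - r, y + r + 1), slice(x - r, x + r + 1)
    # One row of A per window pixel: [Ix, Iy]; right-hand side is -It
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv  # (u, v): estimated motion of this pixel between the frames
```

For example, on a horizontal intensity ramp shifted right by one pixel, this recovers a horizontal flow of about 1.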
Once optical flow is computed for every pair of frames, we can track the Harris corner features from the first frame as they move across the image by simply looking up the flow at each feature's position and updating that position for each frame. Below are the paths that 20 randomly chosen tracked features take across the video, overlaid on the first frame.
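The lookup-and-update loop amounts to a few lines. This is a sketch under the assumption that each frame pair yields a dense (H, W, 2) flow field; the function and argument names are illustrative, not the actual code.

```python
import numpy as np

def track_features(points, flows):
    """Carry feature points through a sequence of dense flow fields.

    points: (N, 2) array of (x, y) positions in the first frame.
    flows:  list of (H, W, 2) arrays; flows[t][y, x] holds the (u, v)
            motion from frame t to frame t+1.
    Returns a (T+1, N, 2) array of per-frame positions.
    (Illustrative sketch; assumes dense per-pixel flow.)
    """
    paths = [np.asarray(points, dtype=float)]
    for flow in flows:
        pts = paths[-1]
        # Look up flow at the nearest pixel to each (possibly subpixel) position
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, flow.shape[1] - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, flow.shape[0] - 1)
        paths.append(pts + flow[yi, xi])
    return np.stack(paths)
```

Rounding to the nearest pixel is the simplest choice here; bilinear interpolation of the flow field would give smoother subpixel tracks.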
One remaining issue is handling points that drift off the edge of the image as they are tracked with optical flow. I chose to simply discard those features; below is an image showing all of the discarded features and the paths that carried them off the edge.
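The discard step can be expressed as a mask over feature tracks. A small sketch, assuming tracks are stored as a (T, N, 2) array of (x, y) positions (names are illustrative):

```python
import numpy as np

def in_bounds_mask(paths, width, height):
    """Boolean (N,) mask of features whose entire path stays in the image.

    paths: (T, N, 2) array of (x, y) positions per frame.
    False marks features that left the image at some frame and should
    be discarded. (Illustrative sketch of the filtering step.)
    """
    x, y = paths[..., 0], paths[..., 1]
    inside = (x >= 0) & (x < width) & (y >= 0) & (y < height)
    # A feature is kept only if it was inside the image in every frame
    return inside.all(axis=0)
```

The kept tracks are then `paths[:, in_bounds_mask(paths, W, H)]`, and the complement gives exactly the removed features plotted in the figure.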