Analysis of gesture and action in technical talks for video indexing

(with Shanon Ju)

In this paper, we present an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple "vocabulary" of actions which can easily be recognized based on the active contour shape and motion. The recognized actions provide a rich annotation of the sequence that can be used to access a condensed version of the talk from a web page.
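The key-frame detection step described above can be illustrated with a minimal sketch. This is not the paper's robust motion estimation method; it is a toy version that starts a new slide segment whenever the mean frame-to-frame difference exceeds a threshold (the function name and threshold value are assumptions for illustration):

```python
import numpy as np

def segment_slides(frames, threshold=0.25):
    """Toy slide segmentation: start a new subsequence whenever the
    mean absolute difference between consecutive grayscale frames
    exceeds `threshold` (a large change suggests a new slide)."""
    segments = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        change = np.mean(np.abs(cur - prev))
        if change > threshold:
            segments.append([cur])      # key frame: begin a new slide segment
        else:
            segments[-1].append(cur)    # same slide: extend current segment
    return segments

# Synthetic example: three frames of one "slide" followed by three of another.
frames = [np.zeros((4, 4)) for _ in range(3)] + [np.ones((4, 4)) for _ in range(3)]
segments = segment_slides(frames)
print(len(segments))  # → 2
```

The real system replaces the simple difference test with robust motion estimation, so that global slide motion (e.g., the speaker sliding the transparency) is not mistaken for a slide change.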

The system can be summarized by the following diagram:
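The final stage of the pipeline, recognizing actions from the tracked contour, can be sketched as a toy rule-based classifier. The feature names, thresholds, and rules below are assumptions for illustration, not the paper's actual decision criteria:

```python
def classify_action(speed, elongation):
    """Toy action classifier over active-contour features.
    `speed` is the contour's motion magnitude per frame and
    `elongation` its length-to-width ratio; both thresholds are
    hypothetical illustration values."""
    if elongation > 3.0 and speed < 1.0:
        return "point"   # long, nearly static contour: pointer at rest
    if speed >= 1.0:
        return "write"   # sustained contour motion: writing
    return "none"        # no recognizable gesture

print(classify_action(speed=0.2, elongation=5.0))  # → point
print(classify_action(speed=2.5, elongation=1.0))  # → write
```

Because the domain is constrained to overhead-slide presentations, even simple shape and motion cues like these suffice to distinguish the small vocabulary of actions.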

To illustrate, consider this example from a technical talk:

Related Publications

Ju, S. X., Black, M. J., Minneman, S., and Kimber, D., Summarization of video-taped presentations: Automatic analysis of motion and gesture, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 8, No. 5, Sept. 1998, pp. 686-696. (postscript)

Ju, S. X., Black, M. J., Minneman, S., and Kimber, D., Analysis of gesture and action in technical talks for video indexing, IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-97, Puerto Rico, June 1997, pp. 595-601; also in AAAI Spring Symposium'97: Intelligent Integration and Use of Text, Image, Video and Audio Corpora, March 1997, pp. 25-31. (postscript)