This paper presents an interactive matting approach for efficiently extracting alpha mattes and foreground objects from video sequences. Beginning from user-specified strokes across space and time, the paper formulates the expansion of these strokes through the video volume as a Laplacian equation, yielding a coarse alpha matte. It then employs a novel spacetime alpha matting technique that exploits local statistics and neighboring information and converges to a globally optimal alpha matte in a few iterations. Finally, the paper derives a new global cost function to reconstruct the foreground color of the whole video volume, which faithfully preserves spatio-temporal coherence. The computation in each step can be reformulated as solving a set of linear equations, allowing users to quickly extract high-quality alpha mattes and foreground objects, even for data sets with ten million pixels. Experimental results on complex natural video sequences demonstrate the high quality and efficiency of the proposed approach.
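To make the stroke-propagation step concrete, the sketch below shows one way the coarse alpha matte could be obtained as the solution of a sparse Laplacian linear system, with user strokes imposed as soft constraints. This is not the authors' implementation; the function name, the constraint weight `lam`, and the affinity matrix construction are all assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's method): propagate user-scribbled alpha
# values through a pixel volume by solving a sparse Laplacian system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_strokes(weights, stroke_alpha, stroke_mask, lam=100.0):
    """Solve (L + lam*S) a = lam*S*a_s for a coarse alpha field.

    weights      : (n, n) sparse affinity matrix linking each pixel to its
                   spatial and temporal neighbors in the video volume
    stroke_alpha : (n,) alpha values on user strokes (0 = background,
                   1 = foreground); values elsewhere are ignored
    stroke_mask  : (n,) boolean, True where the user drew a stroke
    lam          : assumed weight enforcing the stroke constraints
    """
    degree = sp.diags(np.asarray(weights.sum(axis=1)).ravel())
    laplacian = degree - weights                      # graph Laplacian L
    constraint = sp.diags(stroke_mask.astype(float))  # selector S for stroke pixels
    A = laplacian + lam * constraint
    b = lam * (constraint @ stroke_alpha)
    alpha = spla.spsolve(A.tocsr(), b)                # one sparse linear solve
    return np.clip(alpha, 0.0, 1.0)
```

Because the system is sparse, symmetric, and positive definite, it can also be solved with a conjugate-gradient solver, which is one plausible reason each step of the pipeline reduces to a set of linear equations that scale to videos with millions of pixels.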