Publication | Closed Access
Video Segmentation with Just a Few Strokes
Citations: 136
References: 24
Year: 2015
Venue: Unknown
Scene Analysis · Engineering · Machine Learning · Video Processing · Motion Segmentation · Image Sequence Analysis · Image Analysis · Data Science · Pattern Recognition · Video Content Analysis · Robot Learning · Machine Vision · Computer Science · Video Understanding · Deep Learning · Computer Vision · Video Segmentation · Scene Understanding · Segmentation Quality · Video Hallucination
Video segmentation requires annotated datasets, yet disocclusions and stationary objects hinder frame-to-frame label propagation. The proposed method combines motion cues from point trajectories with minimal supervision to largely resolve these issues, adds a constraint enforcing color consistency across successive frames, evaluates user effort against segmentation quality on ego-motion videos, and benchmarks against a diverse set of algorithms.
As the use of videos becomes more popular in computer vision, the need for annotated video datasets increases. Such datasets are required either as training data or simply as ground truth for benchmarks. A particular challenge in video segmentation stems from disocclusions, which hamper frame-to-frame propagation, in conjunction with non-moving objects. We show that a combination of motion from point trajectories, as known from motion segmentation, with minimal supervision largely solves this problem. Moreover, we integrate a new constraint that enforces consistency of the color distribution in successive frames. We quantify user interaction effort with respect to segmentation quality on challenging ego-motion videos, and we compare our approach to a diverse set of algorithms both in terms of user effort and in terms of performance on common video segmentation benchmarks.
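The color-consistency idea can be illustrated with a small sketch: compare the color distribution of the segmented region in two successive frames and penalize the segmentation when the distributions diverge. This is a minimal illustration, not the paper's exact energy term; the histogram binning, the chi-squared distance, and all function names here are assumptions chosen for clarity.

```python
import numpy as np

def color_histogram(frame, mask, bins=8):
    """Normalized per-channel color histogram of the pixels inside `mask`.

    frame: (H, W, 3) array of RGB values in [0, 256)
    mask:  (H, W) boolean array marking the segmented region
    """
    pixels = frame[mask]  # (N, 3) colors of the region
    hist = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def color_consistency_penalty(frame_t, mask_t, frame_t1, mask_t1, bins=8):
    """Chi-squared distance between the region's color histograms in
    successive frames; small when propagation preserved the object's
    color distribution, large when the mask drifted onto other content.
    (Illustrative stand-in for the consistency constraint, not the
    paper's formulation.)
    """
    h1 = color_histogram(frame_t, mask_t, bins)
    h2 = color_histogram(frame_t1, mask_t1, bins)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
```

With identical frames and masks the penalty is zero; a mask that slides onto differently colored background raises it, so the term discourages drift during frame-to-frame propagation.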