Please use this identifier to cite or link to this item: http://cmuir.cmu.ac.th/jspui/handle/6653943832/66177
Full metadata record
DC Field | Value | Language
dc.contributor.author | Narongsak Putpuek | en_US
dc.contributor.author | Nagul Cooharojananone | en_US
dc.contributor.author | Chidchanok Lursinsap | en_US
dc.contributor.author | Shin’ichi Satoh | en_US
dc.date.accessioned | 2019-08-21T09:18:23Z | -
dc.date.available | 2019-08-21T09:18:23Z | -
dc.date.issued | 2015 | en_US
dc.identifier.citation | Chiang Mai Journal of Science 42, 4 (Oct 2015), 1005-1018 | en_US
dc.identifier.issn | 0125-2526 | en_US
dc.identifier.uri | http://it.science.cmu.ac.th/ejournal/dl.php?journal_id=6256 | en_US
dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/66177 | -
dc.description.abstract | Automatically selecting the important content from rushes video is a challenging task due to the difficulty of eliminating raw data such as useless and redundant content. Redundancy elimination is difficult because repetitive segments, which are takes of the same scene, usually have different lengths and motion patterns. In this work, a new methodology is proposed to detect retakes in rushes video. The video is divided into shots by the proposed automatic shot boundary detection, which uses local singular value decomposition and k-means clustering. Shots containing useless content are eliminated by our proposed technique, which integrates a near-duplicate keyframe (NDK) algorithm. The local features of each remaining frame are extracted using the scale-invariant feature transform (SIFT) algorithm. The similarity between consecutive frames is calculated using SIFT matching and then converted into a string, and these strings are concatenated into a longer string sequence used as the shot representation. The similarity between each pair of sequences is evaluated using the longest common subsequence algorithm. Our method was evaluated in direct comparison with a conventional technique. Overall, across the TRECVID 2007 and 2008 data sets, which represent diverse styles of rushes video, the proposed methodology detected retakes with higher accuracy. | en_US (see the illustrative sequence-matching sketch after this record)
dc.language.iso | Eng | en_US
dc.publisher | Science Faculty of Chiang Mai University | en_US
dc.subject | rushes video | en_US
dc.subject | sequence matching | en_US
dc.subject | retake detection | en_US
dc.subject | SIFT | en_US
dc.subject | LCS | en_US
dc.subject | SVD | en_US
dc.title | Sequence Matching Based Automatic Retake Detection Framework for Rushes Video | en_US
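The abstract above describes converting the SIFT-matching results between consecutive frames into a per-shot string and comparing shots with the longest common subsequence (LCS). The sketch below is a minimal illustration of that LCS comparison step only, not the authors' implementation: the symbol alphabet of the example shot strings and the normalization by the shorter string's length are assumptions made for the example.

```python
# Minimal sketch of an LCS-based shot similarity, assuming each shot is
# already encoded as a string with one symbol per consecutive-frame
# SIFT-matching result. The symbols and the normalization choice are
# illustrative assumptions, not details taken from the paper.

def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    prev = [0] * (n + 1)  # LCS lengths for the previous row of the DP table
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]


def shot_similarity(s1: str, s2: str) -> float:
    """Similarity in [0, 1]: LCS length normalized by the shorter string."""
    if not s1 or not s2:
        return 0.0
    return lcs_length(s1, s2) / min(len(s1), len(s2))


if __name__ == "__main__":
    # Two hypothetical shot representations: retakes of the same scene tend
    # to share long common subsequences even when their lengths differ.
    shot_a = "AABBBCCD"
    shot_b = "ABBCCDDE"
    print(f"similarity = {shot_similarity(shot_a, shot_b):.2f}")
```

Normalizing by the shorter string is one simple way to tolerate the length differences between takes that the abstract points out; the paper itself may use a different normalization or decision threshold.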
Appears in Collections: CMUL: Journal Articles

Files in This Item:
There are no files associated with this item.


Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.