A deep audio-visual model for efficient dynamic video summarization

Gamal El-Nagar, Ahmed El-Sawy, Metwally Rashad

Research output: Contribution to journal › Article › peer-review

Abstract

The adage “a picture is worth a thousand words” resonates in the digital video domain: a video, composed of countless frames, can be seen as a composition of millions of such words. Video summarization condenses lengthy videos while retaining their crucial content, grouping shots from segments into cohesive visual units. Although existing techniques based on keyframes or keyshots are effective, integrating the audio component remains essential. This paper applies deep learning techniques to generate dynamic summaries enriched with audio. To address that gap, an efficient model employs audio-visual features, yielding more robust and informative video summaries. The model selects keyshots according to their significance scores, preserving essential content; assigning these scores to specific video shots is a pivotal yet demanding task in video summarization. The model is evaluated on the benchmark datasets TVSum and SumMe. Experimental results demonstrate its efficacy, showing considerable performance gains: F-scores of 79.33% on TVSum and 66.78% on SumMe, surpassing previous state-of-the-art techniques.
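The abstract does not spell out how keyshots are chosen from the per-shot significance scores, but in the TVSum/SumMe literature this selection step is conventionally cast as a 0/1 knapsack problem: maximize the total score of the selected shots subject to a summary-length budget, typically 15% of the video duration. The sketch below illustrates that standard formulation only; it is not the authors' implementation, and the function name, toy scores, and shot lengths are all hypothetical.

```python
import numpy as np

def select_keyshots(scores, lengths, budget_ratio=0.15):
    """Select shots maximizing total significance under a length budget
    via 0/1 knapsack dynamic programming (a common post-processing step
    in keyshot-based summarization; a hypothetical sketch, not the
    paper's code).
    """
    budget = int(budget_ratio * sum(lengths))
    n = len(scores)
    dp = np.zeros(budget + 1)               # dp[c]: best score at capacity c
    keep = np.zeros((n, budget + 1), bool)  # keep[i, c]: shot i taken at c
    for i in range(n):
        # Iterate capacities in descending order so each shot is used once.
        for c in range(budget, lengths[i] - 1, -1):
            cand = dp[c - lengths[i]] + scores[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i, c] = True
    # Backtrack through the keep table to recover the selected shots.
    selected, c = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i, c]:
            selected.append(i)
            c -= lengths[i]
    return sorted(selected)

# Toy example: eight shots with model-assigned scores and frame counts.
scores  = [0.9, 0.2, 0.7, 0.4, 0.8, 0.3, 0.85, 0.1]
lengths = [40,  90,  30,  60,  45,  120, 35,   80]
print(select_keyshots(scores, lengths))  # -> [0, 6]
```

In this framing, the deep audio-visual model's only job is to produce the `scores` vector (e.g., from fused VGGish audio embeddings and visual features, per the paper's keywords); the dynamic-programming selection step is model-agnostic.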

Original language: English
Article number: 104130
Journal: Journal of Visual Communication and Image Representation
Volume: 100
State: Published - Apr 2024

Keywords

  • SumMe
  • TVSum
  • VGGish
  • Video Skimming
  • Visualization of score curves
