Building the HowTo100M Video Corpus
Published August 19, 2019 | 22 min

    Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating", as hosts explain the actions they are taking on screen.
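    As a rough illustration of that idea (a minimal sketch, not the authors' pipeline; the video id and ASR segments below are invented), timestamped speech-recognition output can be turned directly into weakly-labelled clip-caption pairs:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    video_id: str
    start: float   # clip start, in seconds
    end: float     # clip end, in seconds
    caption: str   # machine-transcribed narration for this span

def clips_from_transcript(video_id, segments):
    """Turn timestamped ASR segments into weakly-labelled clips.

    `segments` is a list of (start, end, text) tuples, e.g. from an
    automatic speech recognizer. Each narration span becomes one
    clip-caption pair; no human annotation is involved, so the
    labels are cheap but noisy.
    """
    return [Clip(video_id, s, e, t.strip())
            for (s, e, t) in segments if t.strip()]

# Hypothetical ASR output for a cooking explainer video.
asr = [
    (12.0, 15.5, "now we dice the onion"),
    (15.5, 21.0, "add it to the pan with a little oil"),
]
for clip in clips_from_transcript("abc123", asr):
    print(f"{clip.start:6.1f}-{clip.end:6.1f}  {clip.caption}")
```

    Scaled across a million narrated videos, this is how a corpus of this size can be assembled with no manual labelling, at the cost of captions that only loosely match what is on screen.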

    This episode is a discussion of the HowTo100M dataset, a project that has assembled a video corpus of 136M video clips with captions covering 23k activities.

    Related Links

    The paper will be presented at ICCV 2019

    @antoine77340

    Antoine on GitHub

    Antoine's homepage
