Learning semantic scene models from observing activity in visual surveillance

    Research output: Contribution to journal › Article › peer-review


    Abstract

    This paper considers the problem of automatically learning an activity-based semantic scene model from a stream of video data. A scene model is proposed that labels regions according to an identifiable activity in each region, such as entry/exit zones, junctions, paths, and stop zones. We describe several unsupervised methods that learn these scene elements and present results that show the efficiency of our approach. Finally, we describe how the models can be used to support the interpretation of moving objects in a visual surveillance environment.
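
    The abstract does not specify the learning algorithms used; as a rough, hypothetical illustration of the general idea, one of the scene elements mentioned (entry/exit zones) can be found by clustering the first and last observed positions of tracked trajectories. The sketch below uses plain k-means, which is an assumption for illustration only, not necessarily the paper's method; the trajectory data is invented.

    ```python
    import random
    from math import dist

    def kmeans(points, k, iters=20, seed=0):
        """Plain k-means: cluster 2-D points into k groups (illustrative only)."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        for _ in range(iters):
            # Assign each point to its nearest center.
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda j: dist(p, centers[j]))
                clusters[i].append(p)
            # Recompute each center as the mean of its cluster
            # (keep the old center if a cluster is empty).
            centers = [
                (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                if c else centers[i]
                for i, c in enumerate(clusters)
            ]
        return centers

    # Hypothetical tracked trajectories: each is a list of (x, y) positions
    # from an object tracker, ordered in time.
    trajectories = [
        [(0.0 + d, 0.0), (5.0, 5.0), (10.0 - d, 10.0)] for d in (0.0, 0.4, -0.3)
    ] + [
        [(10.0 + d, 0.0), (5.0, 5.0), (0.0 - d, 10.0)] for d in (0.0, 0.2, -0.2)
    ]

    # Candidate entry/exit zones: cluster the endpoints of every trajectory.
    endpoints = [t[0] for t in trajectories] + [t[-1] for t in trajectories]
    zones = kmeans(endpoints, k=4)
    ```

    In the same spirit, stop zones could be found by clustering positions where tracked speed stays near zero, and paths by accumulating the intermediate trajectory points; the paper's actual unsupervised methods are given in the full text.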
    Original language: English
    Pages (from-to): 397-408
    Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
    Volume: 35
    Issue number: 3
    DOIs
    Publication status: Published - Jun 2005

    Bibliographical note

    Note: This work was supported by the Engineering and Physical Sciences Research Council [grant number GR/M58030].

    Keywords

    • Computer science and informatics
