Episodic reasoning for vision-based human action recognition

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretation and video analysis, few have recognized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by analyzing body postures alone. On the contrary, this task should be supported by profound knowledge of the nature of human agency and its tight connection to the reasons and motivations that explain it. The combination of this knowledge with knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has on improving the accuracy of a computer vision system for human action recognition. It also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning.
    Original language: English
    Journal: Scientific World Journal
    Volume: 2014
    Issue number: 270171
    DOIs
    Publication status: Published - May 2014

    Bibliographical note

    Note: This work was supported by Engineering and Physical Sciences Research Council [grant number EP/E001025/1].

    Keywords

    • Computer science and informatics

    Fingerprint

    • Human Pose and Action Recognition

      Nebel, J.-C. (CoPI), Makris, D. (CoPI), Kuo, P. (Researcher), Lewandowski, M. (Researcher), Velastin, S. A. (CoI), Nazir, S. (CoI), Santofimia, M. J. (CoI) & Del Rincon, J. M. (Researcher)

      5/09/07 – 21/06/19

      Project: Research & KE
