Action recognition using global spatio-temporal features derived from sparse representations

Guruprasad Somasundaram, Anoop Cherian, Vassilios Morellas, Nikolaos P Papanikolopoulos

Research output: Contribution to journal › Article

27 Citations (Scopus)

Abstract

Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude more computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider saliency in a global setting (over the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence that can be used in a classification setting. Experiments on several benchmark datasets in video-based action classification demonstrate that our approach performs competitively with the state of the art.
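To make the saliency-scoring idea concrete, the following is a minimal sketch, assuming scikit-learn's MiniBatchDictionaryLearning and SparseCoder as stand-ins for the paper's dictionary learning and sparse coding steps. The block size, dictionary size, and sparsity level are illustrative choices, not values from the paper, and sparse-coding reconstruction error is used here as a proxy for the (inverse) self-similarity score; the authors' exact measure may differ.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

def extract_blocks(video, bt=4, bh=16, bw=16):
    # Vectorize non-overlapping spatio-temporal blocks of shape (bt, bh, bw)
    # from a grayscale clip stored as a (T, H, W) array.
    T, H, W = video.shape
    blocks = [video[t:t + bt, y:y + bh, x:x + bw].ravel()
              for t in range(0, T - bt + 1, bt)
              for y in range(0, H - bh + 1, bh)
              for x in range(0, W - bw + 1, bw)]
    return np.asarray(blocks)

def saliency_scores(blocks, n_atoms=64, n_nonzero=5):
    # Learn one dictionary over ALL blocks of the video (the "global"
    # setting), sparse-code each block against it, and score saliency by
    # reconstruction error: blocks the dictionary explains poorly are the
    # least self-similar regions of the video.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256)
    dico.fit(blocks)
    # OMP assumes unit-norm atoms, so normalize the learned dictionary.
    D = dico.components_ / np.linalg.norm(dico.components_, axis=1,
                                          keepdims=True)
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(blocks)
    residual = blocks - codes @ D
    return np.linalg.norm(residual, axis=1)

video = np.random.rand(32, 128, 128)                  # stand-in for a real clip
blocks = extract_blocks(video)
scores = saliency_scores(blocks)
keep = np.argsort(scores)[-int(0.1 * len(blocks)):]  # top 10% most salient

In the full pipeline described in the abstract, descriptors such as HOG or region covariance would then be computed over the retained blocks and pooled in a bag-of-features representation for classification.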

Original language: English (US)
Pages (from-to): 1-13
Number of pages: 13
Journal: Computer Vision and Image Understanding
Volume: 123
DOIs: 10.1016/j.cviu.2014.01.002
State: Published - Jan 1 2014

Fingerprint

  • Glossaries
  • Computer vision
  • Image processing
  • Detectors
  • Experiments

Keywords

  • Action classification
  • Activity recognition
  • Global spatio-temporal features

Cite this

Action recognition using global spatio-temporal features derived from sparse representations. / Somasundaram, Guruprasad; Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos P.

In: Computer Vision and Image Understanding, Vol. 123, 01.01.2014, p. 1-13.

Research output: Contribution to journal › Article

@article{3d481a69728a4985925b6f315adf1907,
title = "Action recognition using global spatio-temporal features derived from sparse representations",
abstract = "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude more computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider saliency in a global setting (over the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence that can be used in a classification setting. Experiments on several benchmark datasets in video-based action classification demonstrate that our approach performs competitively with the state of the art.",
keywords = "Action classification, Activity recognition, Global spatio-temporal features",
author = "Somasundaram, Guruprasad and Cherian, Anoop and Morellas, Vassilios and Papanikolopoulos, {Nikolaos P}",
year = "2014",
month = "1",
day = "1",
doi = "10.1016/j.cviu.2014.01.002",
language = "English (US)",
volume = "123",
pages = "1--13",
journal = "Computer Vision and Image Understanding",
issn = "1077-3142",
publisher = "Academic Press Inc.",

}

TY - JOUR
T1 - Action recognition using global spatio-temporal features derived from sparse representations
AU - Somasundaram, Guruprasad
AU - Cherian, Anoop
AU - Morellas, Vassilios
AU - Papanikolopoulos, Nikolaos P
PY - 2014/1/1
Y1 - 2014/1/1
N2 - Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude more computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider saliency in a global setting (over the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence that can be used in a classification setting. Experiments on several benchmark datasets in video-based action classification demonstrate that our approach performs competitively with the state of the art.
AB - Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude more computational and storage resources. One way to alleviate this difficulty is to focus the computations on informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal self-similarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, and HOF), dictionary learning helps consider saliency in a global setting (over the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence that can be used in a classification setting. Experiments on several benchmark datasets in video-based action classification demonstrate that our approach performs competitively with the state of the art.
KW - Action classification
KW - Activity recognition
KW - Global spatio-temporal features
UR - http://www.scopus.com/inward/record.url?scp=84899639091&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84899639091&partnerID=8YFLogxK
U2 - 10.1016/j.cviu.2014.01.002
DO - 10.1016/j.cviu.2014.01.002
M3 - Article
VL - 123
SP - 1
EP - 13
JO - Computer Vision and Image Understanding
JF - Computer Vision and Image Understanding
SN - 1077-3142
ER -