Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos

Research output: Conference contribution

Abstract

This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e. the difference of two images taken by the vPTZ camera at different times with the same pan, tilt and zoom parameters. The CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Experiments on a publicly available dataset, performed in cross-validation, demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.
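The loop described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `render_view` and `centroid_predictor` are hypothetical stand-ins for the equirectangular-to-vPTZ reprojection and the trained CNN, respectively, and all names and dimensions are assumptions.

```python
import numpy as np

def motion_image(view_prev, view_curr):
    """Motion image: pixel-wise difference of two vPTZ views captured
    with identical pan/tilt/zoom parameters at two different times."""
    return view_curr.astype(np.float32) - view_prev.astype(np.float32)

def track_step(pan, tilt, zoom, render_view, predict_adjustment, t):
    """One tracking iteration: form the motion image and apply the
    predicted (d_pan, d_tilt, d_zoom) to re-center the target."""
    v_prev = render_view(pan, tilt, zoom, t - 1)
    v_curr = render_view(pan, tilt, zoom, t)
    d_pan, d_tilt, d_zoom = predict_adjustment(motion_image(v_prev, v_curr))
    return pan + d_pan, tilt + d_tilt, zoom + d_zoom

# --- Toy stand-ins below (not from the paper) ---

def render_view(pan, tilt, zoom, t):
    """Hypothetical vPTZ renderer: a single bright target drifting right
    inside a 64x64 view; stands in for reprojecting the 360-degree frame."""
    img = np.zeros((64, 64), dtype=np.uint8)
    row = int(32 - tilt)           # target's vertical world position is 0
    col = int(20 + 2 * t - pan)    # target drifts right at 2 px per step
    if 0 <= row < 64 and 0 <= col < 64:
        img[row, col] = 255
    return img

def centroid_predictor(m):
    """Stand-in for the CNN: locate the brightest (most recent) pixel of
    the motion image and return the offset that would center it."""
    row, col = np.unravel_index(np.argmax(m), m.shape)
    return float(col - 32), float(row - 32), 0.0
```

With these stand-ins, one call to `track_step` shifts the pan so that the target lands back at the view center; the paper replaces the hand-crafted predictor with a CNN trained to regress the parameter adjustments directly from the motion image.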
Original language: English
Host publication title: Image Analysis and Processing ICIAP 2019 - LNCS 11751
Pages: 36-47
Number of pages: 12
Publication status: Published - 2019

Publication series

Name: LECTURE NOTES IN COMPUTER SCIENCE

Fingerprint

Cameras
Neural networks
Experiments

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

La Cascia, M., & Lo Presti, L. (2019). Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. In Image Analysis and Processing ICIAP 2019 - LNCS 11751 (pp. 36-47). (LECTURE NOTES IN COMPUTER SCIENCE).

Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. / La Cascia, Marco; Lo Presti, Liliana.

Image Analysis and Processing ICIAP 2019 - LNCS 11751. 2019. pp. 36-47 (LECTURE NOTES IN COMPUTER SCIENCE).


La Cascia, M & Lo Presti, L 2019, Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. in Image Analysis and Processing ICIAP 2019 - LNCS 11751. LECTURE NOTES IN COMPUTER SCIENCE, pp. 36-47.
La Cascia M, Lo Presti L. Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. In Image Analysis and Processing ICIAP 2019 - LNCS 11751. 2019. pp. 36-47. (LECTURE NOTES IN COMPUTER SCIENCE).
La Cascia, Marco ; Lo Presti, Liliana. / Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. Image Analysis and Processing ICIAP 2019 - LNCS 11751. 2019. pp. 36-47 (LECTURE NOTES IN COMPUTER SCIENCE).
@inproceedings{ce004a85d7324866b85b11c0ad1db480,
title = "Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos",
abstract = "This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e. the difference of two images taken by the vPTZ camera at different times with the same pan, tilt and zoom parameters. The CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Experiments on a publicly available dataset, performed in cross-validation, demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.",
author = "{La Cascia}, Marco and {Lo Presti}, Liliana",
year = "2019",
language = "English",
isbn = "978-3-030-30641-0",
series = "LECTURE NOTES IN COMPUTER SCIENCE",
pages = "36--47",
booktitle = "Image Analysis and Processing ICIAP 2019 - LNCS 11751",

}

TY - GEN

T1 - Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos

AU - La Cascia, Marco

AU - Lo Presti, Liliana

PY - 2019

Y1 - 2019

N2 - This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e. the difference of two images taken by the vPTZ camera at different times with the same pan, tilt and zoom parameters. The CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Experiments on a publicly available dataset, performed in cross-validation, demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.

AB - This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e. the difference of two images taken by the vPTZ camera at different times with the same pan, tilt and zoom parameters. The CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Experiments on a publicly available dataset, performed in cross-validation, demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.

UR - http://hdl.handle.net/10447/385075

M3 - Conference contribution

SN - 978-3-030-30641-0

T3 - LECTURE NOTES IN COMPUTER SCIENCE

SP - 36

EP - 47

BT - Image Analysis and Processing ICIAP 2019 - LNCS 11751

ER -