Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes a deep convolutional neural network (CNN) for pedestrian tracking in 360° videos based on the target's motion. The tracking algorithm takes advantage of a virtual Pan-Tilt-Zoom (vPTZ) camera simulated by means of the 360° video. The CNN takes as input a motion image, i.e., the difference of two images captured by the vPTZ camera at different times with the same pan, tilt, and zoom parameters. The CNN predicts the vPTZ camera parameter adjustments required to keep the target at the center of the vPTZ camera view. Cross-validation experiments on a publicly available dataset demonstrate that the learned motion model generalizes and that the proposed tracking algorithm achieves state-of-the-art performance.
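The abstract describes a simple closed loop: render two views with identical vPTZ parameters, subtract them to obtain a motion image, and let the CNN regress the parameter adjustments that re-center the target. The Python sketch below illustrates that loop under stated assumptions; vptz_render, motion_cnn, and all parameter names are hypothetical placeholders, not the authors' actual code or API.

    import numpy as np

    # Hypothetical sketch of one tracking step, as outlined in the abstract.
    # `vptz_render` and `motion_cnn` are assumed callables, not the paper's API.
    def track_step(frame_prev, frame_curr, pan, tilt, zoom, vptz_render, motion_cnn):
        # Render two perspective views from consecutive 360-degree frames
        # with the SAME pan/tilt/zoom, so the static scene cancels in the
        # difference and only the target's motion survives.
        view_prev = vptz_render(frame_prev, pan, tilt, zoom)
        view_curr = vptz_render(frame_curr, pan, tilt, zoom)

        # Motion image: pixel-wise difference of the two views.
        motion_image = view_curr.astype(np.float32) - view_prev.astype(np.float32)

        # The CNN regresses the adjustments needed to keep the target centered.
        d_pan, d_tilt, d_zoom = motion_cnn(motion_image[None, ...])

        # Updated vPTZ parameters for processing the next frame.
        return pan + d_pan, tilt + d_tilt, zoom + d_zoom

Applying track_step once per incoming frame, with the returned parameters fed back in, yields the tracking loop the abstract describes.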
Original language: English
Title of host publication: Image Analysis and Processing - ICIAP 2019, LNCS 11751
Pages: 36-47
Number of pages: 12
Publication status: Published - 2019

Publication series

Name: Lecture Notes in Computer Science

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)


Cite this

La Cascia, M., & Lo Presti, L. (2019). Deep Motion Model for Pedestrian Tracking in 360 Degrees Videos. In Image Analysis and Processing - ICIAP 2019, LNCS 11751 (pp. 36-47). (Lecture Notes in Computer Science).