Dense Trajectories and DHOG for Classification of Viewpoints from Echocardiogram Videos

Computational and Mathematical Methods in Medicine, Volume 2016, Article ID 9610192, 7 pages

Research Article
School of Physics and Information Engineering, Fuzhou University, Fuzhou 350116, China

Received 15 October 2015; Revised 19 January 2016; Accepted 31 January 2016

Academic Editor: Syoji Kobashi

Copyright © 2016 Liqin Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In clinical computer-aided diagnosis of echocardiograms, an important step is to automatically classify echocardiography videos captured from different angles and different regions. We propose an echocardiography video classification algorithm based on dense trajectories and difference histograms of oriented gradients (DHOG). First, we sample feature points on a dense grid in each frame of the echocardiography sequence and then track these points by applying dense optical flow. To overcome the rapid and irregular motion in echocardiography videos and obtain more robust tracking results, we also design a trajectory description algorithm that uses the derivative of the optical flow to obtain motion trajectory information and combines different features (e.g., trajectory shape, DHOG, HOF, and MBH) with the embedded structural information of a spatiotemporal pyramid. To avoid the "curse of dimensionality," we apply the Fisher vector to reduce the dimensionality of the feature description, followed by a linear SVM classifier to improve the final classification result. The average accuracy of echocardiography video classification is 77.12% for all eight viewpoints and 100% for the three primary viewpoints.
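The grid-sampling and tracking step above can be sketched in a toy NumPy-only form. The function names `track_dense_points` and `trajectory_shape` are hypothetical, and the flow fields here are assumed to be precomputed; a full dense-trajectories implementation would estimate optical flow from the video frames (e.g., with Farneback's method), median-filter the flow before tracking, and prune static or erratic trajectories:

```python
import numpy as np

def track_dense_points(flows, grid_step=5, traj_len=15):
    """Toy dense-trajectory tracker: sample points on a regular grid and
    follow them through a list of dense flow fields of shape (H, W, 2)."""
    h, w, _ = flows[0].shape
    ys, xs = np.mgrid[grid_step // 2:h:grid_step, grid_step // 2:w:grid_step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (N, 2) as (x, y)
    trajs = [pts.copy()]
    for flow in flows[:traj_len]:
        # Look up the flow at each point's nearest pixel and move the point.
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]
        trajs.append(pts.copy())
    return np.stack(trajs)  # (traj_len + 1, N, 2)

def trajectory_shape(trajs):
    """Trajectory-shape descriptor: per-step displacement vectors,
    normalized by each trajectory's total displacement magnitude."""
    disp = np.diff(trajs, axis=0)                    # (T, N, 2)
    total = np.linalg.norm(disp, axis=2).sum(axis=0)  # (N,)
    return disp / np.maximum(total, 1e-8)[None, :, None]
```

With a uniform rightward-and-downward flow, every tracked point translates by the same amount each frame, and the shape descriptor reduces to identical unit-sum displacement vectors, which illustrates why the normalization makes the descriptor invariant to motion magnitude.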

Authors: Liqin Huang, Xiangyu Zhang, and Wei Li


