20. L. Itti and P. Baldi, A principled approach to detecting surprising events in video, in Proc. IEEE
Conference on Computer Vision and Pattern Recognition, 2005, pp. 631-637.
21. L. Itti, G. Rees, and J. Tsotsos, Neurobiology of Attention. San Diego: Elsevier, 2005.
22. L. Itti, CRCNS data sharing: Eye movements during free-viewing of natural videos, in Collaborative Research in Computational Neuroscience Annual Meeting, 2008.
23. L. Itti and C. Koch, Feature combination strategies for saliency-based visual attention systems, Journal of Electronic Imaging, vol. 10, no. 1, pp. 161-169, 2001.
24. L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene
analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11,
pp. 1254-1259, 1998.
25. W. Jiang, S. F. Chang, and A. Loui, Context-based concept fusion with boosted conditional random fields, in Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, 2007, pp. 949-952.
26. S. Jiang, Y. Tian, Q. Huang, T. Huang, and W. Gao, Content-based video semantic analysis, in Semantic Mining Technologies for Multimedia Databases (edited by Tao, Xu, and Li), IGI Global, 2009.
27. Y. G. Jiang, J. Wang, S. F. Chang, and C. W. Ngo, Domain adaptive semantic diffusion for large scale context-based video annotation, in Proc. IEEE Int. Conf. Computer Vision, 2009, pp. 1-8.
28. L. Kennedy and S. F. Chang, A reranking approach for context-based concept fusion in video indexing and retrieval, in Proc. IEEE Int. Conf. on Image and Video Retrieval, 2007, pp. 333-340.
29. W. Kienzle, F. A. Wichmann, B. Schölkopf, and M. O. Franz, A nonparametric approach to bottom-up visual saliency, in Advances in Neural Information Processing Systems, 2007, pp. 689-696.
30. W. Kienzle, B. Schölkopf, F. A. Wichmann, and M. O. Franz, How to find interesting locations in video: a spatiotemporal interest point detector learned from human eye movements, in 29th DAGM Symposium, 2007, pp. 405-414.
31. M. Li, Y. T. Zheng, S. X. Lin, Y. D. Zhang, and T.-S. Chua, Multimedia evidence fusion for video concept detection via OWA operator, in Proc. Advances in Multimedia Modeling, 2009, pp. 208-216.
32. H. Liu, S. Jiang, Q. Huang, C. Xu, and W. Gao, Region-based visual attention analysis with its application in image browsing on small displays, in ACM International Conference on Multimedia, 2007, pp. 305-308.
33. T. Liu, J. Sun, N.-N. Zheng, X. Tang, and H.-Y. Shum, Learning to detect a salient object, in
IEEE Conference on Computer Vision and Pattern Recognition, 2007.
34. T. Liu, N. Zheng, W. Ding, and Z. Yuan, Video attention: Learning to detect a salient object
sequence, in IEEE International Conference on Pattern Recognition, 2008.
35. Y. Liu, F. Wu, Y. Zhuang, and J. Xiao, Active post-refined multimodality video semantic concept detection with tensor representation, in Proc. ACM Multimedia, 2008, pp. 91-100.
36. K. H. Liu, M. F. Weng, C. Y. Tseng, Y. Y. Chuang, and M. S. Chen, Association and temporal rule mining for post-processing of semantic concept detection in video, IEEE Trans. Multimedia, pp. 240-251, 2008.
37. Y.-F. Ma, X.-S. Hua, L. Lu, and H.-J. Zhang, A generic framework of user attention model
and its application in video summarization, IEEE Transactions on Multimedia, vol. 7, no. 5,
pp. 907-919, 2005.
38. S. Marat, T. H. Phuoc, L. Granjon, N. Guyader, D. Pellerin, and A. Guérin-Dugué, Modelling spatio-temporal saliency to predict gaze direction for short videos, International Journal of Computer Vision, vol. 82, no. 3, pp. 231-243, 2009.
39. G. Miao, G. Zhu, S. Jiang, Q. Huang, C. Xu, and W. Gao, A real-time score detection and recognition approach for broadcast basketball video, in Proc. IEEE Int. Conf. Multimedia and Expo, 2007, pp. 1691-1694.
40. F. Monay and D. Gatica-Perez, Modeling semantic aspects for cross-media image indexing, IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1802-1817, Oct. 2007.