VideoMAP is an interactive video search engine that bridges users and large-scale video databases, aiming to predict users' search intentions efficiently and accurately over a vast video collection. It integrates state-of-the-art techniques developed at the Multimedia Computing Group, Institute of Computing Technology, Chinese Academy of Sciences. To demonstrate its competitiveness, VideoMAP participated in the 2009 VideOlympics showcase at CIVR and the 2009 TRECVID Interactive Search task.
The following three demos give the details of VideoMAP and address three key questions:
1. Why is it named VideoMAP? What is the role of the MAP?
2. How to make use of concept detectors for video retrieval? (CONCEPT)
3. How to help users use VideoMAP easily? (USERS)
2.1 Why is it named VideoMAP? What is the role of the MAP?
As shown in Demo 1, VideoMAP visualizes the whole video database as a MAP behind the recommended shots (Figure 1), where each row is a video and each shot in the video is represented by a thumbnail of its keyframe. Based on the community discovery result, the videos and shots are arranged so that similar ones are adjacent. Since the MAP effectively visualizes the whole video database and gives users a global view of it, the system is named VideoMAP. Furthermore, the MAP plays a central role in interactive search, as it underpins two feedback strategies.
Community Feedback: this algorithm is based on community mining in a multi-modal correlation network, which discovers tightly interrelated groups of video shots with latent consistent semantics. The framework is illustrated in Figure 1, and the corresponding publication will appear soon.
Figure 1. The framework of community feedback algorithm
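The community-mining algorithm itself is described in the forthcoming publication; as a rough illustration of the underlying idea only, the sketch below (all names and thresholds hypothetical) keeps strong multi-modal similarity links and treats the resulting connected groups of shots as communities:

```python
def find_communities(similarity, threshold=0.5):
    """Toy stand-in for community mining on a correlation network:
    keep only shot pairs whose multi-modal similarity exceeds a
    threshold, then return connected components as communities of
    shots with (hopefully) consistent semantics."""
    neighbours = {}
    for (a, b), sim in similarity.items():
        neighbours.setdefault(a, set())
        neighbours.setdefault(b, set())
        if sim >= threshold:               # keep only strong correlations
            neighbours[a].add(b)
            neighbours[b].add(a)
    seen, communities = set(), []
    for node in sorted(neighbours):        # deterministic traversal order
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                       # depth-first component search
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(neighbours[n] - comp)
        seen |= comp
        communities.append(sorted(comp))
    return communities

# Two tightly linked shot groups joined by one weak correlation:
sims = {("s1", "s2"): 0.9, ("s2", "s3"): 0.8, ("s3", "s4"): 0.2,
        ("s4", "s5"): 0.85, ("s5", "s6"): 0.7}
print(find_communities(sims))  # [['s1', 's2', 's3'], ['s4', 's5', 's6']]
```

Lowering the threshold merges communities across the weak bridge edge, which is the knob the real algorithm would tune far more carefully.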
Active Annotation: as noted above, adjacent shots in the MAP are similar and very likely to share the same label. Therefore, the distribution of labeled samples on the MAP can guide users to quickly locate regions of potentially relevant samples and annotate them as quickly as possible, as illustrated in Figure 2. This strategy is named Active Annotation.
Figure 2. The User Interface of Active Annotation
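The heuristic behind Active Annotation — shots adjacent on the MAP likely share a label — can be sketched as a simple neighbourhood ranking. This is an illustrative reconstruction, not the system's actual code; the grid coordinates and function names are assumptions:

```python
def suggest_annotations(positions, labels, radius=1):
    """Rank unlabeled shots by how many already-relevant shots sit
    within `radius` grid cells of them on the MAP, so the user is
    steered to regions where positives are likely to cluster."""
    relevant = [positions[s] for s, lab in labels.items() if lab == 1]
    scored = []
    for shot, (r, c) in positions.items():
        if shot in labels:                 # skip shots the user already judged
            continue
        near = sum(1 for (rr, cc) in relevant
                   if abs(rr - r) <= radius and abs(cc - c) <= radius)
        scored.append((shot, near))
    scored.sort(key=lambda x: (-x[1], x[0]))  # most positives nearby first
    return [s for s, _ in scored]

positions = {"a": (0, 0), "b": (0, 1), "c": (0, 2), "d": (5, 5)}
labels = {"a": 1}                          # "a" judged relevant by the user
print(suggest_annotations(positions, labels))  # ['b', 'c', 'd']
```

Shot "b", adjacent to the labeled positive, is suggested before the distant "d", mirroring how the labeled-sample distribution on the MAP guides the user.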
2.2 How to make use of concept detectors for video retrieval? (CONCEPT)
Using concept detectors for video retrieval is widely regarded as one of the most promising approaches to bridging the “semantic gap”. As shown in Demo 2, VideoMAP provides two concept-based feedback strategies: Graph-Based Multi-Space Semantic Correlation Diffusion (GMSSCD) and Distribution-Based Concept Selection (DBCS). Moreover, to make the most of both the concept detectors and the users’ effort, VideoMAP also provides a concept tag interface that lets users decide whether the recommended concepts are relevant. The interface of Concept Tag is shown in Figure 3.
Graph-based Feedback: Graph-based Feedback is based on ranking-on-manifolds propagation, which exploits the relationship between the query and concepts for video retrieval. Compared with traditional CBVR approaches, GMSSCD performs multi-space correlation diffusion by integrating the textual, visual and concept spaces into one uniform correlation graph, so the expansion result is more robust to noise. Moreover, GMSSCD has a query-dependent mechanism that dynamically updates the diffusion graph by embedding the query as additional nodes, making it well suited to the interactive case. Figure 3 shows the framework of GMSSCD, and the corresponding publication will appear soon.
Figure 3. The framework of GMSSCD feedback algorithm
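As a minimal sketch of the ranking-on-manifolds idea behind GMSSCD (not the published multi-space algorithm), the query can be embedded as a node in a correlation graph and relevance scores diffused from it until they stabilize; the graph, weights, and parameter values below are illustrative:

```python
def manifold_ranking(graph, query, alpha=0.85, iters=100):
    """graph: {node: {neighbour: weight}}, a symmetric correlation
    graph; query: the embedded query node. Iterates the diffusion
    f <- alpha * P f + (1 - alpha) * y, where y is 1 at the query,
    and returns the converged relevance scores."""
    y = {n: 1.0 if n == query else 0.0 for n in graph}
    f = dict(y)
    deg = {n: sum(graph[n].values()) or 1.0 for n in graph}
    for _ in range(iters):
        # each node gathers weight-normalised score from its neighbours,
        # plus a constant injection at the query node
        f = {n: alpha * sum(w / deg[m] * f[m] for m, w in graph[n].items())
                + (1 - alpha) * y[n]
             for n in graph}
    return f

# Query node "q" linked into a tiny textual/visual/concept graph:
graph = {
    "q":  {"t1": 1.0, "c1": 0.5},
    "t1": {"q": 1.0, "v1": 0.8},
    "c1": {"q": 0.5, "v2": 0.6},
    "v1": {"t1": 0.8},
    "v2": {"c1": 0.6},
}
scores = manifold_ranking(graph, "q")
# the shot reached through the strong textual link ("v1")
# ends up ranked above the weakly connected "v2"
```

Because the query enters only as extra nodes and a restart vector, re-running the diffusion after each feedback round is cheap, which is what makes this family of methods attractive for the interactive case.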
DBCS Feedback: Distribution-Based Concept Selection (DBCS) is a method for query-to-concept mapping that selects the most discriminative, rather than the most semantically or statistically relevant, concepts for video retrieval. The targeted concepts are those whose detection-score distributions differ widely between the relevant and irrelevant collections but remain stable within each, as shown in Figure 4. The details of DBCS can be found in our recent publications.
Figure 4. The main idea of DBCS algorithm
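The published measure is in the DBCS paper; as a stand-in for illustration only, the selection criterion above can be approximated by a Fisher-style ratio — between-group mean separation over within-group spread of the detection scores:

```python
from statistics import mean, pstdev

def dbcs_select(detector_scores, relevant, top_k=2):
    """Rank concepts by how far apart their detection-score
    distributions are between relevant and irrelevant shots,
    relative to the spread inside each group (an illustrative
    Fisher-style criterion, not the published DBCS measure)."""
    ranked = []
    for concept, scores in detector_scores.items():
        rel = [s for shot, s in scores.items() if shot in relevant]
        irr = [s for shot, s in scores.items() if shot not in relevant]
        spread = pstdev(rel) + pstdev(irr) + 1e-9   # stability within groups
        ranked.append((abs(mean(rel) - mean(irr)) / spread, concept))
    ranked.sort(reverse=True)                       # most discriminative first
    return [c for _, c in ranked[:top_k]]

detector_scores = {
    "car":    {"s1": 0.9, "s2": 0.8, "s3": 0.1, "s4": 0.2},  # separates well
    "person": {"s1": 0.5, "s2": 0.6, "s3": 0.5, "s4": 0.6},  # uninformative
}
relevant = {"s1", "s2"}
print(dbcs_select(detector_scores, relevant, top_k=1))  # ['car']
```

Note that "person" fires at similar levels on both collections, so despite being a perfectly good detector it is not selected — exactly the "discriminative rather than relevant" intuition.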
2.3 How to help users use VideoMAP easily? (USERS)
Like most previous interactive systems, VideoMAP provides several feedback strategies to deal with different kinds of queries. However, users with little background knowledge of these strategies may be confused about which one to choose at which point. As shown in Demo 3, VideoMAP therefore provides an automatic feedback strategy recommender, which gives the naïve user a sensible suggestion based on the current session state: (1) the percentage of relevant samples in the last pages; (2) the quality of the current strategy; (3) the percentage of relevant samples among all labeled samples; and (4) the current feedback strategy.
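A recommender driven by the four factors above could be as simple as a small rule table; the thresholds and strategy names here are illustrative assumptions, not the system's actual rules:

```python
def recommend_strategy(last_page_precision, current_strategy_gain,
                       overall_precision, current_strategy):
    """Toy rule-based recommender over the four session factors:
    recent precision, gain of the current strategy, overall labeled
    precision, and the strategy currently in use."""
    if last_page_precision < 0.1 and overall_precision < 0.1:
        # almost no positives found yet: fall back to asking the user
        # about concepts instead of propagating from scarce labels
        return "concept_tag"
    if current_strategy_gain < 0.05:
        # current strategy has stalled: switch to the other family
        return ("community_feedback"
                if current_strategy != "community_feedback"
                else "graph_based_feedback")
    return current_strategy          # still productive: keep going

print(recommend_strategy(0.4, 0.2, 0.3, "graph_based_feedback"))
# keeps the still-productive current strategy
```

The point of such a recommender is not the specific thresholds but that the naïve user never has to reason about the strategies themselves.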
Prof. Jingtao Li, Prof. Yongdong Zhang, Dr. Juan Cao
Lei Bao, Banlan Feng, Lin Pang
Xiufeng Hua, Liang Ma
If you have any questions about this system, please contact Dr. Juan Cao (email@example.com). All advice and suggestions are welcome.
J. Cao, Y.D. Zhang, B.L. Feng, X.F. Hua, L. Bao, and X. Zhang. MCG-ICT-CAS TRECVID 2008 search task report. Proceedings of TRECVID, 2008.
J. Cao, Y.D. Zhang, J.B. Guo, L. Bao, and J.T. Li. VideoMap: An Interactive Video Retrieval System of MCG-ICT-CAS. ACM International Conference on Image and Video Retrieval (CIVR), Santorini, 2009.
J. Cao, H.F. Jing, C.W. Ngo, and Y.D. Zhang. Distribution-based Concept Selection for Concept-based Video Retrieval. ACM International Conference on Multimedia (MM), Beijing, 2009.
B.L. Feng, J. Cao, X.G. Bao, L. Bao, Y.D. Zhang, S.X. Lin, and X.C. Yun. Graph-based Multi-Space Semantic Correlation Propagation for Video Retrieval. The Visual Computer: International Journal of Computer Graphics, 2010.