MCG-RGBD: A Benchmark RGB-D Based Human Orientation Estimation Database

 

 

1. Introduction to MCG-RGBD

The goal of MCG-RGBD is to construct a general database for research on RGB-D based human orientation estimation. The dataset was captured with an RGB-D sensor (Kinect). It consists of 10 RGB-D video surveillance sequences, recorded in three different scenes (a meeting room, a corridor, and an entrance), with 4000 frames and 11 different persons. Some examples are shown in Fig.1. The sequences are stored as ONI files, which can be opened and edited with the OpenNI software [1]. To better imitate real-world scenarios, the dataset includes a wide variety of poses, such as standing, squatting, jumping, walking, running, rotating, waving hands, and hugging.
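
As a rough illustration of how the recordings can be accessed programmatically, the following sketch steps through the depth frames of one sequence in Python. It assumes the unofficial primesense Python bindings for OpenNI2 and the OpenNI2 runtime are installed, and uses the placeholder file name "sequence.oni"; none of these names are part of the dataset itself.

# Sketch: iterate over the depth frames of an ONI recording.
# Assumes the `primesense` OpenNI2 Python bindings and the OpenNI2
# runtime are available; "sequence.oni" is a placeholder file name.
import numpy as np
from primesense import openni2

openni2.initialize()                              # load the OpenNI2 runtime
dev = openni2.Device.open_file(b"sequence.oni")   # open the recording
depth_stream = dev.create_depth_stream()
depth_stream.start()

for _ in range(10):                               # read the first 10 frames
    frame = depth_stream.read_frame()
    depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
    depth = depth.reshape(frame.height, frame.width)  # Kinect depth, in mm
    print(depth.shape, int(depth.max()))

depth_stream.stop()
openni2.unload()

The color stream can be read in the same way via create_color_stream().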

The dataset is annotated with Poser 8, a 3D CGI software package [2], following a labeling method similar to [3]. As in other 2D or 3D human body orientation datasets [4], [5], we focus only on estimating the rotation angle around the axis perpendicular to the ground plane. Each frame is annotated by two to three people, and the final annotation is the average of their values. In total we have 2700 annotated person instances, and 5400 examples after adding their horizontal reflections. The distribution of the dataset is shown in Fig.2.
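
To make the averaging and reflection steps concrete, below is a small Python sketch (not our actual annotation tooling) of how a few per-frame angle annotations can be combined with a circular mean, and how a label can be mirrored for the horizontally reflected example. The function names, the example values, and the convention that 0 degrees means facing the sensor are assumptions made here for illustration.

import math

def circular_mean(angles_deg):
    # Average orientation annotations on the circle (degrees), so that
    # e.g. 359 and 1 average to 0 rather than 180.
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

def reflect_orientation(angle_deg):
    # A horizontal (left-right) image flip mirrors the orientation about
    # the camera's viewing axis: angle -> -angle (mod 360), assuming
    # 0 degrees means facing the sensor.
    return (-angle_deg) % 360.0

annotations = [85.0, 92.0, 88.0]        # hypothetical labels from 3 annotators
label = circular_mean(annotations)      # about 88.3 degrees
mirrored = reflect_orientation(label)   # label for the reflected example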

Fig.1. Examples of human body orientation. For convenience of analysis, the body orientation space is divided into 8 non-overlapping partitions (S, SW, SE, W, E, NW, NE, and N). The RGB-D sensor is located to the south of the people.
Fig.2. The distribution of our human body orientation dataset. The red number is the actual number of examples in each class.
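
For reference, here is a minimal Python sketch of how a continuous orientation angle can be discretised into the eight non-overlapping compass partitions of Fig.1. The convention that 0 degrees corresponds to S (facing the sensor) and that angles increase counter-clockwise is an assumption made for this illustration, not part of the dataset definition.

def orientation_to_class(angle_deg):
    # Map an angle in [0, 360) to one of 8 compass classes, each 45 degrees
    # wide and centred on its label. Assumed convention: 0 deg = S (facing
    # the sensor), angles increase counter-clockwise (S -> SE -> E -> ...).
    labels = ["S", "SE", "E", "NE", "N", "NW", "W", "SW"]
    idx = int(((angle_deg % 360.0) + 22.5) // 45.0) % 8
    return labels[idx]

assert orientation_to_class(0.0) == "S"
assert orientation_to_class(90.0) == "E"
assert orientation_to_class(210.0) == "NW"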

2. Downloads

To obtain the raw videos and ground truth, please email your full name and affiliation to Wu Liu (liuwu@ict.ac.cn). Once your information has been verified, we will send you the zipped files. We ask for your information only to make sure the dataset is used for non-commercial research purposes; we will not give it to any third party or publish it anywhere.

3. Feedback

If you have any questions about the MCG-RGBD dataset, please contact Wu Liu (liuwu@ict.ac.cn). We are continuously striving to improve the dataset and greatly appreciate any comments and suggestions.

4. Supplementary material

On this page, you will find supplementary material for our papers on estimating human body orientation from an RGB-D camera.

Estimating Human Body Orientation from RGB-D Sensors

You can watch several videos showing the results of our method.

VIDEO1 demonstrates our approach in a single-person scenario, where a girl runs around. The body orientation is marked by an arrow inside an ellipse. The results show that our approach can accurately estimate body orientation over the full 360-degree range.


VIDEO2 demonstrates our approach in a multi-person scenario, where two people hug and then separate. The estimation results remain accurate even when the people are very close, which indicates that our approach is robust to interference between multiple people.


5. References

[1] OpenNI. Available: http://openni.org/

[2] Poser. Available: http://poser.smithmicro.com/

[3] S. Maji, L. Bourdev, and J. Malik, "Action Recognition from a Distributed Representation of Pose and Appearance," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.

[4] C. Chen and J. Odobez, "We are not contortionists: Coupled adaptive learning for head and body orientation estimation in surveillance video," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 1544-1551.

[5] M. Andriluka, S. Roth, and B. Schiele, "Monocular 3D pose estimation and tracking by detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 623-630.


   
Copyright © Multimedia Computing Group, Institute of Computing Technology, Chinese Academy of Sciences
TEL: 10-62600616 FAX: 10-62611846 Email: txia@ict.ac.cn