MCG-RGBD: A Benchmark for RGB-D based Human Orientation Estimation
Introduction of MCG-RGBD
The goal of MCG-RGBD is to construct a general database for research on RGB-D based human orientation estimation.
The dataset is captured with an RGB-D sensor (Microsoft Kinect). It consists of 10 RGB-D video surveillance sequences, captured at three different scenes (a meeting room, a corridor, and an entrance), with 4,000 frames and 11 different persons. Some examples can be seen in Fig.1. The sequences are stored as ONI files, which can be opened and edited with the OpenNI software. To better imitate real-world scenarios, the dataset includes a wide variety of poses, such as standing, squatting, jumping, walking, running, rotating, waving hands, and hugging.
The dataset is annotated with Poser 8, a 3D CGI tool, using a labeling method similar to prior work. Like other 2D and 3D human body orientation datasets, we focus only on estimating the angle around the axis perpendicular to the ground plane. Each frame is annotated by two to three people, and the final annotation is their average value. In total we have 2,700 annotated examples, doubled to 5,400 by adding their horizontal reflections. The distribution of the dataset is shown in Fig.2.
Fig.1. Examples of Human Body Orientation. For convenience of analysis, the body orientation space is divided into 8 non-overlapping partitions (N, NE, E, SE, S, SW, W, and NW). The RGB-D sensor is located to the south of the subjects.
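The 8-way partition of the orientation space can be sketched as follows. The convention assumed here (0 degrees = N, angles increasing clockwise, each bin spanning 45 degrees centred on its compass direction) is for illustration only and is not specified by the dataset release.

```python
def orientation_bin(angle_deg):
    """Map a continuous orientation angle to one of 8 compass bins.

    Assumes 0 degrees = N, clockwise angles, and 45-degree bins
    centred on each compass direction.
    """
    bins = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    # Shift by half a bin width so each sector is centred on its label.
    idx = int(((angle_deg % 360) + 22.5) // 45) % 8
    return bins[idx]
```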
Fig.2. The distribution of our human body orientation dataset. The red numbers are the actual number of examples in each class.
For the raw video and ground truth, please email your full name and affiliation to Wu Liu (email@example.com). Once your information has been verified, we will send you the zipped files. We ask for your information only to ensure that the dataset is used for non-commercial research purposes; we will not give it to any third parties or publish it anywhere.
If you have any questions about the MCG-RGBD dataset, please contact Wu Liu (firstname.lastname@example.org). We are continuously striving to improve the dataset and greatly appreciate any comments and suggestions.
On this page, you will find supplementary material for our papers on estimating human body orientation from RGB-D cameras.
Estimating Human Body Orientation from RGB-D Sensors
You can watch several videos showing the results of our method.
VIDEO1 demonstrates our approach in a single-person scenario, where a girl runs around. The body orientation is marked by an arrow in an ellipse. The results show that our approach can accurately estimate body orientation over the full 360-degree range.
VIDEO2 demonstrates our approach in a multi-person scenario, where two people hug and then separate. The estimates remain accurate even when the people are very close, which indicates that our approach is robust to multi-person interference.
S. Maji, L. Bourdev, and J. Malik, “Action recognition from a distributed representation of pose and appearance,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
C. Chen and J. Odobez, “We are not contortionists: Coupled adaptive learning for head and body orientation estimation in surveillance video,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 1544–1551.
M. Andriluka, S. Roth, and B. Schiele, “Monocular 3D pose estimation and tracking by detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 623–630.