Overview
We collected a video dataset, termed ChokePoint,
designed for experiments in person
identification/verification under real-world
surveillance conditions using existing technologies.
An array of three cameras was placed above several
portals (natural choke points in terms of pedestrian
traffic) to capture subjects walking through each
portal in a natural way (see example).
While a person is walking through a portal, a sequence
of face images (i.e. a face set) can be captured.
Faces in such sets exhibit variations in illumination
conditions, pose and sharpness, as well as
misalignment due to automatic face
localisation/detection.
Due to the three camera configuration, one of the
cameras is likely to capture a face set where a subset
of the faces is near-frontal.
The dataset consists of 25 subjects (19 male and 6
female) in portal 1 and 29 subjects (23 male and 6
female) in portal 2.
The recordings of portal 1 and portal 2 were made one
month apart.
The dataset has a frame rate of 30 fps and an image
resolution of 800×600 pixels.
In total, the dataset consists of 48 video sequences
and 64,204 face images. In all sequences,
only one subject is present in the image at a time.
The first 100 frames of each sequence are reserved for
background modelling, where no foreground objects are
present.
Each sequence was named according to the recording
conditions (e.g. P2E_S1_C3), where P, S, and C stand
for portal, sequence and camera, respectively.
E and L indicate subjects either entering or leaving
the portal. The numbers indicate the respective
portal, sequence and camera label. For example,
P2L_S1_C3 indicates that the recording was done in
Portal 2, with people leaving the portal, and captured
by camera 3 in the first recorded sequence.
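The naming scheme is regular enough to parse mechanically. A minimal sketch in Python (the `parse_sequence_name` helper is our own illustration, not part of the dataset's tools):

```python
import re

# Pattern for ChokePoint sequence names such as "P2L_S1_C3":
# portal number, entering/leaving flag, sequence number, camera number.
SEQ_NAME = re.compile(r"^P(?P<portal>\d)(?P<direction>[EL])_S(?P<sequence>\d)_C(?P<camera>\d)$")

def parse_sequence_name(name):
    """Return (portal, direction, sequence, camera) for a sequence name."""
    m = SEQ_NAME.match(name)
    if m is None:
        raise ValueError(f"not a valid ChokePoint sequence name: {name!r}")
    return (int(m.group("portal")),
            "entering" if m.group("direction") == "E" else "leaving",
            int(m.group("sequence")),
            int(m.group("camera")))

print(parse_sequence_name("P2L_S1_C3"))  # (2, 'leaving', 1, 3)
```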
To pose a more challenging real-world surveillance
problem, two sequences (P2E_S5 and P2L_S5) were
recorded with a crowded scenario. In addition to the
aforementioned variations, these sequences contain
continuous occlusion, which presents challenges for
identity tracking and face verification.
This dataset can be applied to, but is not limited to,
the following research areas:
- person re-identification
- image set matching
- face quality measurement
- face clustering
- 3D face reconstruction
- pedestrian/face tracking
- background estimation and subtraction
Example
An example of the recording setup used for the
ChokePoint dataset. A camera rig containing three
cameras was placed just above a door, simultaneously
recording the entry of a person from three viewpoints.
The multiple viewpoints accommodate variations in
walking direction, facilitating the capture of a
near-frontal face by at least one of the cameras.
[Figure: camera rig with Camera 1, Camera 2 and Camera 3 mounted above a doorway]
Example shots from the ChokePoint dataset, showing
portals with various backgrounds.
[Example shots: P1E - Camera 1, P1L - Camera 1, P2E - Camera 2, P2L - Camera 2, P2E_S5 - Camera 2, P2L_S5 - Camera 2]
Protocol
We designed a baseline verification protocol
(protocol_baseline) for this dataset.
In this protocol, video sequences are divided into two
groups (G1 and G2),
where each group plays the role of development set and
evaluation set in turn. In each group, all possible
genuine and impostor pairs are generated.
Parameters and thresholds are first learned on the
development set and then applied to the evaluation
set. The average verification rate is used for
reporting results.
G1: P1E_S1_C1, P1E_S2_C2, P2E_S2_C2, P2E_S1_C3, P1L_S1_C1, P1L_S2_C2, P2L_S2_C2, P2L_S1_C1
G2: P1E_S3_C3, P1E_S4_C1, P2E_S4_C2, P2E_S3_C1, P1L_S3_C3, P1L_S4_C1, P2L_S4_C2, P2L_S3_C3
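Generating all genuine and impostor pairs within a group can be sketched as follows. This is a minimal illustration in Python; the subject IDs and the `faces` mapping are hypothetical, not taken from the dataset:

```python
from itertools import combinations

def make_pairs(face_sets):
    """Given {subject_id: [face_set, ...]}, return (genuine, impostor) pairs.

    A genuine pair matches two face sets of the same subject; an
    impostor pair matches face sets of two different subjects.
    """
    genuine, impostor = [], []
    items = [(sid, fs) for sid, sets in face_sets.items() for fs in sets]
    for (sid_a, fs_a), (sid_b, fs_b) in combinations(items, 2):
        if sid_a == sid_b:
            genuine.append((fs_a, fs_b))
        else:
            impostor.append((fs_a, fs_b))
    return genuine, impostor

# Toy example with two face sets per subject:
faces = {"ID01": ["A1", "A2"], "ID02": ["B1", "B2"]}
gen, imp = make_pairs(faces)
print(len(gen), len(imp))  # 2 genuine pairs, 4 impostor pairs
```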
To study the effect of person verification under
different environments and different time intervals
between recordings, the following case studies can be
considered:
- case_study_1
  - indoor scene only
  - short time interval
  G1: P1E_S1_C1, P1E_S2_C2, P1L_S1_C1, P1L_S2_C2
  G2: P1E_S3_C3, P1E_S4_C1, P1L_S3_C3, P1L_S4_C1
- case_study_2
  - indoor and outdoor scene
  - short time interval
  G1: P2E_S2_C2, P2E_S1_C3, P2L_S2_C2, P2L_S1_C1
  G2: P2E_S4_C2, P2E_S3_C1, P2L_S4_C2, P2L_S3_C3
- case_study_3
  - indoor and outdoor scene
  - long time interval
  - genuine and impostor pairs are generated by
    matching each sequence of Set 1 with each sequence
    of Set 2
  G1:
    Set 1: P1E_S1_C1, P1E_S2_C2, P1L_S1_C1, P1L_S2_C2
    Set 2: P2E_S2_C2, P2E_S1_C3, P2L_S2_C2, P2L_S1_C1
  G2:
    Set 1: P1E_S3_C3, P1E_S4_C1, P1L_S3_C3, P1L_S4_C1
    Set 2: P2E_S4_C2, P2E_S3_C1, P2L_S4_C2, P2L_S3_C3
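The cross-set matching used in case_study_3 pairs every sequence of Set 1 with every sequence of Set 2. A short sketch using the G1 sequence lists above:

```python
from itertools import product

# G1 sequence lists for case_study_3.
set1 = ["P1E_S1_C1", "P1E_S2_C2", "P1L_S1_C1", "P1L_S2_C2"]
set2 = ["P2E_S2_C2", "P2E_S1_C3", "P2L_S2_C2", "P2L_S1_C1"]

# Every sequence in Set 1 is matched against every sequence in Set 2.
pairs = list(product(set1, set2))
print(len(pairs))  # 16 sequence pairings
```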
We also encourage experiments to be conducted under
two evaluation conditions:
- single camera (SC)
  - using faces from the camera with the most frontal
    view (listed in the tables above)
- multiple cameras (MC)
  - using faces from all three cameras. For example,
    images for portal 1E and sequence 1 are taken
    from P1E_S1_C1, P1E_S1_C2 and P1E_S1_C3.
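Expanding a portal/sequence name into its three camera views for the MC condition is a one-liner (the `mc_sequences` helper name is our own):

```python
def mc_sequences(base):
    """Return the three camera variants of a base name such as 'P1E_S1'."""
    return [f"{base}_C{cam}" for cam in (1, 2, 3)]

print(mc_sequences("P1E_S1"))  # ['P1E_S1_C1', 'P1E_S1_C2', 'P1E_S1_C3']
```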
Licence
This dataset ('Licensed Material') is made available
to the scientific community for non-commercial
research purposes such as academic research, teaching,
scientific publications or personal experimentation.
Permission is granted by National ICT Australia
Limited (NICTA) to you (the 'Licensee') to use, copy
and distribute the Licensed Material in accordance
with the following terms and conditions:
- Licensee must include a reference to NICTA and
the following publication in any published work that
makes use of the Licensed Material:
@inproceedings{wong_cvprw_2011,
  author    = {Yongkang Wong and Shaokang Chen and Sandra Mau and Conrad Sanderson and Brian C. Lovell},
  title     = {Patch-based Probabilistic Image Quality Assessment for Face Selection and Improved Video-based Face Recognition},
  booktitle = {IEEE Biometrics Workshop, Computer Vision and Pattern Recognition (CVPR) Workshops},
  year      = {2011},
  pages     = {81--88},
  month     = {June},
  publisher = {IEEE}
}
- If Licensee alters the content of the Licensed
Material or creates any derivative work, Licensee
must include in the altered Licensed Material or
derivative work prominent notices to ensure that any
recipients know that they are not receiving the
original Licensed Material.
- Licensee may not use or distribute the Licensed
Material or any derivative work for commercial
purposes including but not limited to, licensing or
selling the Licensed Material or using the Licensed
Material for commercial gain.
- The Licensed Material is provided 'AS IS',
without any express or implied warranties. NICTA
does not accept any responsibility for errors or
omissions in the Licensed Material.
- This original license notice must be retained in
all copies or derivatives of the Licensed Material.
- All rights not expressly granted to the Licensee
are reserved by NICTA.
Download
Notes
- The ChokePoint dataset takes up about 12 GB.
  Each tar.xz file is on average around 200 MB.
- A brief description of the data can be found in the README.
- The cropped face images were extracted using the
  manually labelled eye locations. Each face image is
  96×96 pixels.
- Please download only one file at a time, so that the
  server is not overloaded.
- Microsoft Windows users can extract the *.tar.xz
  files with 7-Zip.
Original files:
- groundtruth.tar.xz
- P1E_S1.tar.xz
- P1E_S2.tar.xz
- P1E_S3.tar.xz
- P1E_S4.tar.xz
- P1L_S1.tar.xz
- P1L_S2.tar.xz
- P1L_S3.tar.xz
- P1L_S4.tar.xz
- P2E_S1.tar.xz
- P2E_S2.tar.xz
- P2E_S3.tar.xz
- P2E_S4.tar.xz
- P2E_S5.tar.xz
- P2L_S1.tar.xz
- P2L_S2.tar.xz
- P2L_S3.tar.xz
- P2L_S4.tar.xz
- P2L_S5.tar.xz
Cropped face images:
- P1E.tar.xz
- P1L.tar.xz
- P2E.tar.xz
- P2L.tar.xz
Contacts
If you have any questions regarding the dataset,
please contact:
yongkang döt wong ät ieee döt orgxyz
Related Datasets
- VB100 Bird Dataset - surveillance videos of 100 bird
  species, for experiments in fine-grained
  classification
- VidTIMIT Dataset - audio-visual recordings of people
  reciting phonetically balanced sentences
Acknowledgement
The ChokePoint dataset is sponsored by NICTA. NICTA is
funded by the Australian Government as represented by
the Department of Broadband, Communications and the
Digital Economy, as well as the Australian Research
Council through the ICT Centre of Excellence program.