********************************************************************************
Surrey Mobile: Dataset Description
********************************************************************************

The dataset consists of 8658 frames extracted from video sequences for the
evaluation of image search in a mobile scenario. The sequences show a table
containing several objects, including postcards, small boxes, papers, flowers,
glasses, bottles, business cards and magazines. Three different devices have
been used: GoPro, Nexus 7 and iPhone. Videos have been recorded at different
times of day, with natural and artificial light generating different shadow
shapes and reflections. Videos are also subject to blurring and different zoom
levels on the target object. The objects have been set up in order to create
occlusions or transparency effects. Moreover, several features on different
queries could match the same features on a given database image, producing
false positive matches. For instance, letters of the alphabet are likely to
produce corresponding patches with a perfect match because of their geometric
characteristics, even if the features do not belong to matching objects.

The dataset can be downloaded from this page, see details below. The material
provided includes:
- the images, divided into 3 folders depending on the device used,
- the ground truth to check whether an object is actually in the frame (gt.txt),
- image IDs (indexFiles_pgm),
- an example score file (score_example),
- an example matching file (match_example),
- this readme file.

********************************************************************************
Evaluation Protocol
********************************************************************************

In order to assess the presence of a query object in a frame, a score can be
computed for each frame and each object by means of a custom matching
algorithm. By thresholding the score values, a binary map can be built and
compared to the ground truth. The result is the computation of the true
positive rate (TPR) and false positive rate (FPR) metrics.

Here is an example run of the evaluation executable:

    ./eval score_example match_example

The file score_example shows the expected syntax for the score file. The user
provides a score file with the following syntax:

    [Frame ID 0] [score Query 0] [score Query 1] [score Query 2] ... [score Query 5]
    [Frame ID 1] [score Query 0] [score Query 1] [score Query 2] ... [score Query 5]
    ...

Frame IDs MUST BE ascending integers between 0 and 8657, while score values
MUST BE double precision. Frame IDs and the corresponding images are stored in
indexFiles_pgm.

The file match_example contains a list of keypoints from two frames which are
supposed to match according to the given matching algorithm:

    [Query ID a] [Frame ID b] [Query keypoint ID x] [Frame keypoint ID y]

Keypoint IDs are at the same locations as in the patch dataset (.kpts
extension). For each file the starting ID is always zero.

The evaluation protocol uses the information stored in gt.txt to compute ROC
curves. The output files are called roc_q0, roc_q1, roc_q2, roc_q3, roc_q4 and
roc_q5 and have the following format:

    [Score threshold] [FPR] [TPR]
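
********************************************************************************
Appendix: Example Snippets
********************************************************************************

The Python snippets below are informal sketches, not part of the evaluation
toolkit. They assume whitespace-separated fields, as in score_example and
match_example; any name not listed above (e.g. compute_score, my_scores.txt)
is a placeholder.

Writing a score file in the required syntax (a minimal sketch; compute_score
is a hypothetical stand-in for the user's own matching algorithm):

    # Sketch: write one line per frame, each with 6 double-precision scores.
    NUM_FRAMES = 8658
    NUM_QUERIES = 6

    def compute_score(frame_id, query_id):
        # Placeholder for the user's matcher: score of query_id in frame_id.
        return 0.0

    with open("my_scores.txt", "w") as f:
        for frame_id in range(NUM_FRAMES):  # frame IDs ascending from 0 to 8657
            scores = [compute_score(frame_id, q) for q in range(NUM_QUERIES)]
            f.write("%d %s\n" % (frame_id, " ".join("%.6f" % s for s in scores)))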
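
Parsing the matching file (a sketch assuming one 4-field line per match, in
the order shown above):

    # Sketch: load match_example into (query ID, frame ID,
    # query keypoint ID, frame keypoint ID) tuples.
    def load_matches(path):
        matches = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 4:
                    matches.append(tuple(int(p) for p in parts))
        return matches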
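
Computing TPR and FPR at a single threshold, as in the thresholding step
described above (a sketch; the layout of gt.txt is not specified here, so the
snippet simply takes per-frame 0/1 labels for one query, however obtained):

    # Sketch: threshold scores into a binary map and compare it to the
    # ground-truth labels of one query over all frames.
    def tpr_fpr(scores, labels, threshold):
        tp = fp = fn = tn = 0
        for score, gt in zip(scores, labels):
            detected = score >= threshold  # binary decision for this frame
            if detected and gt:
                tp += 1
            elif detected:
                fp += 1
            elif gt:
                fn += 1
            else:
                tn += 1
        tpr = tp / float(tp + fn) if tp + fn else 0.0  # true positive rate
        fpr = fp / float(fp + tn) if fp + tn else 0.0  # false positive rate
        return tpr, fpr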
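
Reading the ROC output files (a sketch assuming the three columns shown above;
the per-query summary printed here is just one way to inspect a curve):

    # Sketch: load a roc_q* file into (threshold, FPR, TPR) tuples and report
    # the TPR achieved at the lowest FPR on each curve.
    def load_roc(path):
        points = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3:
                    points.append(tuple(float(p) for p in parts))
        return points

    for q in range(6):
        thr, fpr, tpr = min(load_roc("roc_q%d" % q), key=lambda p: (p[1], -p[2]))
        print("query %d: TPR %.3f at FPR %.3f (threshold %.3f)" % (q, tpr, fpr, thr))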