Try your own

It is possible to test your own detector proposal on the benchmark datasets hosted on this website and compare its performance with the provided results. To do so, follow the instructions below.

1- Download the evaluator

  • Windows installer (unzip and install) [LINK]
  • Linux tar file (just unzip) [LINK]
2- Download the dataset
Download the dataset(s) of interest from the dataset section and unzip them into the “data” subfolder of the evaluator.


3- Prepare the input files for the evaluator

For each cloud (both scenes and models) in the dataset(s) you want to test, you need to prepare one file containing the keypoints extracted by your algorithm. The filename format is as follows:

YourDetectorName_cloudFileName.feat

where

  • YourDetectorName: a unique ID associated with your detector
  • cloudFileName: the filename of the cloud currently being processed (without file extension)

For example, if your detector ID is “SuperExtractor” and you are testing on file “Scene0View0_0.1.ply” (a scene from the “Random Views” dataset), then the final filename will be: SuperExtractor_Scene0View0_0.1.feat

NOTE: if you are testing a “fixed-scale” detector and want to run the evaluation over a range of radius values, so as to obtain results plotted like those in the Results section, you must provide filenames that discriminate between the different radius values. To do this, modify the filename as follows:
YourDetectorName_r%N%_cloudFileName.feat
where N is the value of the radius in units of mesh resolution.


Each file must contain the list of extracted keypoints in the following format:

N
x1 y1 z1 scale1 index1 score1
x2 y2 z2 scale2 index2 score2
x3 y3 z3 scale3 index3 score3

…. 

where

  • N is the number of extracted keypoints
  • x1, y1, z1 are the 3 coordinates (floating point) of the first keypoint extracted
  • scale1 is the metric characteristic scale (floating point) of the first keypoint extracted
  • index1 is the point cloud index (integer) of the first keypoint extracted
  • score1 is the saliency value (floating point) of the first keypoint extracted
  • and so on for all remaining keypoints.

For proposals that do not compute a characteristic scale (i.e. “fixed-scale” detectors), the scale value must still be present but will not be considered.

OPTIONAL: the “features/DATASETNAME” subfolder is the default folder from which the evaluator reads its input files; if you save your keypoint files there (as in the sketch below), fewer command-line options need to be set when running the evaluator.
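
For illustration, here is a minimal Python sketch of this step: it builds the filename (including the optional radius token) and writes the keypoints in the format above. The helper name, the example keypoint values and the exact formatting of the radius token are assumptions, not part of the evaluator.

# Minimal sketch (not part of the evaluator): write one .feat file per cloud.
# The file layout (first line N, then "x y z scale index score" per keypoint)
# is the one required by the evaluator; everything else here is illustrative.
from pathlib import Path

def write_feat_file(out_dir, detector_name, cloud_filename, keypoints, radius_mr=None):
    """keypoints: list of (x, y, z, scale, index, score) tuples."""
    stem = Path(cloud_filename).stem                  # "Scene0View0_0.1.ply" -> "Scene0View0_0.1"
    if radius_mr is None:
        name = f"{detector_name}_{stem}.feat"         # e.g. SuperExtractor_Scene0View0_0.1.feat
    else:
        # fixed-scale detector tested over several radii (radius in units of mesh resolution);
        # the exact radius formatting is an assumption, check it against the evaluator scripts
        name = f"{detector_name}_r{radius_mr}_{stem}.feat"
    out_path = Path(out_dir) / name
    out_path.parent.mkdir(parents=True, exist_ok=True)
    lines = [str(len(keypoints))]
    for x, y, z, scale, index, score in keypoints:
        lines.append(f"{x} {y} {z} {scale} {index} {score}")
    out_path.write_text("\n".join(lines) + "\n")
    return out_path

# Example call with two made-up keypoints; "features/RandomViews" stands in for
# the "features/DATASETNAME" default folder mentioned above.
write_feat_file("features/RandomViews", "SuperExtractor", "Scene0View0_0.1.ply",
                [(0.12, -0.03, 0.45, 0.02, 1532, 0.87),
                 (0.33,  0.10, 0.41, 0.02,  784, 0.75)])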

4- Run the evaluator 

The evaluator comes with a bat/sh file for each dataset. To compute the evaluation metrics for a specific dataset, run the corresponding script with the name of your detector (as used in your input filenames) as argument. If you did not save the files in the default directory described above, modify the path after “-S” in the script so that it points to your input files.

Please note that if you are testing a “fixed-scale” detector and want the evaluation to be performed over a range of radius values, each input filename must contain the specific radius value (see point 3), and the script file must include an outer “for” loop over the radius range (see the commented lines in the bat files). Conversely, if you are only testing one specific radius value, you can leave the script as is.
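
As a hedged illustration of the single-radius case, the snippet below launches one dataset script from Python with the detector ID as its argument. The script name “evaluate_RandomViews.sh” is hypothetical; use the actual bat/sh file shipped with the evaluator.

# Hedged sketch: run the per-dataset evaluation script once, for a single radius value.
# The script name is hypothetical; the real bat/sh scripts are included with the evaluator.
import subprocess

subprocess.run(["sh", "evaluate_RandomViews.sh", "SuperExtractor"], check=True)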

5- Create the charts

The evaluator outputs csv files with numerical evaluation metrics describing the performance of your detector on the tested dataset(s). Specifically, each line of the csv corresponds to one model-scene test. If you want to plot charts like those in our publications, run the Python script createCharts.py provided in the “charts” folder (note: this requires Python 2.6 or above and Gnuplot).
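
If you want to inspect the numbers before plotting, the csv files can be read with standard tooling. A minimal sketch, assuming a hypothetical output filename (the actual file names and column layout are those produced by the evaluator):

# Minimal sketch: print the evaluator's csv output, one row per model-scene test.
# The filename below is hypothetical; point it at a csv file the evaluator actually wrote.
import csv

with open("results/SuperExtractor_RandomViews.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)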


6- Submit your results

We are preparing benchmark tables evaluating new proposals for 3D keypoint detection. If you are interested, email us your results and we will add your proposal to our website.