from http://github.com/ethz-asl/libnabo by Stéphane Magnenat (http://stephane.magnenat.net), ASL-ETHZ, Switzerland (http://www.asl.ethz.ch)
libnabo is a fast K nearest neighbour library for low-dimensional spaces. It provides a clean, legacy-free, scalar-type–agnostic API thanks to C++ templates. Its current CPU implementation is strongly inspired by ANN, but with more compact data types. On average, libnabo is 5% to 20% faster than ANN.
libnabo depends on Eigen, a modern C++ matrix and linear-algebra library. libnabo works with either version 2 or 3 of Eigen. libnabo also depends on Boost, a C++ general library.
libnabo uses CMake as its build system. The complete compilation process depends on the system you are using (Linux, Mac OS X or Windows). You will find a nice introductory tutorial in this YouTube video: http://www.youtube.com/watch?v=CLvZTyji_Uw.
If your operating system does not provide it, you must get Eigen and Boost. Eigen only needs to be downloaded and extracted.
libnabo provides the following compilation options, available through CMake:

* `SHARED_LIBS` (boolean, default: `false`): if `true`, build a shared library, otherwise build a static library

You can specify them with a command-line tool, `ccmake`, or with a graphical tool, `cmake-gui`. Please read the CMake documentation for more information.
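For instance, to build a shared library from the command line, the option can be passed directly to `cmake`; a minimal sketch, assuming an out-of-source `build` directory next to the sources:

```shell
# configure an out-of-source build producing a shared library
mkdir -p build && cd build
cmake -DSHARED_LIBS=true ..
```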
Under Unix, assuming that Eigen and Boost are installed system-wide, you can compile (with optimisation and debug information) and install libnabo in `/usr/local` with the following commands run in the top-level directory of libnabo's sources:

```shell
SRC_DIR=`pwd`
BUILD_DIR=${SRC_DIR}/build
mkdir -p ${BUILD_DIR} && cd ${BUILD_DIR}
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ${SRC_DIR}
# if Eigen or Boost are not available system-wide, run at that point:
#   cmake-gui .
# cmake-gui allows you to tell the location of Eigen or Boost
make
sudo make install
```
These lines will compile libnabo in a `build` sub-directory and therefore keep your source tree clean. Note that you could compile libnabo anywhere you have write access, such as in `/tmp/libnabo`. This out-of-source build is a nice feature of CMake. If Eigen or Boost are not installed system-wide, you might have to tell CMake where to find them (using `ccmake` or `cmake-gui`).
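When Eigen lives in a non-standard location, its path can also be given on the command line instead of interactively; a sketch, assuming the cache variable is named `EIGEN_INCLUDE_DIR` (verify the exact name with `ccmake` or `cmake-gui`) and an example extraction path:

```shell
# point CMake at a locally extracted Eigen (path is an example)
cmake -DEIGEN_INCLUDE_DIR=$HOME/eigen3 ${SRC_DIR}
```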
You can generate the documentation by typing:

```shell
make doc
```
libnabo is easy to use. For example, assuming that you are working with floats and that you have a point set `M` and a query point `q`, you can find the `K` nearest neighbours of `q` in `M`:
```cpp
// This example is in the public domain
#include "nabo/nabo.h"

int main()
{
    using namespace Nabo;
    using namespace Eigen;

    // 100 points in 3D
    MatrixXf M = MatrixXf::Random(3, 100);
    // 1 query point
    VectorXf q = VectorXf::Random(3);

    // create a kd-tree for M, note that M must stay valid during the lifetime of the kd-tree
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    // look for the 5 nearest neighbours of the single-point query
    const int K = 5;
    VectorXi indices(K);
    VectorXf dists2(K);
    nns->knn(q, indices, dists2, K);

    // cleanup kd-tree
    delete nns;
    return 0;
}
```
In this example, `M` is an Eigen (referring to the software, not to the math) matrix (column major, float) and `q` is an Eigen vector (float). The results `indices` and `dists2` are Eigen vectors of indices and squared distances referring to the columns of `M`.
Here is a slightly more complex example:
```cpp
// This example is in the public domain
#include "nabo/nabo.h"
#include <iostream>

int main()
{
    using namespace Nabo;
    using namespace Eigen;
    using namespace std;

    // 100 points in 3D
    MatrixXf M = MatrixXf::Random(3, 100);
    // 50 query points
    MatrixXf q = MatrixXf::Random(3, 50);

    // Create a kd-tree for M, note that M must stay valid during the lifetime of the kd-tree.
    NNSearchF* nns = NNSearchF::createKDTreeLinearHeap(M);

    // The outputs of the query are a matrix of indices to columns of M and
    // a matrix of squared distances corresponding to these indices.
    // These matrices must have the correct size when passed to the search function.
    MatrixXi indices;
    MatrixXf dists2;

    // Look for the 5 nearest neighbours of each query point.
    // We do not want approximations but we want to sort by the distance.
    indices.resize(5, q.cols());
    dists2.resize(5, q.cols());
    nns->knn(q, indices, dists2, 5, 0, NNSearchF::SORT_RESULTS);

    // Look for the 3 nearest neighbours of each query point, use the data themselves for the query.
    // We do not want approximations but we want to sort by the distance,
    // and we want to allow self-match.
    indices.resize(3, M.cols());
    dists2.resize(3, M.cols());
    nns->knn(M, indices, dists2, 3, 0, NNSearchF::SORT_RESULTS | NNSearchF::ALLOW_SELF_MATCH);

    // Make sure that the closest returned points correspond to the points from M.
    for (int i = 0; i < 100; ++i)
    {
        // The query is the data itself and we allow self-match.
        if (indices.coeff(0, i) != i)
            cerr << "Oops, something went wrong: " << indices.coeff(0, i) << " " << i << endl;
    }

    // Now look for the 2 nearest neighbours of each query point.
    // We allow a 10% approximation but do not want to allow self-match.
    // We do not care about sorting either.
    indices.resize(2, M.cols());
    dists2.resize(2, M.cols());
    nns->knn(M, indices, dists2, 2, 0.1, 0);
    for (int i = 0; i < 100; ++i)
    {
        // The query is the data itself but we forbid self-match.
        if (indices.coeff(0, i) == i)
            cerr << "Oops, something went wrong: " << indices.coeff(0, i) << " " << i << endl;
    }

    // Cleanup the kd-tree.
    delete nns;
    return 0;
}
```
Note that the matrix-based interface for queries is more efficient than the vector-based one, because some sanity checks can be done only once. Therefore, if you have multiple points to query, we strongly suggest passing them as a matrix instead of calling `knn()` multiple times.
The following additional construction parameters are available in KDTREE_ algorithms:

* `bucketSize` (`unsigned`): bucket size, defaults to 8

The distribution of libnabo integrates a unit test module, based on CTest. Just type:
```shell
make test
```
...in the build directory to run the tests. Their outputs are available in the `Testing` directory. These consist of validation and benchmarking tests. If ANN is detected when compiling libnabo, `make test` will also perform comparative benchmarks.
If you use libnabo in an academic context, please cite this paper, which evaluates its performance in the context of ICP:
```bibtex
@article{elsebergcomparison,
    title={Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration},
    author={Elseberg, J. and Magnenat, S. and Siegwart, R. and N{\"u}chter, A.},
    journal={Journal of Software Engineering for Robotics (JOSER)},
    pages={2--12},
    volume={3},
    number={1},
    year={2012},
    issn={2035-3928}
}
```
Please use GitHub's issue tracker to report bugs.
libnabo is released under a permissive BSD license.
libnabo differs from ANN on the following points:

* API
* limitations
* implementation
* performance