March 2009 (v1) Conference paper
Nearest Neighbor (NN) search remains a critical tool in many challenging applications of computational geometry (e.g., surface reconstruction, clustering) and computer vision (e.g., image and information retrieval, classification, data mining). We present an effective Bregman ball tree [5] (Bb-tree) construction algorithm that...
Uploaded on: December 4, 2022
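Structures like the Bb-tree accelerate exact NN search under Bregman divergences; the brute-force baseline they are measured against can be sketched as follows (a minimal illustration assuming NumPy; `kl_divergence` and `brute_force_nn` are hypothetical names, not the paper's code):

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) between two discrete
    distributions -- a classic (asymmetric) Bregman divergence."""
    return float(np.sum(p * np.log(p / q)))

def brute_force_nn(query, dataset, divergence=kl_divergence):
    """Exact NN by scanning every point: the O(n) baseline that
    Bb-tree-style structures aim to beat."""
    return min(range(len(dataset)),
               key=lambda i: divergence(query, dataset[i]))

# Toy usage: three normalized histograms.
data = [np.array([0.2, 0.3, 0.5]),
        np.array([0.6, 0.3, 0.1]),
        np.array([0.1, 0.1, 0.8])]
q = np.array([0.15, 0.15, 0.7])
print(brute_force_nn(q, data))  # → 2 (the closest histogram in KL)
```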
June 28, 2009 (v1) Conference paper
Nearest Neighbor (NN) retrieval is a crucial tool in many computer vision tasks. Since naive brute-force search is too time-consuming for most applications, several tailored data structures have been proposed to improve the efficiency of NN search. Among these, the vantage point tree (vp-tree) was introduced for information retrieval in metric...
Uploaded on: December 4, 2022
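A vp-tree recursively picks a vantage point, splits the remaining points by their median distance to it, and prunes subtrees at query time via the triangle inequality. A minimal sketch of the idea (here with the Euclidean metric; class and function names are illustrative, not the paper's implementation):

```python
from math import dist  # Euclidean here; vp-trees work for any metric

class VPNode:
    def __init__(self, vantage, radius, inside, outside):
        self.vantage, self.radius = vantage, radius
        self.inside, self.outside = inside, outside

def build_vp_tree(points):
    """Pick a vantage point, split the rest at the median distance."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return VPNode(vp, 0.0, None, None)
    dists = sorted(dist(vp, p) for p in rest)
    radius = dists[len(dists) // 2]
    inside = [p for p in rest if dist(vp, p) <= radius]
    outside = [p for p in rest if dist(vp, p) > radius]
    return VPNode(vp, radius, build_vp_tree(inside), build_vp_tree(outside))

def nn_search(node, query, best=None):
    """Branch-and-bound NN search: skip a subtree when the triangle
    inequality proves it cannot hold a closer point."""
    if node is None:
        return best
    d = dist(query, node.vantage)
    if best is None or d < best[0]:
        best = (d, node.vantage)
    # Visit the more promising side first; prune the far side if possible.
    near, far = ((node.inside, node.outside) if d <= node.radius
                 else (node.outside, node.inside))
    best = nn_search(near, query, best)
    if abs(d - node.radius) < best[0]:  # far side may still hold the NN
        best = nn_search(far, query, best)
    return best

pts = [(0, 0), (1, 1), (5, 5), (2, 3)]
tree = build_vp_tree(pts)
print(nn_search(tree, (4.6, 4.9)))  # nearest point is (5, 5)
```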
2010 (v1) Conference paper
The k-nearest neighbors (k-NN) classification rule is still an essential tool for computer vision applications such as scene recognition. However, k-NN suffers from some major drawbacks, which mainly reside in the uniform voting among the nearest prototypes in the feature space. In this paper, we propose a new method that is able to learn...
Uploaded on: December 3, 2022
April 12, 2010 (v1) Conference paper
Object classification is a challenging task in computer vision. Many approaches have been proposed to extract meaningful descriptors from images and classify them in a supervised learning framework. In this paper, we revisit the classic k-nearest neighbors (k-NN) classification rule, which has been shown to be very effective when dealing with...
Uploaded on: December 3, 2022
March 2012 (v1) Journal article
Voting rules relying on k-nearest neighbors (k-NN) are an effective tool in countless machine learning techniques. Thanks to its simplicity, k-NN classification is very attractive to practitioners, as it achieves very good performance in several practical applications. However, it suffers from various drawbacks, such as sensitivity to "noisy"...
Uploaded on: December 3, 2022
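The uniform voting rule these papers revisit can be stated in a few lines (an illustrative sketch, not the proposed learning method; the names are hypothetical). Note how a single mislabeled near prototype can flip the vote, which is exactly the "noisy"-prototype sensitivity the abstract mentions:

```python
from collections import Counter
from math import dist

def knn_classify(query, examples, k=3):
    """Classic k-NN rule with uniform voting: each of the k nearest
    prototypes casts one equal vote for its class label."""
    neighbors = sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training set: (feature vector, class label) pairs.
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.2), "b"), ((1.1, 0.8), "b")]
print(knn_classify((0.1, 0.1), train))  # → a (two of the 3-NN vote "a")
```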
July 4, 2012 (v1) Journal article
The k-nearest neighbors (k-NN) classification rule has proven extremely successful in countless computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good properties, the classic k-NN rule suffers from high variance when...
Uploaded on: December 3, 2022
September 24, 2012 (v1) Conference paper
It is an admitted fact that mainstream boosting algorithms such as AdaBoost do not perform well at estimating class-conditional probabilities. In this paper, we analyze, in the light of this problem, a recent algorithm, unn, which leverages nearest neighbors while minimizing a convex loss. Our contribution is threefold. First, we show that there...
Uploaded on: December 4, 2022