2010 (v1) Conference paper
The k-nearest neighbors (k-NN) classification rule remains an essential tool for computer vision applications, such as scene recognition. However, k-NN has some major drawbacks, chief among them the uniform voting among the nearest prototypes in the feature space. In this paper, we propose a new method that is able to learn...
Uploaded on: December 3, 2022 -
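As an illustration of the uniform-voting rule this abstract starts from, here is a minimal sketch (function and variable names are hypothetical; Euclidean distance and simple majority vote are assumed, not the paper's learned method):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Classify x by uniform voting among its k nearest prototypes.

    Minimal illustrative sketch: Euclidean distances, majority vote.
    """
    # Distance from x to every training prototype.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest prototypes.
    nearest = np.argsort(dists)[:k]
    # Uniform vote: each neighbor counts equally.
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Every neighbor carries the same weight here; the drawback the abstract points at is precisely that this uniform weighting ignores how reliable each prototype is.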
April 18, 2013 (v1) Report
A standard approach to large-scale image classification combines high-dimensional features with the Stochastic Gradient Descent (SGD) algorithm to minimize the classical hinge loss in the primal space. Although the complexity of SGD is linear in the number of samples, this method suffers from slow convergence. In order...
Uploaded on: December 2, 2022 -
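The setup the abstract above describes, SGD on the regularized hinge loss in the primal, can be sketched as follows (a minimal illustration with hypothetical names and hand-picked step size, not the report's actual solver):

```python
import numpy as np

def sgd_hinge(X, y, epochs=20, lam=0.01, lr=0.1, seed=0):
    """Linear classifier trained by SGD on the primal objective
    lam/2 * ||w||^2 + mean(max(0, 1 - y_i * <w, x_i>)).

    Labels y must be in {-1, +1}. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * w.dot(X[i])
            # Subgradient: regularizer always; hinge term only if margin < 1.
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0)
            w -= lr * grad
    return w
```

Each step costs O(d) regardless of the dataset size, which is the linear-complexity property the abstract mentions; the slow convergence comes from the noisy per-sample subgradients.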
April 12, 2010 (v1) Conference paper
Object classification is a challenging task in computer vision. Many approaches have been proposed to extract meaningful descriptors from images and classify them in a supervised learning framework. In this paper, we revisit the classic k-nearest neighbors (k-NN) classification rule, which has proven very effective when dealing with...
Uploaded on: December 3, 2022 -
March 2012 (v1) Journal article
Voting rules relying on k-nearest neighbors (k-NN) are an effective tool in many machine learning techniques. Thanks to its simplicity, k-NN classification is very attractive to practitioners, as it yields very good performance in several practical applications. However, it suffers from various drawbacks, such as sensitivity to "noisy"...
Uploaded on: December 3, 2022 -
July 4, 2012 (v1) Journal article
The k-nearest neighbors (k-NN) classification rule has proven extremely successful in many computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good properties, the classic k-NN rule suffers from high variance when...
Uploaded on: December 3, 2022 -
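Several entries on this page replace uniform voting with leveraged voting, where each prototype carries a learned coefficient. A minimal binary sketch (names hypothetical; the coefficients would be learned, e.g. by a boosting procedure such as the UNN algorithm cited below, but here they are simply an input array):

```python
import numpy as np

def leveraged_knn(X_train, y_train, alpha, x, k=5):
    """Binary leveraged k-NN: predict the sign of the sum of
    alpha_j * y_j over the k nearest prototypes j (labels in {-1, +1}).

    Illustrative sketch; alpha = ones recovers plain uniform voting.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    # Weighted vote: each neighbor counts proportionally to its coefficient.
    score = np.sum(alpha[nearest] * y_train[nearest])
    return 1 if score >= 0 else -1
```

Setting alpha to all ones gives back the classic rule; unequal coefficients let reliable prototypes dominate noisy ones, which is the variance reduction these abstracts aim at.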
September 22, 2013 (v1) Conference paper
Recent works show that large-scale image classification problems rule out computationally demanding methods. On such problems, simple approaches like k-NN are affordable contenders, with room left for statistical improvements under the algorithmic constraints. A recent work showed how to leverage k-NN to yield a formal boosting...
Uploaded on: December 3, 2022 -
September 24, 2012 (v1) Conference paper
It is an admitted fact that mainstream boosting algorithms like AdaBoost perform poorly at estimating class conditional probabilities. In this paper, we analyze, in the light of this problem, a recent algorithm, UNN, which leverages nearest neighbors while minimizing a convex loss. Our contribution is threefold. First, we show that there...
Uploaded on: December 4, 2022 -
February 24, 2014 (v1) Journal article
Tailoring nearest neighbors algorithms to boosting is an important problem. Recent papers study an approach, UNN, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the UNN algorithm, at the possible expense of the algorithm's convergence and...
Uploaded on: December 2, 2022 -
October 1, 2012 (v1) Conference paper
Universal Nearest Neighbours (UNN) is a recently proposed classifier, which can also effectively estimate the posterior probability of each classification act. This algorithm, intrinsically binary, requires a decomposition method to cope with multiclass problems, reducing them to simpler binary subtasks. Then,...
Uploaded on: October 11, 2023 -
November 11, 2012 (v1) Conference paper
This paper proposes a novel automated approach for the categorization of cells in fluorescence microscopy images. Our supervised classification method aims at recognizing patterns of unlabeled cells based on an annotated dataset. First, the cell images need to be indexed by encoding them in a feature space. For this purpose, we propose tailored...
Uploaded on: December 3, 2022