In this paper we show how to map an LSSVM onto digital hardware. In particular, we provide a theoretical analysis of quantization effects, due to finite register lengths, that leads to some useful bounds for computing the number of bits necessary for a correct hardware implementation. Then, we describe a new FPGA-based architecture, the KTRON,...
2004 (v1) Publication
Uploaded on: March 31, 2023
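As a rough illustration of the quantization analysis referred to in the abstract above, the sketch below estimates a fixed-point word length from the dynamic range of the stored coefficients and a target precision; the rule of thumb shown here and the synthetic coefficient vector are assumptions for illustration, not the bounds derived in the paper.

    import numpy as np

    def required_bits(values, eps):
        # Illustrative word-length estimate for a signed fixed-point format:
        # enough integer bits to cover the dynamic range of `values`, plus
        # enough fractional bits so the rounding error stays below eps / 2.
        int_bits = int(np.ceil(np.log2(np.max(np.abs(values)) + 1))) + 1  # +1 sign bit
        frac_bits = int(np.ceil(np.log2(1.0 / eps)))
        return int_bits, frac_bits, int_bits + frac_bits

    def quantize(values, frac_bits):
        # Round to the nearest multiple of 2**-frac_bits (fixed-point rounding).
        scale = 2.0 ** frac_bits
        return np.round(values * scale) / scale

    rng = np.random.default_rng(0)
    alphas = rng.normal(scale=3.0, size=100)   # stand-in for trained LSSVM coefficients
    i_bits, f_bits, total = required_bits(alphas, eps=1e-3)
    error = np.max(np.abs(alphas - quantize(alphas, f_bits)))
    print(f"{i_bits} integer + {f_bits} fractional bits = {total}-bit words, max rounding error {error:.2e}")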
2003 (v1) Publication
In the last few years, several kinds of recurrent neural networks (RNNs) have been proposed for solving linear and nonlinear optimization problems. In this paper, we provide a survey of RNNs that can be used to solve both the constrained quadratic optimization problem related to support vector machine (SVM) learning and the SVM model selection...
Uploaded on: March 25, 2023
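As a hedged illustration of the kind of network surveyed above (not a specific model from the paper), the SVM dual can be attacked with a projected gradient flow integrated by a plain Euler step; the equality constraint on the dual variables is deliberately left out to keep the sketch short.

    import numpy as np

    def svm_dual_gradient_flow(K, y, C=1.0, dt=1e-3, steps=20000):
        # Maximise sum(a) - 0.5 * a^T Q a with Q = (y y^T) * K, subject to
        # 0 <= a_i <= C, by following the gradient and clipping back onto the box.
        Q = np.outer(y, y) * K
        a = np.zeros(len(y))
        for _ in range(steps):
            grad = 1.0 - Q @ a                  # ascent direction of the dual objective
            a = np.clip(a + dt * grad, 0.0, C)  # projection onto the box constraints
        return a

    # Two-point toy problem with a linear kernel.
    X = np.array([[0.0], [2.0]])
    y = np.array([-1.0, 1.0])
    print("alpha =", svm_dual_gradient_flow(X @ X.T, y))  # roughly [1.0, 0.25]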
2001 (v1) Publication
Support Vector Machines are gaining more and more acceptance thanks to their success in many real-world problems. We propose in this work a solution for implementing SVMs in hardware. The main idea is to use a recurrent network for SVM learning that guarantees global convergence to the optimal solution without the use of penalty terms....
Uploaded on: March 27, 2023
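The point about avoiding penalty terms can be made concrete by contrasting a penalty-based dynamics with a hard projection (clipping) onto the constraint box; the network proposed in the paper is not reproduced here, and the toy problem and constants below are arbitrary.

    import numpy as np

    def run(step, Q, C=0.2, dt=1e-3, steps=20000):
        a = np.zeros(Q.shape[0])
        for _ in range(steps):
            a = step(a, Q, C, dt)
        return a

    def step_penalty(a, Q, C, dt, mu=50.0):
        # A penalty term only pushes a back toward [0, C]; it never enforces it exactly.
        grad = 1.0 - Q @ a - mu * (np.maximum(a - C, 0.0) + np.minimum(a, 0.0))
        return a + dt * grad

    def step_projected(a, Q, C, dt):
        # A hard projection keeps the state feasible at every single step.
        return np.clip(a + dt * (1.0 - Q @ a), 0.0, C)

    y = np.array([-1.0, 1.0])
    K = np.array([[0.0, 0.0], [0.0, 4.0]])        # linear kernel of the points 0 and 2
    Q = np.outer(y, y) * K

    print("penalty   :", run(step_penalty, Q))    # overshoots the bound C = 0.2
    print("projection:", run(step_projected, Q))  # stays exactly inside [0, C]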
2003 (v1) Publication
No description
Uploaded on: March 25, 2023
2000 (v1) Publication
The well-known bounds on the generalization ability of learning machines, based on the Vapnik–Chervonenkis (VC) dimension, are very loose when applied to Support Vector Machines (SVMs). In this work we evaluate the validity of the assumption that these bounds are, nevertheless, good indicators of the generalization ability of SVMs. We show that this...
Uploaded on: March 27, 2023
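For context, one common form of the VC confidence term (the pattern-recognition bound usually attributed to Vapnik) is reproduced below to show how quickly it becomes uninformative; whether this is exactly the bound evaluated in the paper is an assumption on my part.

    import numpy as np

    def vc_confidence(h, l, eta=0.05):
        # With probability at least 1 - eta,
        #   R <= R_emp + sqrt( (h * (ln(2l/h) + 1) - ln(eta/4)) / l )
        # where h is the VC dimension and l the number of training samples.
        return np.sqrt((h * (np.log(2 * l / h) + 1) - np.log(eta / 4)) / l)

    for h in (50, 200, 500):
        print(f"h = {h:3d}, l = 1000: confidence term = {vc_confidence(h, 1000):.2f}")
    # Values close to (or above) 1 make the bound vacuous for a 0/1 loss, which is
    # the sense in which such bounds are considered loose for SVMs.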
1999 (v1) Publication
In this paper we review some basic concepts of the theory of Support Vector Machines and derive some results of practical interest on their generalization ability. We compare the effectiveness and efficiency in solving some well-known pattern recognition problems through the use of different kernel functions.
Uploaded on: April 14, 2023
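The kind of kernel comparison described above can be reproduced in spirit with off-the-shelf tools; the snippet below assumes scikit-learn is available and uses a synthetic dataset rather than the benchmarks from the paper.

    from sklearn.datasets import make_moons
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic two-class problem standing in for the benchmark data.
    X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

    for kernel in ("linear", "poly", "rbf"):
        clf = SVC(kernel=kernel, C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
        print(f"{kernel:6s}: {scores.mean():.3f} +/- {scores.std():.3f}")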
2000 (v1) Publication
We model here a distributed implementation of cross-stopping, a combination of cross-validation and early-stopping techniques, for the selection of the optimal architecture of feed-forward networks. Due to the very large computational demand of the method, we use the RAIN system (Redundant Array of Inexpensive workstations for Neurocomputing)...
Uploaded on: April 14, 2023
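A sequential stand-in for cross-stopping is sketched below: each candidate architecture is scored by k-fold cross-validation with early stopping on a held-out fraction. The scikit-learn estimator and the synthetic data are assumptions of the sketch, and n_jobs=-1 only spreads the folds over local cores where the paper distributes the work over the RAIN cluster.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=20, random_state=0)

    # Candidate architectures: hidden units of a one-hidden-layer feed-forward network.
    for hidden in (2, 8, 32, 128):
        net = MLPClassifier(hidden_layer_sizes=(hidden,),
                            early_stopping=True,        # stop on a held-out validation split
                            validation_fraction=0.2,
                            max_iter=500,
                            random_state=0)
        scores = cross_val_score(net, X, y, cv=5, n_jobs=-1)   # folds evaluated in parallel
        print(f"{hidden:3d} hidden units: CV accuracy {scores.mean():.3f}")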
1999 (v1) Publication
We propose here a VLSI-friendly algorithm for the implementation of the learning phase of Support Vector Machines (SVMs). Unlike previous methods, which rely on sophisticated constrained nonlinear programming algorithms, our approach finds a simple updating rule that can be easily implemented in digital VLSI.
Uploaded on: April 14, 2023
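A rule in the spirit described above, reminiscent of the kernel-adatron update and not necessarily the one proposed in the paper, needs only a multiply-accumulate over a kernel row, one multiplication by a constant rate, and a clamp per coefficient, which is what makes this family of updates attractive for digital logic.

    import numpy as np

    def simple_svm_update(K, y, C=1.0, eta=0.05, epochs=200):
        # Coefficient-wise rule: a_i <- clamp(a_i + eta * (1 - y_i * f(x_i)), 0, C),
        # with f(x_i) = sum_j a_j y_j K_ij; only MACs, one scaling and a clamp.
        a = np.zeros(len(y))
        for _ in range(epochs):
            for i in range(len(y)):
                f_i = np.dot(a * y, K[i])      # multiply-accumulate over one kernel row
                a[i] = min(max(a[i] + eta * (1.0 - y[i] * f_i), 0.0), C)
        return a

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    K = X @ X.T                                # linear kernel
    a = simple_svm_update(K, y)
    print("training accuracy:", float(np.mean(np.sign(K @ (a * y)) == y)))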
2000 (v1) Publication
In this paper we propose some very simple algorithms and architectures for a digital VLSI implementation of Support Vector Machines. We discuss the main aspects concerning the realization of the learning phase of SVMs, with special attention to the effects of fixed-point math for computing and storing the parameters of the network. Some...
Uploaded on: December 5, 2022
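To make the fixed-point discussion concrete, the sketch below stores the trained coefficients and kernel evaluations with a limited number of fractional bits and counts how many decisions flip relative to the floating-point machine; scikit-learn, the RBF kernel with a hand-picked gamma and the synthetic data are assumptions, not details from the paper.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    def rbf(A, B, gamma):
        return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

    def fx(values, frac_bits):
        # Simulate fixed-point storage: round to the nearest multiple of 2**-frac_bits.
        scale = 2.0 ** frac_bits
        return np.round(values * scale) / scale

    gamma = 0.5
    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X[:300], y[:300])
    Xt = X[300:]

    coef, sv, b = clf.dual_coef_[0], clf.support_vectors_, clf.intercept_[0]
    ref = np.sign(rbf(Xt, sv, gamma) @ coef + b)   # floating-point decisions

    for bits in (4, 6, 8, 12):
        dec = np.sign(fx(rbf(Xt, sv, gamma), bits) @ fx(coef, bits) + fx(b, bits))
        print(f"{bits:2d} fractional bits: {int(np.sum(dec != ref))} flipped decisions out of {len(Xt)}")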
2000 (v1) Publication
We propose here a fast way to perform the gradient computation in Support Vector Machine (SVM) learning when samples are positioned on an m-dimensional grid. Our method takes advantage of the particular structure of the constrained quadratic programming problem arising in this case. We show how such structure is connected to the properties of...
Uploaded on: March 27, 2023
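One way grid structure can be exploited, shown here only as a guess at the flavour of the method (1-D grid, stationary kernel; the paper's construction may differ), is that the kernel matrix becomes a Toeplitz operator, so the product K @ alpha that dominates the gradient computation reduces to a convolution instead of a dense matrix-vector multiply.

    import numpy as np

    n, h, gamma = 512, 0.1, 2.0
    x = h * np.arange(n)                       # samples on a uniform 1-D grid
    alpha = np.random.default_rng(0).random(n)

    # Dense route: build the full n x n RBF kernel matrix and multiply.
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    dense = K @ alpha

    # Grid route: K[i, j] depends only on i - j, so K @ alpha is a convolution of
    # alpha with the kernel profile evaluated at the lags -(n-1) ... (n-1).
    profile = np.exp(-gamma * (h * np.arange(-(n - 1), n)) ** 2)
    fast = np.convolve(alpha, profile)[n - 1:2 * n - 1]   # an FFT convolution scales better

    print("max difference:", float(np.max(np.abs(dense - fast))))
    # The dual gradient 1 - y * (K @ (alpha * y)) can then reuse the same trick.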
2002 (v1) Publication
We apply an automatic tuning method for the hyperparameters of an SVM classifier. The data used to train and test the algorithm come from industrial measurements made on the products after assembly. Various training sessions have been carried out in different learning environments, and the results have been validated through a bootstrap...
Uploaded on: April 14, 2023
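The industrial measurement data are not available, so the sketch below substitutes a synthetic dataset and off-the-shelf scikit-learn pieces: a grid search over the SVM hyperparameters followed by a bootstrap estimate of the selected model's accuracy, mirroring the validation scheme described above only in outline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for the industrial measurement data used in the paper.
    X, y = make_classification(n_samples=800, n_features=15, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Automatic hyperparameter tuning: exhaustive grid search with 5-fold cross-validation.
    grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X_tr, y_tr)
    print("selected hyperparameters:", search.best_params_)

    # Bootstrap validation of the selected model on the held-out set.
    rng = np.random.default_rng(0)
    scores = [search.best_estimator_.score(X_te[idx], y_te[idx])
              for idx in (rng.integers(0, len(X_te), len(X_te)) for _ in range(200))]
    print(f"bootstrap accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")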