Mixing floating- and fixed-point formats for neural network learning on neuroprocessors
- Creators
- Anguita, Davide
- Gomes, B.
Description
We examine the efficient implementation of back-propagation (BP)-type algorithms on T0, a vector processor with a fixed-point engine designed for neural network simulation. Using Matrix Back Propagation (MBP) we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations to guarantee both high efficiency and fast convergence. Even though the most expensive computations are carried out in fixed point, the rate of convergence is comparable to that of the floating-point version. The cost of converting between fixed- and floating-point formats is also shown to be reasonably low.
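The abstract describes the approach only at a high level. The following is a minimal NumPy sketch of the general idea: a two-layer network trained with batch, matrix-form back-propagation in which the expensive matrix products run on fixed-point-quantized operands while the weights and their updates stay in floating point. This is not the paper's T0 implementation; the bit width, scale factor, learning rate, and network sizes are illustrative assumptions.

```python
import numpy as np

FRAC_BITS = 12                       # assumed fixed-point fractional bits
SCALE = float(1 << FRAC_BITS)

def to_fixed(x):
    """Quantize a float array to fixed-point integers (round to nearest)."""
    return np.rint(x * SCALE).astype(np.int64)

def fixed_matmul(a_fx, b_fx):
    """Integer matrix product on fixed-point operands, rescaled back to float."""
    return (a_fx @ b_fx).astype(np.float64) / (SCALE * SCALE)

def train_step(X, T, W1, W2, lr=0.05):
    """One batch step of matrix back-propagation with mixed precision."""
    # Forward phase: the expensive matrix products run in fixed point.
    H = np.tanh(fixed_matmul(to_fixed(X), to_fixed(W1)))
    Y = fixed_matmul(to_fixed(H), to_fixed(W2))

    # Backward phase: error matrices, again multiplied in fixed point.
    err = Y - T
    dY = err / len(X)
    dH = fixed_matmul(to_fixed(dY), to_fixed(W2.T)) * (1.0 - H * H)

    # Weight updates are accumulated and applied in floating point.
    W2 -= lr * fixed_matmul(to_fixed(H.T), to_fixed(dY))
    W1 -= lr * fixed_matmul(to_fixed(X.T), to_fixed(dH))
    return 0.5 * np.mean(err ** 2)

# Toy usage: fit a small random mapping for a few batch epochs.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 8))
T = rng.uniform(-1.0, 1.0, (64, 2))
W1 = rng.uniform(-0.5, 0.5, (8, 16))
W2 = rng.uniform(-0.5, 0.5, (16, 2))
for epoch in range(100):
    loss = train_step(X, T, W1, W2)
print(f"final batch MSE: {loss:.4f}")
```

In this sketch the explicit quantize/rescale steps stand in for the fixed-/floating-point conversions the abstract mentions; on a fixed-point engine such as T0 the integer products would be native, and only the conversions and weight updates would remain in floating point.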
Additional details
- URL
- http://hdl.handle.net/11567/315070
- URN
- urn:oai:iris.unige.it:11567/315070
- Origin repository
- UNIGE