Published 1996 | Version v1
Publication

Mixing floating- and fixed-point formats for neural network learning on neuroprocessors

Description

We examine the efficient implementation of back-propagation (BP)-type algorithms on T0, a vector processor with a fixed-point engine designed for neural network simulation. Using Matrix Back Propagation (MBP) we achieve asymptotically optimal performance on T0 (about 0.8 GOPS) for both the forward and backward phases, which is not possible with the standard on-line BP algorithm. We use a mixture of fixed- and floating-point operations in order to guarantee both high efficiency and fast convergence. Although the most expensive computations are implemented in fixed-point, we achieve a rate of convergence comparable to that of the floating-point version. The time taken for conversion between fixed- and floating-point formats is also shown to be reasonably low.
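The sketch below illustrates the general idea described in the abstract: the expensive matrix products of each MBP phase run in 16-bit fixed-point, while the weight updates (and the master copy of the weights) stay in floating-point, with explicit conversions between the two formats. It is a minimal illustration only; the abstract does not give the paper's actual kernels, scaling factors, or T0 intrinsics, so the Q8.8 format and all names (`fixed_matmul`, `update_weights`, `FRAC_BITS`) are assumptions made for this example.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 8                      /* assumed Q8.8 fixed-point format */
#define TO_FIXED(x)  ((int16_t)((x) * (1 << FRAC_BITS)))
#define TO_FLOAT(x)  ((float)(x) / (1 << FRAC_BITS))

/* Fixed-point matrix product C = A * B (the costly part of each MBP phase).
 * Accumulation is done in 32 bits to avoid overflow, then rescaled. */
static void fixed_matmul(const int16_t *a, const int16_t *b, int16_t *c,
                         int m, int k, int n)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++) {
            int32_t acc = 0;
            for (int p = 0; p < k; p++)
                acc += (int32_t)a[i * k + p] * b[p * n + j];
            c[i * n + j] = (int16_t)(acc >> FRAC_BITS);
        }
}

/* Floating-point weight update: cheap relative to the matrix products,
 * so keeping it in float helps preserve convergence at little extra cost. */
static void update_weights(float *w, const int16_t *grad_fx, int len,
                           float learning_rate)
{
    for (int i = 0; i < len; i++)
        w[i] -= learning_rate * TO_FLOAT(grad_fx[i]);
}

int main(void)
{
    /* Toy 2x2 example: one fixed-point product plus one float update. */
    float w[4] = {0.5f, -0.25f, 0.125f, 1.0f};
    float x[4] = {1.0f, 2.0f, -1.0f, 0.5f};

    int16_t w_fx[4], x_fx[4], y_fx[4];
    for (int i = 0; i < 4; i++) {        /* float -> fixed conversion */
        w_fx[i] = TO_FIXED(w[i]);
        x_fx[i] = TO_FIXED(x[i]);
    }

    fixed_matmul(x_fx, w_fx, y_fx, 2, 2, 2);   /* fixed-point forward phase */

    /* Reuse the forward output as a stand-in gradient just to exercise the
     * floating-point update path. */
    update_weights(w, y_fx, 4, 0.01f);

    for (int i = 0; i < 4; i++)
        printf("w[%d] = %f\n", i, w[i]);
    return 0;
}
```

The split shown here reflects the trade-off the abstract describes: the matrix products dominate the operation count, so running them in fixed-point captures most of the available throughput, while the comparatively cheap conversions and updates remain in floating-point to keep convergence close to the all-float baseline.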

Additional details

Created: October 11, 2023
Modified: November 28, 2023