Published 2019 | Version v1
Publication
Tunable Floating-Point for Artificial Neural Networks
Description
Approximate computing has emerged as a promising approach to the energy-efficient design of digital systems in domains such as digital signal processing, robotics, and machine learning. Numerous studies report that employing different data formats in Deep Neural Networks (DNNs), the dominant machine learning approach, can yield substantial improvements in power efficiency while preserving acceptable result quality. In this work, the application of Tunable Floating-Point (TFP) precision to DNNs is presented. In TFP, different precisions can be assigned to different operations by selecting the number of bits used for the significand and the exponent of the floating-point representation. The flexibility to tune the precision of individual layers of the neural network may result in more power-efficient computation. © 2018 IEEE.
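To make the idea of a tunable significand/exponent split concrete, the following is a minimal Python sketch of rounding a value to a reduced floating-point format with a chosen number of exponent and significand bits. The function name `tfp_round`, the IEEE-754-style bias, and the choices of saturating on overflow and flushing underflows to zero are illustrative assumptions, not the implementation described in the paper.

```python
import math


def tfp_round(x: float, exp_bits: int, sig_bits: int) -> float:
    """Round x to a format with `exp_bits` exponent bits and `sig_bits`
    significand (fraction) bits, using an IEEE-754-style exponent bias.
    Simplification: no subnormals; underflow flushes to zero."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x

    bias = (1 << (exp_bits - 1)) - 1
    e_min, e_max = 1 - bias, bias            # normal exponent range

    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))                # |x| = m * 2**e, m in [0.5, 1)
    m, e = m * 2.0, e - 1                    # shift mantissa into [1, 2)

    # Round the significand to sig_bits fractional bits.
    scale = 1 << sig_bits
    m = round(m * scale) / scale
    if m >= 2.0:                             # rounding carried into the exponent
        m, e = m / 2.0, e + 1

    if e > e_max:                            # overflow: saturate to infinity
        return sign * math.inf
    if e < e_min:                            # underflow: flush to zero
        return 0.0
    return sign * math.ldexp(m, e)


if __name__ == "__main__":
    # The same weight quantized at three (exponent, significand) settings.
    w = 0.15725
    for eb, sb in [(8, 23), (5, 10), (4, 3)]:
        print(f"exp={eb:2d} sig={sb:2d} -> {tfp_round(w, eb, sb)!r}")
```

Running the example shows how the representable value drifts from `w` as the significand shrinks, which is the kind of per-layer precision/accuracy trade-off the TFP approach is meant to expose.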
Additional details
Identifiers
- URL: http://hdl.handle.net/11567/983318
- URN: urn:oai:iris.unige.it:11567/983318
Origin repository
- UNIGE