
High-Efficiency Convolutional Ternary Neural Networks with Custom Adder Trees and Weight Compression

Author(s): A. Prost-Boucle, A. Bourge, F. Pétrot

Journal: ACM Transactions on Reconfigurable Technology and Systems (TRETS)

Volume: 11

Issue: 3

Pages: Article No. 15

DOI: 10.1145/3270764

Although inference with artificial neural networks (ANNs) was until recently considered essentially compute-bound, the emergence of deep neural networks, coupled with the evolution of integration technology, has turned inference into a memory-bound problem. Given this observation, many recent works have focused on minimizing memory accesses, either by enforcing and exploiting sparsity on the weights or by representing activations and weights with only a few bits, so as to enable ANN inference on embedded devices. In this work, we detail an architecture dedicated to inference using ternary {−1, 0, 1} weights and activations. This architecture is configurable at design time to provide a range of throughput vs. power trade-offs to choose from. It is also generic in the sense that it uses information drawn from the target technology (memory geometries and costs, number of available cuts, etc.) to best adapt to the FPGA resources. This allows us to achieve up to 5.2k frames per second per Watt for classification on a VC709 board while using approximately half of the FPGA's resources.
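To illustrate why ternary weights are attractive for hardware, the sketch below shows a ternary dot product in plain Python: with weights restricted to {−1, 0, 1}, every multiplication degenerates into an addition, a subtraction, or a skip, which is what makes multiplier-free adder-tree datapaths (as in the paper) possible. This is only an illustrative model of the arithmetic, not the authors' FPGA implementation; the function name and signature are hypothetical.

```python
def ternary_dot(activations, weights):
    """Dot product with ternary weights in {-1, 0, 1}.

    Hypothetical software model of the arithmetic: no multiplier is
    needed, since each weight selects add, subtract, or skip.
    """
    acc = 0
    for a, w in zip(activations, weights):
        if w == 1:
            acc += a          # weight +1: accumulate the activation
        elif w == -1:
            acc -= a          # weight -1: subtract the activation
        # weight 0: contributes nothing (exploitable sparsity)
    return acc

# Example: activations [3, -2, 5, 1] with weights [1, -1, 0, 1]
# computes 3 + 2 + 0 + 1 = 6 without a single multiplication.
print(ternary_dot([3, -2, 5, 1], [1, -1, 0, 1]))
```

In hardware, the zero-weight terms can be dropped entirely and the remaining add/subtract operations arranged into a compact adder tree, which is the design-time specialization the architecture exploits.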