Digital hardware implementation of large neural networks under resource and memory bandwidth constraints using network folding
Keywords: Digital hardware implementation, large neural networks, network folding
Abstract: The objective of this thesis is to design and implement optimized hardware architectures for large neural networks under strict hardware-resource and memory-bandwidth constraints. With the rapid growth of artificial intelligence (AI) applications, deep neural networks (DNNs) are increasingly deployed on embedded devices (IoT, mobile, FPGA, ASIC) where computational resources and energy budgets are limited.
The project specifically aims to exploit a 'network folding' technique to minimize memory footprint and bandwidth consumption while maintaining network performance. This consists of optimizing the network structure by reducing redundancy across layers and by combining, or 'folding', certain operations and layers to limit accesses to external memory, which is often the main bottleneck in embedded devices.
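As an illustration of this general idea, one well-known instance of operation folding is merging a batch-normalization layer into the weights and bias of the preceding convolution at inference time, which removes one full pass over the feature maps and the associated memory traffic. The sketch below shows this in Python with NumPy; it is only an illustrative example of the principle, not the specific folding technique developed in this thesis.

```python
import numpy as np

def fold_batchnorm_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution.

    W:     conv weights, shape (out_ch, in_ch, kH, kW)
    b:     conv bias, shape (out_ch,)
    gamma, beta, mean, var: per-output-channel BatchNorm parameters
    Returns (W', b') such that conv(x, W') + b' == BN(conv(x, W) + b),
    so the BatchNorm layer and its memory accesses disappear at inference.
    """
    scale = gamma / np.sqrt(var + eps)           # per-channel rescaling factor
    W_folded = W * scale[:, None, None, None]    # scale each output filter
    b_folded = (b - mean) * scale + beta         # absorb the BN shift into the bias
    return W_folded, b_folded
```

After folding, the network executes one fused layer instead of two, trading a one-time offline weight transformation for fewer reads and writes of intermediate activations in external memory.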
Information
Thesis director: Frédéric PÉTROT (TIMA - SLS)
Thesis supervisors:
Adrien PROST-BOUCLE (TIMA - SD)
Olivier MULLER (TIMA - SLS)
Thesis started on: 01/10/2024
Doctoral school: MSTII