There have been spectacular breakthroughs in artificial intelligence (AI) in recent years, particularly in deep neural networks, as demonstrated by developments like ChatGPT. But the millions of calculations that drive these technologies consume a great deal of energy. With demand for AI-based solutions constantly on the rise, François Leduc-Primeau, research professor in electrical engineering at Polytechnique Montréal, is focused on making these technologies more energy efficient.

Deep neural networks are inspired by the human brain and the way it learns from examples. To solve a problem, they perform a myriad of calculations governed by millions of learned parameters. And to use those parameters, they must constantly read them from memory, a highly energy-intensive process.
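To make that cost concrete, here is a minimal sketch that counts the multiply-accumulate operations and memory fetches in one forward pass of a tiny network. The layer sizes are illustrative assumptions (real DNNs have millions of parameters); the point is that every calculation requires reading a parameter from memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network (sizes are illustrative, far smaller than real DNNs).
layers = [(784, 256), (256, 10)]
weights = [rng.normal(0, 0.1, (n_in, n_out)) for n_in, n_out in layers]

x = rng.normal(0, 1, 784)
macs = 0            # multiply-accumulate operations performed
params_fetched = 0  # weight values read from memory

for W in weights:
    macs += W.size              # one multiply-add per weight
    params_fetched += W.size    # each weight must be fetched from memory
    x = np.maximum(W.T @ x, 0)  # dense layer followed by ReLU

print(macs, params_fetched)  # both 203264: one memory read per calculation
```

In a conventional processor, those two counters grow in lockstep, which is why moving the calculations to where the parameters are stored is attractive.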

To lower the electricity bill, Leduc-Primeau explored new types of electronic circuits that carry out the calculations directly where the network parameters are stored, much as the human brain does. But this in-memory processing makes the calculations imprecise, degrading the network's results. The researcher and his team have therefore proposed new methods to train neural networks to remain accurate even when the calculations are imprecise.
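One common way to train for robustness to imprecise hardware is noise injection: perturb the parameters during every training pass so the network settles on a solution that still works when its calculations are noisy. The sketch below is an illustrative assumption, not the team's actual method — the toy logistic model, the Gaussian noise model, and the noise level `sigma` are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two well-separated Gaussian blobs (a stand-in for a real task).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
sigma = 0.3  # assumed noise level modelling imprecise in-memory arithmetic
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # Noise injection: perturb the weights on every forward pass so training
    # finds parameters that still work when the hardware computes imprecisely.
    w_noisy = w + rng.normal(0, sigma, w.shape)
    p = sigmoid(X @ w_noisy + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Evaluate with the same simulated hardware noise applied at inference time.
w_test = w + rng.normal(0, sigma, w.shape)
acc = np.mean((sigmoid(X @ w_test + b) > 0.5) == y)
```

The trained classifier keeps high accuracy even though its weights are perturbed at test time, which is the behaviour sought for noisy in-memory circuits.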

The results are promising for several applications. Energy-efficient neural networks will make it possible to integrate AI into smaller devices (which require less energy to operate) through applications in robotics and medicine, for example. More broadly, the development of these specialized neural networks will help reduce the global energy consumption attributable to AI—a very good thing at a time when we must reconsider our collective environmental footprint.

References:

[1] Chitsaz, K., Mordido, G., David, J.-P., and Leduc-Primeau, F. (2023). “Training DNNs Resilient to Adversarial and Random Bit-flips by Learning Quantization Ranges”. Transactions on Machine Learning Research (TMLR).

[2] Kern, J., Henwood, S., Mordido, G., Dupraz, E., Aïssa-El-Bey, A., Savaria, Y., and Leduc-Primeau, F. (2024). “Fast and Accurate Output Error Estimation for Memristor-Based Deep Neural Networks”. IEEE Transactions on Signal Processing, vol. 72, pp. 1205–1218.