Thesis Details
Exploiting Approximate Arithmetic Circuits in Neural Networks Inference
This thesis is concerned with utilizing approximate circuits in neural networks to achieve energy savings. Various studies showing interesting results already exist, but most of them were application-specific or demonstrated only on a small scale. To take this further, we created a platform by non-trivially modifying the robust open-source framework TensorFlow, allowing us to simulate approximate computing on well-known state-of-the-art neural networks such as Inception or MobileNet. We focused solely on replacing the most computationally expensive parts of convolutional neural networks: the multiplication operations in convolution layers. We experimentally evaluated and compared various setups, and even though we proceeded without retraining, we obtained promising results. For example, with zero accuracy loss on the Inception v4 architecture, we gained almost 8% energy savings, which could be valuable especially in low-power devices or in large neural networks with enormous computational demands.
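The abstract describes replacing the exact multiplications inside convolution layers with approximate arithmetic circuits. A common way to simulate such a circuit in software is to precompute its full input-output behavior as a lookup table and read every product from that table during convolution. The sketch below illustrates this idea in plain NumPy under assumed details: the operand-truncating multiplier, the function names, and the 8-bit unsigned operands are illustrative choices, not the specific circuits or the TensorFlow integration used in the thesis.

```python
import numpy as np

def build_approx_mult_table(drop_bits=2):
    """LUT for a hypothetical approximate unsigned 8-bit multiplier that
    truncates the lowest `drop_bits` bits of each operand before multiplying.
    With drop_bits=0 the table reproduces the exact multiplier."""
    a = np.arange(256, dtype=np.uint32)
    mask = np.uint32(0xFF & ~((1 << drop_bits) - 1))
    ta = a & mask
    return np.outer(ta, ta)  # table[x, y] approximates x * y

def approx_conv2d(image, kernel, table):
    """Naive 'valid' 2-D convolution (cross-correlation, as in NN layers)
    where every elementwise product is read from the multiplier LUT."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.uint64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            # Advanced indexing looks up all kh*kw products at once.
            out[i, j] = table[patch, kernel].sum()
    return out

# Usage: an exact table (drop_bits=0) must match ordinary convolution.
table = build_approx_mult_table(drop_bits=0)
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
ker = np.ones((2, 2), dtype=np.uint8)
result = approx_conv2d(img, ker, table)
```

Increasing `drop_bits` trades accuracy for what would, in hardware, be a cheaper multiplier; sweeping such parameters per layer is the kind of setup comparison the abstract refers to.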
artificial intelligence, neural networks, approximate circuits, quantization, energy savings, TensorFlow, Inception, MobileNet
Bařina David, Ing., Ph.D. (DCGM FIT BUT), member
Češka Milan, doc. RNDr., Ph.D. (DITS FIT BUT), member
Chudý Peter, doc. Ing., Ph.D. MBA (DCGM FIT BUT), member
Polášek Ivan, doc. Ing., Ph.D. (FIIT STU), member
Zendulka Jaroslav, doc. Ing., CSc. (DIFS FIT BUT), member
@mastersthesis{FITMT22230,
  author   = "Tom\'{a}\v{s} Matula",
  type     = "Master's thesis",
  title    = "Exploiting Approximate Arithmetic Circuits in Neural Networks Inference",
  school   = "Brno University of Technology, Faculty of Information Technology",
  year     = 2019,
  location = "Brno, CZ",
  language = "english",
  url      = "https://www.fit.vut.cz/study/thesis/22230/"
}