Compression and Optimization of Neural Networks by Ternarization Methods

Authors

  • A.C. Tanasa Transilvania University of Brasov, Romania
  • D. Tanasa Transilvania University of Brasov, Romania

DOI:

https://doi.org/10.31926/but.ens.2020.13.62.1.10

Keywords:

neural networks, ternarization, threshold, sparsity

Abstract

Current deep neural networks achieve ever higher performance, but at a great computational cost, which makes them difficult to deploy on embedded platforms such as smartphones, tablets, and autonomous cars. Fortunately, this can be addressed using quantization techniques, one of which is ternarization, in which the weights are restricted to only three values.
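The abstract's ternarization idea can be illustrated with a minimal sketch: each weight is mapped to one of three values, {-1, 0, +1}, using a threshold. The function name `ternarize` and the default threshold heuristic (0.7 times the mean absolute weight, a common choice in the ternary-network literature) are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def ternarize(weights, delta=None):
    """Threshold-based ternarization: map each weight to {-1, 0, +1}.

    If no threshold is given, use the heuristic delta = 0.7 * mean(|w|)
    (an assumed default, not necessarily the one used in the paper).
    Weights with magnitude below the threshold become 0, which is the
    source of the sparsity mentioned in the keywords.
    """
    w = np.asarray(weights, dtype=np.float64)
    if delta is None:
        delta = 0.7 * np.mean(np.abs(w))
    t = np.zeros_like(w)          # start with all zeros (pruned weights)
    t[w > delta] = 1.0            # strongly positive weights -> +1
    t[w < -delta] = -1.0          # strongly negative weights -> -1
    return t, delta

# Small-magnitude weights fall below the threshold and become 0;
# large-magnitude ones collapse to +/-1.
w = np.array([0.9, -0.8, 0.05, -0.02, 0.4])
t, delta = ternarize(w)
```

Because the result contains only three distinct values, it can be stored in 2 bits per weight instead of 32, and the zeros can additionally be skipped at inference time.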

Author Biographies

A.C. Tanasa, Transilvania University of Brasov, Romania

Dept. of Electronics and Computers

D. Tanasa, Transilvania University of Brasov, Romania

Dept. of Electronics and Computers

Published

2021-01-26

Section

ELECTRICAL ENGINEERING, ELECTRONICS AND AUTOMATICS