Deep-learning-based forward-error-correction decoding techniques and optimizations for hardware implementation


Bibliographic record details
Main author: Καββουσανός, Εμμανουήλ
Other authors: Παλιουράς, Βασίλης
Format: Thesis
Language: English
Published: 2020
Subjects:
Available Online: http://hdl.handle.net/10889/13787
Description
Abstract: In recent years, Deep Learning has been adopted across a wide spectrum of applications, as it is a powerful problem-solving methodology applicable to extremely diverse fields. Various types of Artificial Neural Networks can be trained to perform a task with high accuracy; in computer vision and language processing problems, the effectiveness of such networks can even surpass that of humans. Beyond these typical applications, Deep Learning techniques have recently been examined for adoption in several telecommunication areas, including Forward Error Correction, and several works have investigated the training of neural networks for channel decoding. In this thesis, the Syndrome-based Deep Learning Decoder is considered for the BCH(63,45) code, with transmission over an AWGN channel using BPSK modulation. First, the training process of the neural network decoder is examined by searching for the optimal training hyperparameters. Furthermore, new neural network decoder architectures are explored beyond those suggested in the literature, and modifications to the existing decoding framework are proposed that remarkably improve the decoding performance. Moreover, the computational complexity of the Syndrome-based DL decoder is considered. Deep Learning decoding methods are hard to implement in hardware, as they normally require millions of operations per inference. For Deep Learning decoding to be a competitive candidate for practical applications, further research effort is required to reduce the computational complexity and storage requirements of the neural networks involved. To this end, a structured flow is presented that significantly compresses a trained Syndrome-based Neural Network Decoder by pruning up to 80% of the network weights and quantizing them to an 8-bit fixed-point representation.
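As a concrete illustration of the syndrome-based setup (BPSK transmission over AWGN, hard decisions, syndrome computation), the following minimal sketch uses the toy Hamming(7,4) parity-check matrix as a stand-in for BCH(63,45); the matrix choice, noise level, and function names are illustrative assumptions, not the thesis' actual framework.

```python
import numpy as np

# Toy Hamming(7,4) parity-check matrix -- a small stand-in for the
# BCH(63,45) code; the syndrome mechanics are identical.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bpsk(bits):
    """Map bits {0,1} to BPSK symbols {+1,-1}."""
    return 1.0 - 2.0 * bits

def syndrome(hard_bits):
    """Syndrome s = H.y mod 2; all-zero iff the hard decisions form a codeword."""
    return H @ hard_bits % 2

rng = np.random.default_rng(0)
codeword = np.zeros(7, dtype=int)                 # all-zero word is a valid codeword
rx = bpsk(codeword) + 0.1 * rng.normal(size=7)    # mild AWGN: no bit flips expected
hard = (rx < 0).astype(int)                       # hard decisions
print("clean syndrome:", syndrome(hard))          # all zeros: no error detected

hard[3] ^= 1                                      # inject a single bit error
print("after one flip:", syndrome(hard))          # equals column 3 of H
```

A nonzero syndrome flags that the hard decisions are not a codeword; roughly speaking, a syndrome-based neural decoder takes such a syndrome (together with channel reliabilities) as input rather than the raw channel output.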
The attained compressed Neural Network can then be used for inference, either by designing dedicated hardware or by using a generic Deep Learning hardware accelerator that exploits the compressed structure of the network. Finally, the deployment of the DL Decoder in an embedded application is showcased, using the AI Edge platform by Xilinx. Implementation results for the compressed DL Decoder are provided regarding latency, throughput, and BER performance.
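The compression flow described in the abstract (pruning up to 80% of the weights, then quantizing to 8-bit fixed point) can be sketched as follows. This is a minimal sketch assuming global magnitude pruning and a signed Q2.6 fixed-point format; the layer shape is arbitrary and the thesis' exact flow may differ.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.8):
    """Zero out the smallest-magnitude entries until `sparsity`
    fraction of the weights are exactly zero (global magnitude pruning)."""
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)                 # number of weights to remove
    if k == 0:
        return w.copy()
    thresh = np.partition(flat, k - 1)[k - 1]     # k-th smallest magnitude
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def quantize_fixed_point(w, total_bits=8, frac_bits=6):
    """Round to the nearest multiple of 2**-frac_bits and saturate to the
    signed range of `total_bits` bits (here Q2.6: 1 sign, 1 integer, 6 fraction)."""
    scale = 2.0 ** frac_bits
    qmax = 2 ** (total_bits - 1) - 1              # 127 for 8 bits
    qmin = -2 ** (total_bits - 1)                 # -128 for 8 bits
    return np.clip(np.round(w * scale), qmin, qmax) / scale

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.5, size=(45, 63))    # one illustrative dense layer
compressed = quantize_fixed_point(prune_magnitude(weights, 0.8))
print(f"sparsity after compression: {np.mean(compressed == 0):.2%}")
```

The zeros survive quantization unchanged, so a sparse storage format plus 8-bit words captures both savings at once, which is the property a compression-aware accelerator can exploit.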