Abstract: This thesis begins with the most important historical milestones for Neural Networks. This research area has a long history, usually said to have begun with McCulloch and Pitts and their neuron model. Other important historical milestones include the work of Donald Hebb, from which Hebbian Learning originated, as well as the research of Frank Rosenblatt on Perceptrons.
Afterwards, this work introduces Spiking Neural Networks (SNNs). Among the different kinds of neural networks, SNNs are considered the third generation, succeeding Perceptrons and networks of generalized McCulloch-Pitts neurons that employ an activation function with a continuous set of possible output values. Spiking neurons compute in a fundamentally different way from their "predecessors", basing their operation on neurophysiological principles.
Next, the thesis provides the theoretical foundation necessary for understanding and constructing Spiking Neural Networks. Specifically, in order to build these structures, we first have to study the function of the basic elements of biological neural systems, namely biological neurons and synapses. Neurons are the fundamental building blocks and information-processing units of neural systems, while synapses are the means of communication between neurons. Important concepts encountered in studying neurons and synapses include action potentials, spike trains, membrane potential and firing rates; a small worked example of the last two of these is sketched below.
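As a minimal illustration of these concepts, a spike train can be represented as a sequence of spike times, and a mean firing rate is simply the spike count divided by the observation window. The spike times and window length below are invented for the example and are not taken from the thesis:

```python
import numpy as np

# A spike train as an array of spike times in seconds (values are made up).
spike_times = np.array([0.012, 0.045, 0.081, 0.130, 0.190])

# Mean firing rate over an observation window T: rate = spike count / T.
T = 0.2  # observation window in seconds
rate = len(spike_times) / T  # spikes per second (Hz)
print(f"Mean firing rate: {rate:.1f} Hz")  # -> 25.0 Hz
```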
Utilizing the aforementioned knowledge, this work moves on to building neuron models, each with a different degree of biological fidelity and different capabilities. These are the models that can serve as the basic building blocks in the construction of larger neural structures. Three important examples of such models, and specifically the ones presented in this work, are the Spike Response Model, the Leaky Integrate-and-Fire Model and the Hodgkin-Huxley Model.
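For reference, one common textbook formulation of the Leaky Integrate-and-Fire model (the notation below is the standard convention, not necessarily the exact one used in the thesis) describes the membrane potential $u(t)$ as a leaky integrator of an input current $I(t)$, with a reset whenever a threshold is crossed:

```latex
\tau_m \frac{du}{dt} = -\left(u(t) - u_{\mathrm{rest}}\right) + R\, I(t),
\qquad \text{if } u(t) \ge \vartheta \text{ then } u \leftarrow u_{\mathrm{reset}}
```

Here $\tau_m$ is the membrane time constant, $R$ the membrane resistance, $u_{\mathrm{rest}}$ the resting potential, $\vartheta$ the firing threshold and $u_{\mathrm{reset}}$ the reset potential.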
After describing and building the models needed to explain the neural functions we are interested in, we have in our toolbox all we need to carry on with building Spiking Neural Networks. Hence, the thesis proceeds by connecting the neuron models to form different architectures and eventually deploying them in simulations, thus exploring their behaviour. The Leaky Integrate-and-Fire model is the one used in the simulations, and also the most commonly used in general, enabling us to easily implement and visualize a plethora of meaningful properties of neural systems; a minimal single-neuron simulation in this spirit is sketched below.
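As a sketch of what such a simulation might look like, a single LIF neuron can be integrated with a simple Euler scheme. All parameter values here are arbitrary illustrative choices, not values from the thesis:

```python
import numpy as np

# Euler integration of one Leaky Integrate-and-Fire neuron under constant input.
dt = 0.1e-3        # time step: 0.1 ms
T = 0.1            # total simulated time: 100 ms
tau_m = 10e-3      # membrane time constant: 10 ms
R = 10e6           # membrane resistance: 10 MOhm
u_rest = -70e-3    # resting potential: -70 mV
theta = -55e-3     # firing threshold: -55 mV
u_reset = -75e-3   # reset potential: -75 mV
I = 2e-9           # constant input current: 2 nA

u = u_rest
spike_times = []
for step in range(int(T / dt)):
    # Leaky integration: du/dt = (-(u - u_rest) + R*I) / tau_m
    u += dt * (-(u - u_rest) + R * I) / tau_m
    if u >= theta:                    # threshold crossing -> emit a spike
        spike_times.append(step * dt)
        u = u_reset                   # reset the membrane potential
if spike_times:
    print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1e3:.1f} ms")
```

With these values the steady-state potential ($u_{\mathrm{rest}} + RI = -50$ mV) lies above the threshold, so the neuron fires repeatedly, which is the regular-spiking behaviour the LIF model is typically used to demonstrate.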
Concluding this work, we explore in detail the neural encoding and decoding processes by building a pipeline that converts an analog signal into spike trains and then reconstructs it from those spike trains. The encoding process uses a specific neural architecture as an encoder, while the decoding procedure reconstructs the initial signal using signal-processing techniques.
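The abstract does not specify the encoder architecture, so purely as an illustration of the encode/decode idea, the sketch below uses a simple send-on-delta scheme (a stand-in, not the thesis's method): up/down spikes mark signal changes beyond a threshold, and the decoder re-accumulates them and smooths the result:

```python
import numpy as np

def encode(signal, threshold=0.1):
    """Emit a +1/-1 spike whenever the signal drifts by `threshold`."""
    spikes = np.zeros_like(signal)
    ref = signal[0]
    for i, x in enumerate(signal):
        if x - ref >= threshold:
            spikes[i], ref = +1.0, ref + threshold
        elif ref - x >= threshold:
            spikes[i], ref = -1.0, ref - threshold
    return spikes

def decode(spikes, start, threshold=0.1, win=5):
    """Re-accumulate the spikes, then smooth with a short moving average."""
    coarse = start + threshold * np.cumsum(spikes)
    kernel = np.ones(win) / win
    return np.convolve(coarse, kernel, mode="same")

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t)              # toy analog input
reconstruction = decode(encode(signal), start=signal[0])
print(f"RMS error: {np.sqrt(np.mean((signal - reconstruction) ** 2)):.3f}")
```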