Invertible neural networks
| Main Author: | Μπίφης, Αριστείδης |
|---|---|
| Other Authors: | Bifis, Aristeidis |
| Alternative Title: | Αντιστρέψιμα νευρωνικά δίκτυα (Invertible neural networks) |
| Language: | English |
| Published: | 2022 |
| Institution: | UPatras |
| Collection: | Nemertes |
| Subjects: | Machine learning; Deep learning; Autoencoders; Invertible neural networks; Variational autoencoders; Moore–Penrose inverse |
| Online Access: | https://hdl.handle.net/10889/23921 |
Description
This dissertation deals with a small but important area: invertible neural networks. Invertible neural networks constitute a special class of networks that address the problem of bijective function approximation, learning the mapping between input and output in both directions; they are best known through Normalizing Flows. The main reasons for researching invertible neural networks include producing generative models with exact likelihoods via invertible mappings, preserving mutual information, obtaining more memory-efficient backpropagation, and analyzing inverse problems.
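The record carries only the abstract, so the following PyTorch sketch is illustrative rather than taken from the thesis: it shows an affine coupling layer, the standard invertible building block behind Normalizing Flows, in which both the inverse map and the exact log-determinant of the Jacobian are available in closed form. All names here (`AffineCoupling`, `hidden`) are assumptions for this sketch.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling layer (illustrative sketch).

    The first half of the input passes through unchanged and parameterizes
    an affine map of the second half, so the layer is invertible by
    construction and the Jacobian log-determinant is exact.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        # Small network producing log-scale and shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)          # bound the scales for stability
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)         # exact log |det J|, no approximation
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(dim=4)
x = torch.randn(8, 4)
y, log_det = layer(x)
assert torch.allclose(layer.inverse(y), x, atol=1e-5)  # exact invertibility
```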
We focus specifically on standard autoencoders as well as variational autoencoders, a very specific family of invertible neural networks in which the input and the output are identical. Autoencoders and variational autoencoders project input data into a representation space, which we can regard as the forward transformation, and then try to reconstruct the input data from those representations, which corresponds to the inverse transformation.
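As a minimal illustration of this forward/inverse structure (an assumed sketch, not the thesis code; `Autoencoder`, `in_dim`, and `latent_dim` are hypothetical names), the two transformations are simply the encoder and the decoder:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal autoencoder: the encoder is the forward map into the
    representation space; the decoder approximates the inverse map
    back to the input space."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.Tanh())
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)        # forward transformation
        return self.decoder(z)     # inverse transformation (reconstruction)

# Training minimizes the reconstruction error between x and decoder(encoder(x)).
model = Autoencoder()
x = torch.randn(8, 784)
loss = nn.functional.mse_loss(model(x), x)
```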
We propose a new autoencoder architecture that utilizes pseudoinverse matrices, halving the number of trainable parameters and the memory cost of backpropagation compared to typical autoencoders. We also propose a new approach to training variational autoencoders, using two invertible architectures based on pseudoinverse matrices, and we show that this technique yields generative models capable of generalizing when fed random noise as input.
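The abstract does not spell out the architecture, but one plausible reading is to tie the decoder to the Moore–Penrose pseudoinverse of the encoder's weight matrix, so that only the encoder is trainable. The sketch below is a hypothetical reconstruction under that assumption; `PinvAutoencoder` and all dimensions are made-up names for illustration, not the author's implementation.

```python
import torch
import torch.nn as nn

class PinvAutoencoder(nn.Module):
    """Hypothetical pseudoinverse-tied autoencoder: only the encoder
    matrix W is trainable; the decoder is the Moore–Penrose pseudoinverse
    of W, recomputed on the fly, so the trainable parameter count is
    roughly half that of an untied autoencoder."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.W = nn.Parameter(torch.randn(latent_dim, in_dim) * 0.01)

    def forward(self, x):
        z = x @ self.W.T                     # encode: forward transformation
        W_pinv = torch.linalg.pinv(self.W)   # differentiable pseudoinverse
        return z @ W_pinv.T                  # decode: inverse transformation

model = PinvAutoencoder()
x = torch.randn(8, 784)
recon = model(x)  # train by minimizing MSE(recon, x); gradients flow through pinv
```

With `in_dim=784` and `latent_dim=32`, an untied autoencoder would train two 32×784 matrices while this sketch trains only one, which is where a roughly halved parameter count and backpropagation memory cost would come from under this reading.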