Deep learning in medical image analysis: a comparative analysis of multi-modal brain-MRI segmentation with 3D deep neural networks


Bibliographic record details
Main author: Αδάλογλου, Νικόλαος
Other authors: Δερματάς, Ευάγγελος
Format: Thesis
Language: English
Published: 2019
Subjects:
Available Online: http://hdl.handle.net/10889/12754
Description
Abstract: Volumetric segmentation of magnetic resonance images is essential for diagnosis, monitoring, and treatment planning. Manual delineation requires anatomical knowledge, is expensive and time consuming, and can be inaccurate due to human error. Automated segmentation can save physicians time and provide an accurate, reproducible solution for further analysis. In this thesis, automated brain segmentation from multi-modal 3D magnetic resonance images (MRIs) is studied. An extensive comparative analysis of state-of-the-art 3D deep neural networks for brain sub-region segmentation is performed. We start by describing the fundamentals of MR imaging, because it is crucial to understand the input data before training a deep architecture. Then, we provide the reader with an overview of how deep learning works by analyzing every component (layer) of a deep network in detail. After studying the fields of magnetic resonance imaging and deep learning separately, we offer a broader perspective on the intersection of these two fields across a range of applications of deep networks, from MR image reconstruction to medical image generation. Our work focuses on multi-modal brain segmentation. For our experiments, we used two common benchmark datasets from medical imaging challenges. Brain MR segmentation challenges aim to evaluate state-of-the-art methods for brain segmentation by providing a 3D MRI dataset with ground-truth tumor segmentation labels annotated by physicians. To evaluate state-of-the-art 3D architectures, we briefly analyze the original authors' approaches and provide the reader with the intuition behind their design choices. We perform a comparative analysis of the baseline architectures through extensive evaluations. The implemented networks were based on the specifications of the original papers.
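The abstract does not name the evaluation metric used in the comparative analysis, but the Dice similarity coefficient is the standard overlap measure in brain MR segmentation challenges of this kind. A minimal NumPy sketch, purely illustrative and not the thesis implementation (the function name `dice_score` and its arguments are hypothetical):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary 3D masks.

    pred, target: boolean (or 0/1) arrays of identical shape.
    eps: small constant that avoids division by zero when both
         masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 3D volume: a perfect match scores ~1, disjoint masks score 0.
a = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
print(round(dice_score(a, a), 4))       # ~1.0
print(dice_score(a, np.zeros_like(a)))  # 0.0
```

In multi-class brain segmentation the score is typically computed per label (e.g. one binary mask per tumor sub-region) and then averaged.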
Finally, we discuss the reported results and provide future directions for implementing an open-source medical segmentation library in PyTorch, along with data loaders for the most common medical MRI datasets. The goal is to produce a 3D deep learning library for medical imaging tasks. We strongly believe in open and reproducible deep learning research. To reproduce our results, the code (alpha release) and materials of this thesis are available at https://github.com/black0017/MedicalZooPytorch