# **Insights into the behavior of multi-task deep neural networks for medical image segmentation - Reproducibility Package**
This repository is a reproducibility package for the paper "Insights into the behavior of multi-task deep neural networks for medical image segmentation", published
in: 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). The paper can be found [here](https://ieeexplore.ieee.org/document/8918753).
The reproducibility package consists of two main parts: the first is devoted to the Mask R-CNN architecture, the second to the SA-FCN architecture.
The provided code allows you to reproduce all tables and figures from the paper, as well as to experiment with the training, prediction and post-processing methods.
- [Introduction](#introduction)
- [Mask R-CNN](#mask-r-cnn)
- [SA-FCN](#sa-fcn)
- [Cite Paper](#cite-paper)
- [Installation, tutorials and documentation](#installation-tutorials-and-documentation)
If you use the package, please cite the following paper:
```
L. T. Bienias, J. R. Guillamón, L. H. Nielsen and T. S. Alstrøm, "Insights Into The Behaviour Of Multi-Task Deep Neural Networks For Medical Image Segmentation," 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), Pittsburgh, PA, USA, 2019, pp. 1-6.
```
...
...
The SA-FCN architecture is described in the [article](https://arxiv.org/abs/1706.04737).
The package consists of an implementation written in the PyTorch framework, following the information included in the article as well as the Lua implementation,
In order to use the package you need to have a properly prepared environment. It includes:
* Python 3
* CUDA
* TensorFlow
* PyTorch
and many more, which are listed in `requirements.txt` files (each architecture has its own list of necessary dependencies).
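A quick way to confirm that the core frameworks listed above are importable in your current environment is a check like the following (a standalone sketch, not part of the repository; the package names are the usual import names for PyTorch and TensorFlow):

```python
# Sanity-check that the core dependencies are importable
# before running the experiments.
import importlib.util

def is_installed(name: str) -> bool:
    """True if a top-level module with this name can be found."""
    return importlib.util.find_spec(name) is not None

for pkg in ("torch", "tensorflow"):  # import names for PyTorch / TensorFlow
    print(pkg, "found" if is_installed(pkg) else "MISSING")
```

If either framework is reported missing, install the appropriate `requirements.txt` first.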
...
...
In order to run the end-to-end experiment generating all tables and figures, which includes:
* carrying out post-processing of the samples, saving the post-processed samples and calculating final scores
* generating a table consisting of the scores
* generating Figure 4, Figure 5 and Figure 6 from the paper
* generating Table 1 and Table 2 from the paper
please follow these steps:
1. Log in to DTU Compute cluster via ThinLinc.
...
...
Table 1 is represented by three separate tables, describing each post-processing method separately.






*Table 2. Comparison of the Mask R-CNN and the SA-FCN models’ performance in terms of classification (F1 Score) and segmentation (Dice Index). Scores are represented by the mean
value from five trainings, followed by the standard error of the mean.*
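The mean-plus-SEM reporting used in the tables can be reproduced with a short sketch (the five scores below are hypothetical, not taken from the paper):

```python
import numpy as np

def mean_and_sem(scores):
    """Mean and standard error of the mean (sample std / sqrt(n))
    over repeated training runs."""
    scores = np.asarray(scores, dtype=float)
    sem = scores.std(ddof=1) / np.sqrt(len(scores))
    return scores.mean(), sem

# e.g. hypothetical Dice scores from five trainings:
mean, sem = mean_and_sem([0.81, 0.79, 0.83, 0.80, 0.82])
print(f"{mean:.3f} ± {sem:.3f}")
```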
...
...
Table 2 is represented by two separate tables, describing each model's scores separately.
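As an illustration of the segmentation metric reported above, here is a minimal Dice Index for binary masks (a sketch, not the package's actual evaluation code; note that for binary masks the Dice Index coincides with the pixel-wise F1 Score):

```python
import numpy as np

def dice_index(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:          # both masks empty: perfect agreement by convention
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```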




*Fig. 4. Visualisation of three post-processing methods on the example of one sample. Image headers describe the post-processing actions applied to the sample.*


*Fig. 5. Visualisation of the contour prediction of a sample, presenting the misalignment problem. The bottom right image shows superposed ground truth and prediction
labelling. Yellow colour indicates ground truth pixels not overlapping with the prediction, orange indicates prediction pixels not overlapping with the ground truth, and
black colour is used to mark properly predicted pixels.*


*Fig. 6. Visualisation of the same sample prediction before and after post-processing for the Mask R-CNN and SA-FCN models.*