Insights into the behavior of multi-task deep neural networks for medical image segmentation - Reproducibility Package
This repository is a reproducibility package for the paper "Insights into the behavior of multi-task deep neural networks for medical image segmentation", published in: 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). The paper can be found here.
The reproducibility package consists of two main parts. The first one is devoted to the Mask R-CNN architecture, while the second one covers the SA-FCN architecture. The provided code allows you to reproduce all tables and figures in the paper, as well as to experiment with the training, prediction and post-processing methods.
- Introduction
- Mask R-CNN
- SA-FCN
- Cite Paper
- Installation, tutorials and documentation
- Dataset
- Step by Step Detection Mask R-CNN
- Step by Step Detection SA-FCN
- Authors
- License
- Acknowledgments
Introduction
Glandular morphology is used by pathologists to assess the malignancy of different adenocarcinomas. This process involves a gland segmentation task. The common approach in specialised domains, such as medical imaging, is to design complex architectures in a multi-task learning setup. Generally, these approaches rely on substantial post-processing efforts. Moreover, a predominant notion is that general-purpose models are not suitable for gland instance segmentation. We analyse the behaviour of two architectures: SA-FCN and Mask R-CNN. We compare the impact of post-processing on the final predictive results and the performance of generic and specific models for the gland segmentation problem. Our results highlight the dependency of tailored models on post-processing, as well as comparable results when using a generic model. Thus, in the interest of time, it is worth considering using and improving generic models, as opposed to designing complex architectures, when tackling new domains.
Cite Paper
If you use the package, please cite the following works:
L. T. Bienias, J. R. Guillamón, L. H. Nielsen and T. S. Alstrøm, "Insights Into The Behaviour Of Multi-Task Deep Neural Networks For Medical Image Segmentation," 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), Pittsburgh, PA, USA, 2019, pp. 1-6.
Bibtex entry:
@inproceedings{bienias2019insights,
title={Insights into the behaviour of multi-task deep neural networks for medical image segmentation},
author={Bienias, Lukasz T and Guillam{\'o}n, Juanjo R and Nielsen, Line H and Alstr{\o}m, Tommy S},
booktitle={2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP)},
pages={1--6},
year={2019},
organization={IEEE}
}
For more information about our research group, please visit the Section for Cognitive Systems website at the Technical University of Denmark (DTU).
We are interested in feedback and error reporting. Please contact us via email or open an issue in the repository if you have any problem, comment or suggestion, or if you have found a mistake.
Mask R-CNN
The Mask R-CNN architecture is described in the article.
In the paper we used an implementation written in the TensorFlow framework, which comes from the repository. The code has been adapted to our purposes.
SA-FCN
The SA-FCN architecture is described in the article.
The package contains an implementation written in the PyTorch framework, following the information included in the article as well as the Lua implementation provided by the original authors.
Installation, tutorials and documentation
This is a quick installation guide.
Installation requirements
In order to use the package you need a properly prepared environment. It is necessary to have the following packages installed:
- Python 3
- CUDA
- TensorFlow
- PyTorch
and many more, which are listed in the 'requirements.txt' files (each architecture has its own list of necessary dependencies).
You can install all the dependencies with:
conda install --file requirements.txt
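If you want to keep the dependencies isolated, a minimal sketch of a full setup is shown below; the environment name 'mlsp2019' is purely illustrative, and the install step has to be repeated with each architecture's own requirements.txt:

    # Create and activate a dedicated environment (the name is illustrative)
    conda create --name mlsp2019 python=3
    conda activate mlsp2019
    # Install the dependency list of the architecture you want to run
    conda install --file requirements.txt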
Download Package
To download the package you can simply clone this GitLab repository using the following command:
$ git clone https://lab.compute.dtu.dk/lutobi/mlsp2019_software_package/
All the contents of the repository can also be downloaded from the GitLab site by using the "Download ZIP" button.
Run everything
To run the end-to-end experiment that generates all tables and figures from the paper, including the following activities:
- Mask R-CNN part:
- training 5 separate models
- generating 5 predictions for each sample, based on 5 different models
- carrying out post-processing of the samples and calculating the final scores
- generating a table of the scores
- SA-FCN part:
- training 5 separate models
- generating 5 predictions for each sample, based on 5 different models
- carrying out post-processing of the samples, saving the post-processed samples and calculating the final scores
- generating a table of the scores
- generating Figure 4, Figure 5 and Figure 6 from the paper
- generating Table 1 and Table 2 from the paper
please follow these steps:
- Log in to the DTU Compute cluster via ThinLinc.
- Open a gterm terminal.
- Log in to one of the available GPU nodes, for instance:
ssh titan11
- Activate your environment, for instance:
conda activate lutobi
- Check which GPUs are available:
gpustat
- Claim the available GPUs, for instance:
export CUDA_VISIBLE_DEVICES="0,1"
- Go to the directory of the downloaded repo, for instance:
cd /dtu-compute/s162377/mlsp2019_software_package/
- Open the file 'mask_rcnn/run_maskrcnn.sh' and check that the dataset path is properly defined.
- Check that you are on the correct branch of the repo.
- Run the bash script by calling:
./run_all.sh
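For convenience, the same steps are collected below as a single shell session; the node name, environment name and path are just the examples used above and should be adapted to your own setup:

    ssh titan11                          # log in to an available GPU node
    conda activate lutobi                # activate your environment
    gpustat                              # check which GPUs are free
    export CUDA_VISIBLE_DEVICES="0,1"    # claim the free GPUs
    cd /dtu-compute/s162377/mlsp2019_software_package/
    git rev-parse --abbrev-ref HEAD      # verify you are on the correct branch
    ./run_all.sh                         # run the full experiment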
Run Mask R-CNN
In order to run the end-to-end experiment for Mask R-CNN, which consists of:
- training 5 separate models
- generating 5 predictions for each sample, based on 5 different models
- carrying out post-processing of the samples and calculating the final scores
- generating a table of the scores
please follow these steps:
- Log in to the DTU Compute cluster via ThinLinc.
- Open a gterm terminal.
- Log in to one of the available GPU nodes, for instance:
ssh titan11
- Activate your environment, for instance:
conda activate lutobi
- Check which GPUs are available:
gpustat
- Claim the available GPUs, for instance:
export CUDA_VISIBLE_DEVICES="0,1"
- Go to the directory of the downloaded repo, for instance:
cd /dtu-compute/s162377/mlsp2019_software_package/mask_rcnn_git/
- Open the file run_maskrcnn.sh and check that the dataset path is properly defined (see the snippet after this list).
- Check that you are on the correct branch of the repo.
- Run the bash script by calling:
./run_maskrcnn.sh
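A quick way to verify the dataset path without opening an editor is to search the script for it; the patterns below are only guesses at how the path variable might be named, so rely on the script itself for the actual spelling:

    # List every line in the script that looks like a dataset location
    # (the patterns are guesses, not names taken from the script)
    grep -inE "dataset|data_dir|data_path" run_maskrcnn.sh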
Run SA-FCN
In order to run the end-to-end experiment for SA-FCN, which consists of:
- training 5 separate models
- generating 5 predictions for each sample, based on 5 different models
- carrying out post-processing of the samples, saving the post-processed samples and calculating the final scores
- generating a table of the scores
please follow these steps:
- Log in to the DTU Compute cluster via ThinLinc.
- Open a gterm terminal.
- Log in to one of the available GPU nodes, for instance:
ssh titan11
- Activate your environment, for instance:
source activate s162377
- Check which GPUs are available:
gpustat
- Claim the available GPUs, for instance:
export CUDA_VISIBLE_DEVICES="1"
- Go to the directory of the downloaded repo, for instance:
cd /dtu-compute/s162377/mlsp2019_software_package/sa_fcn_thesis/
- Check that you are on the correct branch of the repo (see the snippet after this list).
- Run the bash script by calling:
./run_safcn.sh
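The steps above do not name the expected branch, so check the repository documentation for that; to see which branch is currently checked out you can use:

    # Print the currently checked-out branch
    git rev-parse --abbrev-ref HEAD
    # Or list all local branches, with the current one marked by an asterisk
    git branch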
More information on the package content can be found in the documentation file.
Documentation
Please find documentation here.
Dataset
Step by Step Detection Mask R-CNN
Training
Predicting
Post Processing
Scores
Outputs
Step by Step Detection SA-FCN
Training
Predicting
Post Processing
Scores
Outputs
Authors
1 Lukasz T. Bienias lutobi@dtu.dk
1 Juanjo R. Guillamon jugu@dtu.dk
2 Line H. Nielsen lihan@dtu.dk
1 Tommy S. Alstrøm tsal@dtu.dk
1 Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads 324, 2800 Kgs. Lyngby, Denmark
2 Department of Health Technology, Technical University of Denmark, Ørsteds Plads 345C, 2800 Kgs. Lyngby, Denmark
License
Copyright 2020 Lukasz Tomasz Bienias
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Acknowledgments
This research was funded by the IDUN Center of Excellence, supported by the Danish National Research Foundation (DNRF122) and the Velux Foundations (Grant No. 9301). We also thank the NVIDIA Corporation for donating a GPU.