All analysed models in this thesis are evaluated on the dataset provided by the MICCAI 2015 Gland Segmentation Challenge Contest [article](https://arxiv.org/abs/1603.00275).
The dataset consists of 165 labelled colorectal cancer histological images: 85 images belong to the training set and 80 images form the test set, which is divided
into two subsets. Test set A contains 60 images, while test set B contains 20 images. The training set consists of 37 benign and 48 malignant sections, test set A contains
33 benign and 27 malignant sections, and test set B contains 4 benign and 16 malignant sections. Due to the characteristics of the SA-FCN architecture, the authors prepared
their own version of the labelled images. The main purpose of this step is to add information about the contours of the glands, which is required to train the model; a minimal sketch of how such contour labels can be derived is shown below.
The figure below shows a few sample images from the dataset, together with the corresponding original labelling and the SA-FCN labelling version.
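As a rough illustration of how such contour labels could be generated from the original instance-labelled masks, a minimal sketch using scikit-image is given below. The function name `gland_contours`, the boundary width, and the use of `find_boundaries` are illustrative assumptions, not the exact procedure used in this package.

```python
import numpy as np
from skimage.morphology import binary_dilation, disk
from skimage.segmentation import find_boundaries


def gland_contours(instance_mask: np.ndarray, width: int = 2) -> np.ndarray:
    """Derive a binary gland-contour map from an instance-labelled GlaS mask.

    `instance_mask` is assumed to be a 2D integer array in which 0 is background
    and every gland carries its own positive label, as in the original GlaS
    annotations. The contour width is an illustrative placeholder.
    """
    # Pixels lying on a boundary between a gland and the background
    # (or between two touching glands).
    boundaries = find_boundaries(instance_mask, mode="thick")
    # Thicken the one-pixel boundary so the contour class is learnable.
    return binary_dilation(boundaries, disk(width)).astype(np.uint8)
```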

More information about the dataset, as well as a description of the evaluation metrics, can be found on the contest [website](https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/).
Apart from the information printed to the console, you can also verify the correctness of the experiment by inspecting the output files. Below we
list all produced files together with examples. Please note that the output files of this reproducibility package correspond to the figures and tables from the paper.
*Table 1. Classification (F1 Score) and segmentation (Dice Index) scores using three different post-processing methods. Scores are reported as the mean value over five training
runs followed by the standard error of the mean.*
Table 1 is represented by three separate tables, one for each post-processing method.
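The scores in all tables are aggregated in the same way: the mean over the five training runs plus the standard error of the mean. A minimal sketch of this aggregation is given below; the five F1 values are placeholders, not actual results from the experiments.

```python
import numpy as np

# Placeholder F1 scores from five independent training runs (not real results).
f1_runs = np.array([0.86, 0.88, 0.85, 0.87, 0.89])

mean = f1_runs.mean()
# Standard error of the mean: sample standard deviation (ddof=1) over sqrt(n).
sem = f1_runs.std(ddof=1) / np.sqrt(len(f1_runs))

print(f"F1: {mean:.3f} ± {sem:.3f}")
```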



*Table 2. Comparison of the Mask R-CNN and the SA-FCN models’ performance in terms of classification (F1 Score) and segmentation (Dice Index). Scores are reported as the mean
value over five training runs followed by the standard error of the mean.*
Table 2 is represented by two separate tables, one for each model's scores.
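For reference, the segmentation score reported in both tables is the Dice index. The sketch below shows only the plain pixel-level formula for two binary masks; the GlaS contest actually reports an object-level Dice with additional matching rules, so this is just an illustration of the underlying measure.

```python
import numpy as np


def dice_index(gt: np.ndarray, pred: np.ndarray, eps: float = 1e-7) -> float:
    """Pixel-level Dice index between two binary masks (illustrative only)."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    return 2.0 * intersection / (gt.sum() + pred.sum() + eps)
```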


*Fig. 4. Visualisation of three post-processing methods on the example of one sample. The image headers describe the post-processing actions applied to the sample.*
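The exact post-processing actions are listed in the image headers of Fig. 4. Purely as a hedged sketch, a typical clean-up pipeline for a binary gland prediction might look like the following; the `min_size` threshold and the specific operations are assumptions, not necessarily those used in the thesis.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import remove_small_holes, remove_small_objects


def postprocess(pred_mask: np.ndarray, min_size: int = 500) -> np.ndarray:
    """Illustrative clean-up of a binary gland prediction."""
    mask = pred_mask.astype(bool)
    # Fill small holes inside predicted glands.
    mask = remove_small_holes(mask, area_threshold=min_size)
    # Discard tiny spurious regions.
    mask = remove_small_objects(mask, min_size=min_size)
    # Re-label the cleaned mask so every gland gets a unique instance id.
    labelled, _ = ndimage.label(mask)
    return labelled
```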

*Fig. 5. Visualisation of the contour prediction for one sample, illustrating the misalignment problem. The bottom-right image shows the superposed ground truth and prediction
labelling. Yellow indicates ground-truth pixels not overlapping with the prediction, orange indicates predicted pixels not overlapping with the ground truth, and
black marks correctly predicted pixels.*
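An overlay like the one in Fig. 5 can be produced directly from two binary masks. The sketch below follows the colour convention described in the caption; the function name and the exact RGB values are illustrative assumptions.

```python
import numpy as np


def overlay(gt: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Colour-code agreement between binary ground truth and prediction masks."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    img = np.full((*gt.shape, 3), 255, dtype=np.uint8)  # white background
    img[gt & ~pred] = (255, 255, 0)   # yellow: ground truth missed by the prediction
    img[pred & ~gt] = (255, 165, 0)   # orange: prediction outside the ground truth
    img[gt & pred] = (0, 0, 0)        # black: correctly predicted pixels
    return img
```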

*Fig. 6. Visualisation of the same sample prediction before and after post-processing for the Mask R-CNN and SA-FCN models.*

<!--
## Step by Step Detection Mask R-CNN
In this section we provide a few tips on how to run each part of the experiment separately. We also provide examples of the outputs.