With the increase in available large clinical and experimental datasets,
a substantial amount of work has been done to address the challenges of
biomedical image analysis. Image segmentation, which is crucial for any
quantitative analysis, has attracted particular attention. Recent hardware
advances have contributed to the success of deep learning approaches.
However, although deep learning models are trained on large datasets,
existing methods do not effectively use the information from different
learning epochs. In this work, we leverage the information of each training epoch
to prune the prediction maps of the subsequent epochs. We propose a novel
architecture called feedback attention network (FANet) that unifies the
previous epoch mask with the feature map of the current training epoch. The
previous epoch mask is then used to provide hard attention to the learned
feature maps at different convolutional layers. The network also allows the
predictions to be rectified iteratively at test time. We show
that our proposed feedback attention model yields a substantial improvement
on most segmentation metrics across seven publicly available biomedical
imaging datasets, demonstrating the effectiveness of the proposed FANet.
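The abstract does not specify implementation details, but the core idea of hard attention via the previous epoch's mask can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `hard_attention`, the NumPy arrays, and the 0.5 threshold are all assumptions made for clarity.

```python
import numpy as np

def hard_attention(feature_map, prev_mask, threshold=0.5):
    """Prune a feature map using the prediction mask from the previous epoch.

    feature_map: (C, H, W) array of learned features at some layer.
    prev_mask:   (H, W) prediction from the previous epoch; binarizing it
                 makes the attention "hard" (regions are kept or zeroed
                 outright, rather than softly reweighted).
    """
    binary = (prev_mask > threshold).astype(feature_map.dtype)
    # Broadcast the (H, W) mask over all C channels.
    return feature_map * binary

# Toy example: a confident foreground region survives, the rest is pruned.
feats = np.ones((2, 4, 4))
prev = np.zeros((4, 4))
prev[1:3, 1:3] = 0.9  # predicted foreground from the previous epoch
pruned = hard_attention(feats, prev)
```

At test time the same mechanism can be applied iteratively: the current prediction is fed back as `prev_mask` for the next forward pass, refining the segmentation.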