Abstract: Conventional multi-task deep networks typically share most of their layers (i.e., the layers learning feature representations) across all tasks, which may limit their data-fitting ability, since the specificities of different tasks are inevitably ignored. This study proposes a hierarchically-fused multi-task fully-convolutional network, called HFFCN, to tackle the challenging task of prostate segmentation in CT images. Specifically, prostate segmentation is formulated as a multi-task learning problem, comprising a main task that segments the prostate and a supplementary task that regresses the prostate boundary. The supplementary task helps to accurately delineate the prostate boundary, which is often very unclear in CT images. Accordingly, the HFFCN adopts a two-branch structure consisting of a shared encoding path and two complementary decoding paths. In contrast to conventional multi-task networks, an information sharing (IS) module is also proposed to exchange information between the two decoding branches at each level, enabling the HFFCN to hierarchically learn complementary feature representations for the two tasks while simultaneously preserving the task-specific features learned for each. The HFFCN is comprehensively evaluated on a large CT image dataset comprising 313 images acquired from 313 patients. The experimental results demonstrate that the proposed HFFCN outperforms both state-of-the-art segmentation methods and conventional multi-task learning methods.
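As a rough illustration of the architecture summarized above, the following is a minimal, hypothetical PyTorch sketch of a shared encoder feeding two decoding branches (segmentation and boundary regression) that exchange features through an IS-style module at each decoding level. The depth, channel sizes, concatenation-based fusion, and module names are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): shared encoder, two decoding branches,
# and an information-sharing (IS) module coupling the branches at each level.
# Channel sizes, network depth, and the 1x1-conv fusion are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class ISModule(nn.Module):
    """Exchanges features between the two decoding branches at one level."""
    def __init__(self, ch):
        super().__init__()
        self.fuse_seg = nn.Conv2d(2 * ch, ch, 1)  # inject boundary features into segmentation branch
        self.fuse_bnd = nn.Conv2d(2 * ch, ch, 1)  # inject segmentation features into boundary branch
    def forward(self, f_seg, f_bnd):
        seg = self.fuse_seg(torch.cat([f_seg, f_bnd], dim=1))
        bnd = self.fuse_bnd(torch.cat([f_bnd, f_seg], dim=1))
        return seg, bnd

class TwoBranchNet(nn.Module):
    def __init__(self, in_ch=1, chs=(32, 64, 128)):
        super().__init__()
        # Shared encoding path
        self.enc = nn.ModuleList([conv_block(in_ch, chs[0]),
                                  conv_block(chs[0], chs[1]),
                                  conv_block(chs[1], chs[2])])
        self.pool = nn.MaxPool2d(2)
        # Two complementary decoding paths (upsampling weights shared here for brevity)
        self.up = nn.ModuleList([nn.ConvTranspose2d(chs[2], chs[1], 2, stride=2),
                                 nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)])
        self.dec_seg = nn.ModuleList([conv_block(2 * chs[1], chs[1]), conv_block(2 * chs[0], chs[0])])
        self.dec_bnd = nn.ModuleList([conv_block(2 * chs[1], chs[1]), conv_block(2 * chs[0], chs[0])])
        self.is_mods = nn.ModuleList([ISModule(chs[1]), ISModule(chs[0])])
        self.head_seg = nn.Conv2d(chs[0], 1, 1)  # prostate mask (main task)
        self.head_bnd = nn.Conv2d(chs[0], 1, 1)  # boundary map (supplementary task)
    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < len(self.enc) - 1:
                skips.append(x)
                x = self.pool(x)
        f_seg = f_bnd = x
        for up, dseg, dbnd, ism, skip in zip(self.up, self.dec_seg, self.dec_bnd,
                                             self.is_mods, reversed(skips)):
            f_seg = dseg(torch.cat([up(f_seg), skip], dim=1))
            f_bnd = dbnd(torch.cat([up(f_bnd), skip], dim=1))
            f_seg, f_bnd = ism(f_seg, f_bnd)  # hierarchical information sharing at this level
        return self.head_seg(f_seg), self.head_bnd(f_bnd)

# Example usage: a single-channel CT slice produces a segmentation map and a boundary map.
if __name__ == "__main__":
    seg, bnd = TwoBranchNet()(torch.randn(1, 1, 128, 128))
    print(seg.shape, bnd.shape)  # torch.Size([1, 1, 128, 128]) twice
```

In this sketch, the two heads would be trained jointly, e.g., with a segmentation loss on the mask output and a regression loss on the boundary output; the specific loss functions used by the paper are not stated in the abstract.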