Document-level joint entity and relation extraction is a challenging
information extraction problem that requires a unified approach where a single
neural network performs four sub-tasks: mention detection, coreference
resolution, entity classification, and relation extraction. Existing methods
often adopt a sequential multi-task learning approach in which an arbitrary
decomposition makes each task depend only on the previous one, overlooking
possibly more complex relationships between the tasks.
In this paper, we present a multi-task learning framework with bidirectional
memory-like dependency between tasks to address these shortcomings and solve
the joint problem more accurately. Our empirical studies show that the proposed
approach outperforms the existing methods and achieves state-of-the-art results
on the BioCreative V CDR corpus.
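The contrast between a strictly sequential task pipeline and a bidirectional,
memory-like dependency between tasks can be sketched abstractly as follows.
This is a minimal illustration, not the paper's architecture: the function
names, the pooling rule, and the fixed number of refinement rounds are all
assumptions made for the example.

```python
# Illustrative sketch only: contrasts a sequential multi-task pipeline,
# where each task sees just its predecessor's output, with a scheme in
# which every task reads from and writes to a shared, memory-like state,
# so earlier tasks can also benefit from later ones.

def run_sequential(tasks, x):
    """Each task conditions only on the previous task's output."""
    out = x
    results = []
    for task in tasks:
        out = task(out)
        results.append(out)
    return results

def run_bidirectional(tasks, x, rounds=2):
    """Tasks repeatedly refine a shared memory with one slot per task.

    Because each task reads a pool over *all* slots, information flows
    both forward and backward between tasks across refinement rounds.
    """
    memory = [x] * len(tasks)  # one memory slot per task
    for _ in range(rounds):
        for i, task in enumerate(tasks):
            pooled = sum(memory) / len(memory)  # toy pooling over all slots
            memory[i] = task(pooled)
    return memory
```

With two toy numeric "tasks" such as `lambda v: v + 1` and `lambda v: v * 2`,
the sequential variant fixes the information flow in one direction, while the
bidirectional variant lets the first task's second-round output depend on the
second task's first-round output, mimicking the mutual dependency between,
e.g., coreference resolution and relation extraction.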