A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation. (arXiv:2310.03747v1 [eess.SP])
Owing to the abundant neurophysiological information carried by
electroencephalogram (EEG) signals, deep learning methods applied to EEG have
gained substantial traction across numerous real-world tasks.
However, the development of supervised learning methods for EEG has been
hindered by the high cost of, and significant label discrepancies in, manually
labeling large-scale EEG datasets. Self-supervised frameworks have been adopted
in the vision and language domains to address this issue, but the lack of
EEG-specific theoretical foundations hampers their applicability across various
EEG tasks. To address these challenges, this paper proposes a knowledge-driven
cross-view contrastive learning framework (KDC2), which integrates neurological
theory to extract effective representations from EEG with limited labels. The
KDC2 method constructs scalp and neural views of EEG signals, simulating the
external and internal representations of brain activity, respectively.
Subsequently, inter-view and cross-view contrastive learning pipelines,
combined with various augmentation methods, are applied to capture neural
features from the different views. By modeling prior neural knowledge based on
the theory of homologous neural information consistency, the proposed method
extracts invariant and complementary neural knowledge to generate combined
representations.
Experimental results on several downstream tasks demonstrate that our method
outperforms state-of-the-art approaches, highlighting the superior
generalization of neural-knowledge-supported EEG representations across diverse
brain tasks.
Source: https://arxiv.org/abs/2310.03747
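
To make the cross-view contrastive idea concrete, below is a minimal,
hypothetical PyTorch sketch of an InfoNCE-style loss between embeddings of two
EEG views (e.g., a scalp view and a neural view). The encoder architecture
(`ViewEncoder`), the loss function name (`cross_view_infonce`), the view
dimensions, batch size, and temperature are all illustrative assumptions; the
paper's actual view construction, augmentations, encoders, and knowledge
modeling are not reproduced here.

```python
# Hypothetical sketch: cross-view contrastive learning between two EEG views.
# Matched (scalp, neural) pairs from the same trial act as positives; all
# other pairs in the batch act as negatives. Not the KDC2 implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """Toy encoder mapping a flattened EEG view to a unit-norm embedding."""

    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # L2-normalize embeddings


def cross_view_infonce(z_a: torch.Tensor, z_b: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss: positives sit on the diagonal of the
    batch-wise similarity matrix between the two views."""
    logits = z_a @ z_b.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0))       # index of each positive pair
    # Contrast in both directions: a -> b and b -> a
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, scalp_dim, neural_dim = 32, 620, 480   # hypothetical view sizes
    enc_scalp, enc_neural = ViewEncoder(scalp_dim), ViewEncoder(neural_dim)
    scalp_view = torch.randn(batch, scalp_dim)    # stand-ins for real EEG views
    neural_view = torch.randn(batch, neural_dim)
    loss = cross_view_infonce(enc_scalp(scalp_view), enc_neural(neural_view))
    print(f"cross-view contrastive loss: {loss.item():.4f}")
```

The symmetric form contrasts scalp-to-neural and neural-to-scalp directions,
loosely mirroring the abstract's goal of extracting invariant features shared
across homologous views of the same brain activity.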