Uncovering Neural Scaling Laws in Molecular Representation Learning. (arXiv:2309.15123v1 [physics.chem-ph])

Molecular Representation Learning (MRL) has emerged as a powerful tool for
drug and materials discovery, supporting tasks such as virtual screening
and inverse design. While model-centric techniques have attracted a surge of
interest, the influence of data quantity and quality on molecular
representations remains poorly understood in this field. In
this paper, we delve into the neural scaling behaviors of MRL from a
data-centric viewpoint, examining four key dimensions: (1) data modalities, (2)
dataset splitting, (3) the role of pre-training, and (4) model capacity. Our
empirical studies confirm a consistent power-law relationship between data
volume and MRL performance across these dimensions. Additionally, through
detailed analysis, we identify potential avenues for improving learning
efficiency. To probe whether these scaling laws can be broken, we adapt seven
popular data pruning strategies to molecular data and benchmark their performance. Our
findings underline the importance of data-centric MRL and highlight possible
directions for future research.
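
The abstract does not state the fitted functional form; a common parameterization of neural scaling laws is err(n) ≈ a · n^(−b) + c, where n is the training-set size and c is an irreducible error floor. The sketch below, with illustrative numbers rather than results from the paper, shows how such a curve is typically fit with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Common scaling-law form: error ~ a * n^(-b) + c,
    # where c models an irreducible error floor.
    return a * np.power(n, -b) + c

# Hypothetical (training-set size, validation error) pairs -- illustrative only.
n_train = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
val_err = np.array([0.52, 0.41, 0.33, 0.27, 0.23])

# Fit with positivity constraints on (a, b, c).
(a, b, c), _ = curve_fit(power_law, n_train, val_err,
                         p0=(1.0, 0.3, 0.1), bounds=(0, np.inf))
print(f"fit: err(n) ~= {a:.2f} * n^(-{b:.2f}) + {c:.2f}")
```

On a log-log plot, err(n) − c is a straight line with slope −b, which is the signature of the power-law behavior the paper reports across data modalities, dataset splits, pre-training settings, and model capacities.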
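
The abstract does not name the seven pruning strategies, so the following is only a generic sketch of one common family: score-based pruning, which ranks examples by a difficulty score (here, a per-example loss) and keeps a fixed fraction. The function and values are hypothetical, not one of the paper's benchmarked strategies:

```python
import numpy as np

def prune_by_score(scores, keep_frac=0.5, keep="hard"):
    """Keep a keep_frac fraction of examples ranked by a difficulty
    score (e.g., per-molecule training loss). Hypothetical sketch."""
    k = int(len(scores) * keep_frac)
    order = np.argsort(scores)            # ascending: easy -> hard
    kept = order[-k:] if keep == "hard" else order[:k]
    return np.sort(kept)

scores = np.random.rand(1000)             # stand-in for per-molecule losses
kept_idx = prune_by_score(scores, keep_frac=0.3, keep="hard")
print(f"kept {len(kept_idx)} of {len(scores)} molecules")
```

Whether pruning beats the power law depends on whether the kept subset is more informative per example than a random subset of the same size, which is the question the paper's benchmark addresses.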

Source: https://arxiv.org/abs/2309.15123
