Code Representation Pre-training with Complements from Program Executions. (arXiv:2309.09980v1 [cs.SE])

Large language models (LLMs) for natural language processing have been
grafted onto programming language modeling to advance code intelligence.
Although code can be represented as text, it is syntactically more rigorous,
since it must be properly compiled or interpreted to perform a desired set of
behaviors given any input. Existing works therefore benefit from syntactic
representations such as the abstract syntax tree (AST) and control-flow graph
to learn from code with less ambiguity. However, programs serving the same
purpose can be implemented in various ways, yielding different syntactic
representations, while programs with similar implementations can exhibit
distinct behaviors. Though trivially demonstrated during execution, such
semantics of functionality are challenging to learn directly from code,
especially in an unsupervised manner. Hence, in this paper, we propose
FuzzPretrain to explore the dynamic information of programs revealed by their
test cases and to embed it into the feature representations of code as a
complement. The test cases are obtained with the assistance of a customized
fuzzer and are only required during pre-training. FuzzPretrain yielded more
than 6%/9% mAP improvements on code search over its counterparts trained with
source code only or AST only, respectively. Our extensive experimental results
show the benefits of learning discriminative code representations from program
executions.
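
The abstract does not detail how FuzzPretrain's fuzzer, encoder, or fusion of
dynamic information actually works, so the following is only a minimal,
illustrative sketch of the general idea: executing a function on fuzzed inputs
to collect input/output pairs (a stand-in for the "dynamic information"
revealed by test cases), then serializing them alongside the source text so a
standard sequence model could consume both during pre-training. All names here
(fuzz_io_pairs, serialize_dynamic_info, the random-string input generator) are
hypothetical and not taken from the paper.

```python
import random
import string


def fuzz_io_pairs(func, num_cases=8, seed=0):
    """Run `func` on randomly generated inputs and record input/output pairs.

    Hypothetical stand-in for a fuzzer: a real one would mutate seed inputs
    and use coverage feedback rather than sampling random strings.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_cases):
        arg = "".join(
            rng.choice(string.ascii_lowercase)
            for _ in range(rng.randint(1, 6))
        )
        try:
            out = func(arg)
        except Exception as exc:  # crashes are also informative behavior
            out = f"<error: {type(exc).__name__}>"
        pairs.append((arg, out))
    return pairs


def serialize_dynamic_info(source_code, pairs):
    """Append observed behavior to the source text so a sequence model can
    see both the static code and its dynamic information during pre-training."""
    io_text = " ; ".join(f"{inp!r} -> {out!r}" for inp, out in pairs)
    return f"{source_code}\n# DYNAMIC: {io_text}"


# Two syntactically different implementations of the same functionality.
def reverse_a(s):
    return s[::-1]


def reverse_b(s):
    return "".join(reversed(s))


print(fuzz_io_pairs(reverse_a) == fuzz_io_pairs(reverse_b))  # True
```

The closing check illustrates the paper's motivation: the two string-reversal
implementations differ syntactically but produce identical input/output
behavior, which is exactly the functional signal that purely static
representations such as the AST can miss.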

Source: https://arxiv.org/abs/2309.09980
