Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning. (arXiv:2102.12550v1 [cs.LG])

Communication is an important factor that enables agents to work cooperatively
in multi-agent reinforcement learning (MARL). Most previous work uses
continuous message communication, whose high representational capacity comes
at the expense of interpretability. Allowing agents to learn their own
discrete message communication protocols, emerging across a variety of
domains, can increase interpretability for human designers and other agents.
This paper proposes a method to generate discrete messages analogous to human
language, achieving communication through a broadcast-and-listen mechanism
based on self-attention. We show that discrete message communication performs
comparably to continuous message communication while using a much smaller
vocabulary size. Furthermore, we propose an approach that allows humans to
interactively send discrete messages to agents.
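The two ingredients named in the abstract can be sketched concretely. A common way to obtain discrete messages while keeping training differentiable is Gumbel-softmax sampling, and broadcast-and-listen can be modeled as scaled dot-product self-attention over all agents' messages. This is a minimal illustrative sketch, not the paper's implementation; all function names, shapes, and the choice of Gumbel-softmax are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def discretize(logits, tau=1.0):
    """Sample a one-hot "word" from a small vocabulary via Gumbel-softmax.
    (Assumed discretization scheme; training would use the straight-through
    trick so gradients flow through the soft sample.)"""
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    soft = np.exp(y - y.max(axis=-1, keepdims=True))
    soft /= soft.sum(axis=-1, keepdims=True)
    return np.eye(logits.shape[-1])[soft.argmax(axis=-1)]

def listen(messages, w_q, w_k, w_v):
    """Broadcast-and-listen as scaled dot-product self-attention:
    each agent attends over every agent's broadcast message and
    aggregates them into a single incoming signal."""
    q, k, v = messages @ w_q, messages @ w_k, messages @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

n_agents, vocab, d = 3, 8, 16
logits = rng.normal(size=(n_agents, vocab))  # per-agent message logits
msgs = discretize(logits)                    # one-hot tokens, shape (3, 8)
w_q, w_k, w_v = (rng.normal(size=(vocab, d)) for _ in range(3))
out = listen(msgs, w_q, w_k, w_v)            # aggregated signal, shape (3, 16)
```

Because each message is a one-hot row over a small vocabulary, a human can read off which "word" every agent broadcast, which is the interpretability advantage the abstract claims over continuous vectors.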

Source: https://arxiv.org/abs/2102.12550