Large language models (LLMs) have achieved remarkable performance in many
fields such as reasoning, language understanding, and math problem-solving, and
are regarded as a crucial step toward artificial general intelligence (AGI).
However, the sensitivity of LLMs to prompts remains a major bottleneck for
their daily adoption. In this paper, we take inspiration from psychology and
propose EmotionPrompt, which leverages emotional intelligence to enhance the
performance of LLMs. EmotionPrompt operates on a remarkably straightforward
principle: the incorporation of emotional stimuli into prompts. Experimental
results demonstrate that, using the same prompt templates, our method
significantly outperforms the original zero-shot prompts and Zero-shot-CoT on 8
tasks across diverse models: ChatGPT, Vicuna-13b, Bloom, and T5. Further,
EmotionPrompt is observed to improve both truthfulness and informativeness. We
believe that EmotionPrompt opens a novel avenue for applying interdisciplinary
knowledge to human-LLM interaction.
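The core mechanism can be sketched in a few lines: an emotional stimulus is simply concatenated to an otherwise unchanged task prompt. The helper below is an illustrative sketch, not the authors' released code; the function name and wrapper are assumptions, and the stimulus string is one example of the kind of emotional stimulus the method uses.

```python
# Minimal sketch of EmotionPrompt's core idea: append an emotional
# stimulus to an ordinary task prompt before sending it to an LLM.
# The function name and default stimulus here are illustrative.

EMOTIONAL_STIMULUS = "This is very important to my career."

def emotion_prompt(task_prompt: str, stimulus: str = EMOTIONAL_STIMULUS) -> str:
    """Combine a plain task prompt with an emotional stimulus."""
    return f"{task_prompt.rstrip()} {stimulus}"

original = "Determine whether the following statement is true or false."
augmented = emotion_prompt(original)
print(augmented)
```

The augmented prompt replaces the original in any zero-shot setup; no model fine-tuning or extra examples are required, which is what makes the method applicable across models such as ChatGPT, Vicuna-13b, Bloom, and T5.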