Deep Policy Gradient Methods in Commodity Markets. (arXiv:2308.01910v1 [q-fin.TR])
The energy transition has increased reliance on intermittent energy
sources, destabilizing energy markets and causing unprecedented volatility,
culminating in the global energy crisis of 2021. In addition to harming
producers and consumers, volatile energy markets may jeopardize vital
decarbonization efforts. Traders play an important role in stabilizing markets
by providing liquidity and reducing volatility. Several mathematical and
statistical models have been proposed for forecasting future returns. However,
developing such models is non-trivial due to financial markets’ low
signal-to-noise ratios and nonstationary dynamics.
This thesis investigates the effectiveness of deep reinforcement learning
methods in commodities trading. It formalizes the commodities trading problem
as a continuing discrete-time stochastic dynamical system. This system employs
a novel time-discretization scheme that is reactive and adaptive to market
volatility, providing better statistical properties for the sub-sampled
financial time series. Two policy gradient algorithms, one actor-based and one
actor-critic-based, are proposed for optimizing a transaction-cost- and
risk-sensitive trading agent. The agent maps historical price observations to
market positions through parametric function approximators using deep neural
network architectures, specifically convolutional neural networks (CNNs) and
long short-term memory (LSTM) networks.
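As an illustration of the actor-based (direct) policy gradient approach described above, the sketch below trains a small CNN that maps a window of past log returns to a position in [-1, 1] and performs gradient ascent on a transaction-cost-adjusted, variance-penalized objective. The architecture, window length, cost and risk parameters, and the PyTorch implementation are illustrative assumptions, not the thesis's exact formulation or discretization scheme.

```python
# Hypothetical sketch of an actor-based (direct) policy gradient trading agent.
# All hyperparameters and the reward definition are illustrative assumptions.
import torch
import torch.nn as nn

class CNNPolicy(nn.Module):
    """Maps a window of past log returns to a position in [-1, 1]."""
    def __init__(self, window: int = 64, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(channels * window, 1),
            nn.Tanh(),  # position in [-1, 1]: short to long
        )

    def forward(self, x):  # x: (batch, 1, window)
        return self.net(x).squeeze(-1)

def episode_objective(policy, returns, window=64, cost=1e-4, risk_lambda=0.1):
    """Transaction-cost- and risk-adjusted objective over one return series."""
    rewards = []
    prev_pos = torch.zeros(())
    for t in range(window, returns.shape[0]):
        obs = returns[t - window:t].view(1, 1, window)
        pos = policy(obs).squeeze()
        # PnL of holding `pos` over period t, minus turnover cost.
        rewards.append(pos * returns[t] - cost * (pos - prev_pos).abs())
        prev_pos = pos
    rewards = torch.stack(rewards)
    # Mean reward penalized by its variance (risk-sensitivity term).
    return rewards.mean() - risk_lambda * rewards.var()

# Training loop: gradient ascent on the differentiable objective.
policy = CNNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
log_returns = torch.randn(512) * 0.01  # placeholder for real futures returns
for epoch in range(10):
    opt.zero_grad()
    loss = -episode_objective(policy, log_returns)
    loss.backward()
    opt.step()
```

Because the positions, transaction costs, and risk penalty are all differentiable in the policy parameters, the objective can be optimized directly by backpropagation, which is what distinguishes this actor-based formulation from an actor-critic method with a learned value function.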
On average, the deep reinforcement learning models produce an 83 percent
higher Sharpe ratio than the buy-and-hold baseline when backtested on
front-month natural gas futures from 2017 to 2022. The backtests demonstrate
that the risk tolerance of the deep reinforcement learning agents can be
adjusted using a risk-sensitivity term. The actor-based policy gradient
algorithm performs significantly better than the actor-critic-based algorithm,
and the CNN-based models perform slightly better than the LSTM-based models.
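To make the headline comparison concrete, the snippet below shows one common way to compute an annualized Sharpe ratio from per-period backtest returns and the relative improvement over a buy-and-hold baseline. The annualization convention (252 periods per year, zero risk-free rate) and the synthetic return series are assumptions for illustration, not the evaluation setup used in the thesis.

```python
# Illustrative Sharpe-ratio evaluation for a backtest.
import numpy as np

def annualized_sharpe(period_returns: np.ndarray, periods_per_year: int = 252) -> float:
    """Mean period return divided by its standard deviation, annualized."""
    return float(period_returns.mean() / period_returns.std(ddof=1)
                 * np.sqrt(periods_per_year))

rng = np.random.default_rng(42)
agent_returns = rng.normal(0.0010, 0.01, 1250)     # placeholder agent P&L series
baseline_returns = rng.normal(0.0005, 0.01, 1250)  # placeholder buy-and-hold series

agent_sr = annualized_sharpe(agent_returns)
baseline_sr = annualized_sharpe(baseline_returns)
# An "83 percent higher Sharpe ratio" corresponds to agent_sr / baseline_sr - 1 == 0.83.
print(agent_sr, baseline_sr, agent_sr / baseline_sr - 1)
```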
Source: https://arxiv.org/abs/2308.01910