Paper ID: 2309.00630

Commodities Trading through Deep Policy Gradient Methods

Jonas Hanetho

Algorithmic trading has gained attention due to its potential for generating superior returns. This paper investigates the effectiveness of deep reinforcement learning (DRL) methods in algorithmic commodities trading. It formulates the commodities trading problem as a continuous, discrete-time stochastic dynamical system. The proposed system employs a novel time-discretization scheme that adapts to market volatility, enhancing the statistical properties of the subsampled financial time series. To optimize transaction-cost- and risk-sensitive trading agents, two policy gradient algorithms are introduced: an actor-based and an actor-critic-based approach. These agents utilize convolutional neural networks (CNNs) and long short-term memory (LSTM) networks as parametric function approximators to map historical price observations to market positions. Backtesting on front-month natural gas futures demonstrates that the DRL models increase the Sharpe ratio by $83\%$ compared to the buy-and-hold baseline. Additionally, the risk profile of the agents can be customized through a hyperparameter that regulates risk sensitivity in the reward function during optimization. The actor-based models outperform the actor-critic-based models, while the CNN-based models show a slight performance advantage over the LSTM-based models.
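The abstract does not specify the volatility-adaptive time-discretization scheme, but the general idea can be illustrated with a minimal sketch of an intrinsic-time ("volatility clock") subsampling rule, where observations are emitted faster in volatile regimes. The function `volatility_clock_subsample` and its `threshold` parameter are hypothetical names introduced here for illustration, not the paper's method.

```python
import numpy as np

def volatility_clock_subsample(prices: np.ndarray, threshold: float) -> np.ndarray:
    """Subsample a price series on a 'volatility clock' (hypothetical sketch).

    Emits an observation whenever the cumulative absolute log-return since
    the last emitted point exceeds `threshold`, so more bars are sampled in
    volatile regimes and fewer in quiet ones.
    """
    log_prices = np.log(prices)
    sampled = [0]          # always keep the first observation
    accumulated = 0.0
    for t in range(1, len(prices)):
        accumulated += abs(log_prices[t] - log_prices[t - 1])
        if accumulated >= threshold:
            sampled.append(t)
            accumulated = 0.0
    return np.asarray(sampled)

# Example: a quiet drift followed by a volatile burst.
rng = np.random.default_rng(0)
quiet = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 500)))
wild = quiet[-1] * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
idx = volatility_clock_subsample(np.concatenate([quiet, wild]), threshold=0.02)
print(len(idx), "samples;", (idx >= 500).mean(), "fraction in the volatile half")
```

Under this rule, most sampled indices fall in the volatile second half of the series, which is the adaptive behavior the abstract describes.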
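Similarly, the risk-sensitivity hyperparameter can be sketched with a common mean-variance-style reward of the form $r_t - \lambda r_t^2$, which penalizes volatile returns; the paper's exact reward is not reproduced here, and `risk_sensitive_reward` and `risk_lambda` are hypothetical names.

```python
def risk_sensitive_reward(log_return: float, risk_lambda: float) -> float:
    """Mean-variance-style reward: penalize squared return by risk_lambda.

    A hypothetical sketch of a risk-sensitive reward; risk_lambda = 0
    recovers the risk-neutral case, larger values favor low-variance returns.
    """
    return log_return - risk_lambda * log_return ** 2

# Example: two return streams with identical means but different volatility.
calm = [0.010, 0.012, 0.011]
wild = [0.100, -0.080, 0.013]
for name, returns in [("calm", calm), ("wild", wild)]:
    total = sum(risk_sensitive_reward(r, risk_lambda=5.0) for r in returns)
    print(name, round(total, 5))
```

With `risk_lambda = 5.0`, the calm stream scores higher despite equal mean returns, so tuning this hyperparameter during optimization shapes the agent's risk profile as the abstract states.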

Submitted: Aug 10, 2023