Reinforcement Learning: The Untapped Frontier in Algorithmic Trading, Spearheaded by Hariom Tatsat in Collaboration with Lehigh University
Photo Courtesy: Hariom Tatsat


The concept of using artificial intelligence in finance is nothing new. However, the transformative capacity of combining reinforcement learning with algorithmic trading is a frontier that is only beginning to be tapped. This notion comes to life through the work of Hariom Tatsat, a recognized expert in machine learning, data science, and finance. His collaboration with the Master's in Financial Engineering Program at Lehigh University, which focuses not just on automating trades but on making them understandable to stakeholders, especially traders, is worth a deep dive.

Why Reinforcement Learning and Algorithmic Trading Are a Match Made in Heaven

When asked what motivated him to integrate these two advanced fields, Hariom stated, “Reinforcement learning is a game-changer for the financial industry.” According to him, the technology is capable of creating intelligent trading agents that adapt and evolve within the complicated and volatile financial markets. The ability of reinforcement learning to continuously maximize rewards makes it ideally suited for portfolio management and algorithmic trading, tasks that are at the heart of modern finance.
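To make that reward-maximization idea concrete, here is a minimal, illustrative sketch — not the team's actual system — of how an RL trading agent learns from rewards. It uses tabular Q-learning over buy, hold, and sell actions, with the market state reduced to the direction of the last price move; every name and parameter below is an assumption for illustration only.

```python
import random

ACTIONS = ["buy", "hold", "sell"]

def train_agent(prices, episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: the state is the direction of the last price move."""
    q = {s: {a: 0.0 for a in ACTIONS} for s in ("up", "down")}
    for _ in range(episodes):
        position = 0  # 0 = flat, 1 = holding one share
        for t in range(1, len(prices) - 1):
            state = "up" if prices[t] > prices[t - 1] else "down"
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(q[state], key=q[state].get)
            if action == "buy":
                position = 1
            elif action == "sell":
                position = 0
            # Reward: profit or loss on the next price move while holding.
            reward = position * (prices[t + 1] - prices[t])
            next_state = "up" if prices[t + 1] > prices[t] else "down"
            target = reward + gamma * max(q[next_state].values())
            q[state][action] += alpha * (target - q[state][action])
    return q

random.seed(0)
toy_prices = [100 + i + (i % 3) for i in range(30)]  # a noisy uptrend
q_table = train_agent(toy_prices)
```

On this trending toy series, the learned Q-values come to favor buying over selling, which is exactly the kind of adaptive behavior the quote above describes — scaled down to a few lines.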

Transparency and Trust: The Importance of Interpretability

Hariom Tatsat stresses the necessity for interpretability in machine learning models applied to finance. “Finance is an industry that thrives on trust,” he says. Models that execute automatic trades must be transparent, helping stakeholders understand the rationale behind each decision. This focus on interpretability goes beyond establishing trust; it also aids in the continual refinement of models, ensuring they adapt and improve over time.

A Window into Complexity: The Visualization Tool

The team has developed a visualization tool to make the complicated algorithms behind trading agents easier to understand. Built in Python with the Dash framework and GPU support, the tool allows traders to analyze buy, sell, and hold decisions in various market situations. With features like interactive plots, the tool offers traders a unique view into the agent's learning process.
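The core of any such dashboard is a per-timestep record of what the agent saw and what it chose, which the interactive plots then render. The sketch below — stdlib-only, and not the team's actual code — shows the kind of decision trace a Dash app could consume; the field names and the toy momentum policy are assumptions for illustration.

```python
def decision_trace(prices, policy):
    """Build one record per timestep: the price, the market state the agent
    perceived, and the action it chose - ready to feed into interactive plots."""
    trace = []
    for t in range(1, len(prices)):
        state = "up" if prices[t] > prices[t - 1] else "down"
        trace.append({
            "t": t,
            "price": prices[t],
            "state": state,
            "action": policy[state],  # look up the agent's choice for this state
        })
    return trace

# Illustration only: a fixed momentum policy standing in for a trained agent.
policy = {"up": "buy", "down": "sell"}
trace = decision_trace([100, 101, 99, 102], policy)
```

Plotting such a trace as price markers colored by action is what lets a trader see, at a glance, where the agent bought into strength or sold into weakness.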

Addressing the Practical Challenges

Trading involves its fair share of complexities and challenges. When asked about these, Hariom highlighted that the tool provides a visual interpretation of the model’s behavior, allowing traders to adjust their strategies accordingly. A case study using Vanguard’s ETF data from 2017 and 2018 served as further validation of their approach.

Insights and Key Takeaways

The project has already shed light on several critical aspects of applying reinforcement learning to trading. Sensitivity to the architecture of deep neural networks, the importance of hyperparameters, and the significance of discount factors are just a few of the insights gained. The team also found that visual interpretation is indispensable in confirming whether the models align with intended strategies.
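The significance of the discount factor is easy to see with a little arithmetic: it determines how much present value an agent assigns to a profit that arrives later, so a small change in it can reshape the strategy the agent learns. A quick illustration (the numbers are generic, not from the project):

```python
def discounted_value(reward, steps_away, gamma):
    """Present value of a future reward under discount factor gamma."""
    return reward * gamma ** steps_away

# A 10-unit profit arriving 20 steps from now:
near_sighted = discounted_value(10, 20, 0.90)  # ~= 1.22, nearly ignored
far_sighted = discounted_value(10, 20, 0.99)   # ~= 8.18, still attractive
```

An agent with gamma = 0.90 will chase quick gains, while one with gamma = 0.99 can learn to hold for larger, delayed payoffs — one reason the team found tuning such hyperparameters so consequential.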

The Future Beckons

As Igor Halperin at Fidelity Investments puts it, “RL is the best and most natural solution to most of the problems we have in quantitative finance.” Hariom Tatsat and his work with Lehigh University could serve as a catalyst for more extensive applications of reinforcement learning in finance, paving the way for more intelligent, adaptable, and transparent trading systems.

While still in its nascent stage, the marriage of reinforcement learning and algorithmic trading seems poised for a long and fruitful partnership, promising not only to enhance profits but also to democratize understanding and access in the world of finance.

Published by: Martin De Juan



This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of New York Weekly.