“Finance serves a purpose. … Investors are lured to gamble their wealth on wild hunches originated by charlatans and encouraged by mass media. One day in the near future, ML will dominate finance, science will curtail guessing, and investing will not mean gambling.” — Marcos López de Prado, Advances in Financial Machine Learning
Our Mission: to efficiently automate trading. We continuously develop and share code for finance.
Our Vision: The AI community has accumulated an ocean of open-source code over the past decade. We believe that applying these intellectual and engineering assets to finance will initiate a paradigm shift from the conventional trading routine to an automated machine learning approach, and even to RLOps in finance.
Materials:
AI4Finance Foundation:
FinRL, FinRL-Meta, and Website
ElegantRL and Website.
FinRL Ecosystem: Deep Reinforcement Learning to Automate Trading in Quantitative Finance. Talk at Wolfe Research 5th Annual Virtual Global Quantitative and Macro Investment Conference, Nov. 08, 2021.
Awesome_DRL4Finance_List: Awesome Deep Reinforcement Learning in Finance
Textbooks:
De Prado, M.L., 2018. Advances in financial machine learning. John Wiley & Sons.
Access this file on Google Docs at:
https://docs.google.com/document/d/1FxfdiwJ8L8xJeObPMIVFxi9ozykC9HujMuxYVYEmR5g/edit
Contributors: (Please add your information)
Jiechao Gao, Xiao-Yang Liu, Bruce Yang, Christina Dan Wang, Roberto Fray da Silva, Astarag Mohapatra, Marc Hollyoak, Jingyang Rui, Ming Zhu, Dan Yang, Mao Guan, Markus Kefeder, Ziyi Xia, Shixun Wu, Momin Haider, David Wilt, Berend Gort, Liuqing Yang
FinRL-Meta’s Goals:
1. FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategies and provides open-source data engineering tools for financial big data.
2. FinRL-Meta provides hundreds of market environment simulations for various trading tasks.
3. FinRL-Meta enables multiprocessing simulation and training by exploiting thousands of GPU cores.
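To make the multiprocessing goal concrete, here is a minimal sketch of GPU-parallel simulation: a toy vectorized environment steps thousands of instances as batched PyTorch tensors, so one batched operation advances every simulation at once. All class and method names here are illustrative, not FinRL-Meta's API.

```python
import torch

class BatchedToyTradingEnv:
    """Toy vectorized environment that steps num_envs instances at once
    as batched tensors; illustrative only, not FinRL-Meta's API."""
    def __init__(self, num_envs=4096, state_dim=16, device="cpu"):
        self.num_envs, self.state_dim, self.device = num_envs, state_dim, device
        self.states = torch.zeros(num_envs, state_dim, device=device)

    def reset(self):
        self.states.normal_()                       # random initial state per env
        return self.states

    def step(self, actions):                        # actions: (num_envs, action_dim)
        # Stand-in for market dynamics: all envs advance in lock-step, so a
        # single batched operation serves thousands of simulations at once.
        self.states += 0.01 * actions.mean(dim=1, keepdim=True)
        rewards = -self.states.abs().mean(dim=1)    # one reward per env
        dones = torch.zeros(self.num_envs, dtype=torch.bool, device=self.device)
        return self.states, rewards, dones, {}

device = "cuda" if torch.cuda.is_available() else "cpu"
env = BatchedToyTradingEnv(num_envs=4096, state_dim=16, device=device)
states = env.reset()
actions = torch.tanh(torch.randn(4096, 4, device=device))
states, rewards, dones, _ = env.step(actions)       # one call steps 4096 envs
```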
● Design Principles
1. DataOps for Data-Driven DRL in Finance. The DataOps paradigm is applied to the data engineering pipeline, bringing agility to agent deployment.
2. Layered Structure & Extensibility. FinRL-Meta adopts a layered structure specialized for RL in finance; this structure is what makes FinRL-Meta extensible.
3. Plug-and-Play. Any DRL agent can be directly plugged into the environments, then trained and tested. Different agents can run on the same benchmark environment for a fair comparison.
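As a sketch of the plug-and-play principle, the loop below trains and tests any agent that exposes a common act/learn interface on the same Gym-style environment. The RandomAgent class and its method names are illustrative placeholders, not FinRL-Meta classes, and the snippet assumes the classic Gym API (gym < 0.26).

```python
import gym

class RandomAgent:
    """Illustrative stand-in for a DRL agent (not a FinRL-Meta class)."""
    def __init__(self, action_space):
        self.action_space = action_space
    def act(self, state):
        return self.action_space.sample()
    def learn(self, state, action, reward, next_state, done):
        pass  # a real agent would update its networks here

def evaluate(agent, env, n_episodes=5):
    """Run the same train-and-test loop for every plugged-in agent,
    so different algorithms are compared on an identical benchmark."""
    returns = []
    for _ in range(n_episodes):
        state, total, done = env.reset(), 0.0, False
        while not done:
            action = agent.act(state)
            next_state, reward, done, info = env.step(action)
            agent.learn(state, action, reward, next_state, done)
            state, total = next_state, total + reward
        returns.append(total)
    return sum(returns) / n_episodes

env = gym.make("CartPole-v1")   # swap in a FinRL-Meta market env here
print(evaluate(RandomAgent(env.action_space), env))
```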
● Framework of FinRL-Meta
FinRL-Meta consists of three layers: a data layer, an environment layer, and an agent layer. Each layer performs its own functions and is relatively independent of the others.
1. For the data layer, we use a unified data processor to access data, clean data, and extract features.
2. For the environment layer, we incorporate trading constraints and model market frictions to reduce the simulation-to-reality gap.
3. For the agent layer, three DRL libraries (ElegantRL, RLlib, Stable-Baselines3) are directly supported, while others can also be plugged in.
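The separation between the three layers can be illustrated with plain Python. Everything below is a self-contained toy, not FinRL-Meta code: a proportional transaction cost stands in for the market-friction modeling of the environment layer.

```python
import numpy as np
import pandas as pd

# Data layer: access raw data, clean it, and extract features.
def process_data(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.dropna().copy()                              # cleaning
    df["ret"] = df["close"].pct_change().fillna(0.0)      # feature extraction
    return df

# Environment layer: wrap the processed data with a trading constraint
# (a proportional transaction cost as a simple market-friction model).
class MarketEnv:
    def __init__(self, df: pd.DataFrame, cost_pct: float = 1e-3):
        self.rets = df["ret"].to_numpy()
        self.cost_pct, self.t = cost_pct, 0

    def reset(self):
        self.t = 0
        return np.array([self.rets[self.t]], dtype=np.float32)

    def step(self, position: float):                      # position in [-1, 1]
        self.t += 1
        reward = position * self.rets[self.t] - self.cost_pct * abs(position)
        done = self.t >= len(self.rets) - 1
        return np.array([self.rets[self.t]], dtype=np.float32), reward, done, {}

# Agent layer: any DRL library's agent only sees the env's (state, reward) API.
raw = pd.DataFrame({"close": 100 + np.cumsum(np.random.randn(250))})
env = MarketEnv(process_data(raw))
state, done = env.reset(), False
while not done:                                           # placeholder policy
    state, reward, done, _ = env.step(float(np.clip(state[0] * 10, -1, 1)))
```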
C. ElegantRL
● Goals of ElegantRL
ElegantRL is designed for researchers and practitioners, with finance-oriented optimizations.
1. ElegantRL implements state-of-the-art DRL algorithms from scratch, including both discrete and continuous ones, and provides user-friendly tutorials in Jupyter Notebooks.
2. ElegantRL implements its DRL algorithms under the Actor-Critic framework.
3. The ElegantRL library enables researchers and practitioners to pipeline the disruptive “design, development, and deployment” of DRL technology.
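Below is a minimal sketch of the Actor-Critic pattern mentioned in item 2, written directly in PyTorch. The two classes are illustrative and much simpler than ElegantRL's own networks.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a state to a continuous action in [-1, 1]."""
    def __init__(self, state_dim, action_dim, mid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, mid_dim), nn.ReLU(),
                                 nn.Linear(mid_dim, action_dim), nn.Tanh())
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair with an estimated Q-value."""
    def __init__(self, state_dim, action_dim, mid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, mid_dim),
                                 nn.ReLU(), nn.Linear(mid_dim, 1))
    def forward(self, state, action):
        return self.net(torch.cat((state, action), dim=1))

actor, critic = Actor(16, 4), Critic(16, 4)
state = torch.randn(32, 16)               # a batch of 32 states
q_value = critic(state, actor(state))     # critic evaluates the actor's actions
```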
● Design Principles
1. Lightweight: the core code is under 1,000 lines with few dependencies, using only PyTorch (training), OpenAI Gym (environments), NumPy, and Matplotlib (plotting).
2. Efficient: in many of our test cases, it is more efficient than Ray RLlib. ElegantRL provides a cloud-native solution for RLOps in finance.
3. Stable: in our tests, more stable than Stable-Baselines3. Stable-Baselines3 can use only a single GPU, while ElegantRL can use 1 to 8 GPUs for stable training.
● Framework of ElegantRL
ElegantRL implements the following model-free deep reinforcement learning (DRL) algorithms:
● DDPG, TD3, SAC, PPO, PPO (GAE), REDQ for continuous actions
● DQN, DoubleDQN, D3QN, SAC for discrete actions
● QMIX, VDN; MADDPG, MAPPO, MATD3 for multi-agent environments
For the details of DRL algorithms, please check out the educational webpage OpenAI Spinning Up.
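As a quick orientation to the groupings above, the helper below maps a Gym environment's action space to the matching algorithm family. It is an illustrative utility, not part of ElegantRL.

```python
import gym

def suggest_algorithms(env: gym.Env):
    """Map an action space to the matching family from the list above."""
    if isinstance(env.action_space, gym.spaces.Box):        # continuous actions
        return ["DDPG", "TD3", "SAC", "PPO", "PPO (GAE)", "REDQ"]
    if isinstance(env.action_space, gym.spaces.Discrete):   # discrete actions
        return ["DQN", "DoubleDQN", "D3QN", "SAC (discrete)"]
    # Multi-agent algorithms (QMIX, VDN, MADDPG, MAPPO, MATD3) target
    # multi-agent environments, which do not expose a single Gym space.
    raise NotImplementedError("unsupported action space")

print(suggest_algorithms(gym.make("Pendulum-v1")))   # continuous control task
print(suggest_algorithms(gym.make("CartPole-v1")))   # discrete control task
```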
1.3 Requirements
Python:
● Confidence with Python programming, and familiarity with Jupyter Notebook and PyCharm
● Familiarity with writing Python scripts and executing them from the command-line interface
● Familiarity with the numerical computing libraries NumPy and pandas
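A quick way to confirm this stack is in place is the sanity check below; it assumes only a working Python installation with NumPy and pandas.

```python
import sys
import numpy as np
import pandas as pd

print(sys.version)        # Python interpreter version
print(np.__version__)     # NumPy
print(pd.__version__)     # pandas

# A one-liner exercising both libraries: daily prices -> simple returns.
prices = pd.Series(np.array([100.0, 101.5, 99.8]))
print(prices.pct_change().dropna())
```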
Git and Github:
● Knowledge of basic Git commands
● Clone, fork, branch creation, and checkout
● git status, git add, git commit, git pull, and git push
Software:
● Python and Anaconda Installation
● Git installation or GitHub Desktop
Account:
● GitHub account
● Cloud: AWS account or Google account
● Paper trading account: Alpaca, Binance
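To verify a paper trading account end to end, a minimal connectivity check along the following lines can help. It assumes the alpaca-trade-api package (pip install alpaca-trade-api); the keys are placeholders to be replaced with those from the Alpaca paper-trading dashboard.

```python
import alpaca_trade_api as tradeapi

API_KEY = "YOUR_PAPER_KEY_ID"                  # placeholder, not a real key
API_SECRET = "YOUR_PAPER_SECRET_KEY"           # placeholder, not a real key
BASE_URL = "https://paper-api.alpaca.markets"  # Alpaca's paper-trading endpoint

# Connect and read back the account, confirming the keys and endpoint work.
api = tradeapi.REST(API_KEY, API_SECRET, BASE_URL)
account = api.get_account()
print(account.status, account.buying_power)
```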