1. Currently Supported Algorithms
ElegantRL (website) is developed for practitioners with the following advantages:
Cloud-native: follows a cloud-native paradigm through microservice architecture and containerization, supporting ElegantRL-Podracer and FinRL-Podracer.
Scalable: fully exploits the parallelism of DRL algorithms at multiple levels, so that it easily scales out to hundreds or thousands of computing nodes on a cloud platform, e.g., a DGX SuperPOD with thousands of GPUs.
Elastic: allocates computing resources on the cloud elastically and automatically.
Lightweight: the core code is under 1,000 lines (see Elegantrl_Helloworld).
Efficient: in many test cases (single GPU, multi-GPU, and GPU cloud), we find it more efficient than Ray RLlib.
Stable: significantly more stable than Stable Baselines 3, thanks to the ensemble methods it employs (a minimal sketch follows this list).
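To give a rough sense of how critic ensembles stabilize training (in the spirit of REDQ, listed below), here is a minimal sketch; the toy networks and the ensemble_q_target helper are hypothetical stand-ins for illustration, not ElegantRL's actual classes:

import random
import torch
import torch.nn as nn

# Sample a random subset of the critic ensemble and take the element-wise
# minimum Q-value: a pessimistic target that damps overestimation bias.
def ensemble_q_target(critics, state_action, num_sampled=2):
    subset = random.sample(critics, num_sampled)
    q_values = torch.stack([q(state_action) for q in subset])
    return q_values.min(dim=0).values

critics = [nn.Linear(4, 1) for _ in range(10)]      # toy stand-in critics
state_action = torch.randn(32, 4)                   # batch of (s, a) features
target = ensemble_q_target(critics, state_action)   # shape: (32, 1)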
ElegantRL implements the following model-free deep reinforcement learning (DRL) algorithms:
DDPG, TD3, SAC, PPO, REDQ for continuous actions in single-agent environments,
DQN, Double DQN, D3QN, SAC for discrete actions in single-agent environments,
QMIX, VDN, MADDPG, MAPPO, MATD3 in multi-agent environments.
For the details of DRL algorithms, please check out the educational webpage OpenAI Spinning Up.
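To make one of the listed algorithms concrete, here is a minimal sketch of TD3's clipped double-Q target with target policy smoothing, following the published algorithm rather than ElegantRL's internal implementation; the networks below are toy stand-ins:

import torch
import torch.nn as nn

# Compute TD3's bootstrap target: smooth the target policy's action with
# clipped noise, then take the smaller of the two target critics' estimates.
def td3_target(actor_t, critic1_t, critic2_t, reward, next_state, done,
               gamma=0.99, policy_noise=0.2, noise_clip=0.5):
    with torch.no_grad():
        action = actor_t(next_state)
        noise = (torch.randn_like(action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (action + noise).clamp(-1.0, 1.0)
        sa = torch.cat([next_state, next_action], dim=1)
        q_next = torch.min(critic1_t(sa), critic2_t(sa))  # clipped double-Q
        return reward + gamma * (1.0 - done) * q_next

actor_t = nn.Sequential(nn.Linear(3, 1), nn.Tanh())       # toy target actor
critic1_t, critic2_t = nn.Linear(4, 1), nn.Linear(4, 1)   # toy target critics
target = td3_target(actor_t, critic1_t, critic2_t,
                    reward=torch.zeros(8, 1),
                    next_state=torch.randn(8, 3),
                    done=torch.zeros(8, 1))               # shape: (8, 1)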
ElegantRL supports the following simulators:
Isaac Gym for massively parallel simulation,
OpenAI Gym, MuJoCo, PyBullet, FinRL for benchmarking.
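The batched stepping interface behind massively parallel simulation can be sketched with gym's built-in vector API (Isaac Gym applies the same idea to thousands of GPU-resident environments); note that reset/step signatures vary across gym releases, and this sketch follows the classic 4-tuple API:

import gym

# Step several environment copies in lock-step through one batched interface.
envs = gym.vector.make("CartPole-v1", num_envs=8)
obs = envs.reset()
for _ in range(100):
    actions = envs.action_space.sample()            # one action per copy
    obs, rewards, dones, infos = envs.step(actions)
envs.close()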
2. Documentation
3. Install
ElegantRL generally requires:
Python>=3.6
PyTorch>=1.0.2
gym, matplotlib, numpy, pybullet, torch, opencv-python, box2d-py.
You can simply install ElegantRL from PyPI with the following command:
pip3 install erl --upgrade
Or install the newest version from GitHub:
git clone https://github.com/AI4Finance-Foundation/ElegantRL.git
cd ElegantRL
pip3 install .
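After either install, a quick sanity check that the core dependencies import correctly (this only verifies the requirements above, not ElegantRL itself):

python3 -c "import torch, gym; print(torch.__version__, gym.__version__)"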