Metadata-Version: 2.4
Name: prt-rl
Version: 0.6.3
Summary: Python Research Toolkit - Reinforcement Learning
Author-email: Gavin Strunk <gavin.strunk@gmail.com>
License-File: LICENSE
Requires-Python: >=3.11
Requires-Dist: boto3>=1.38.3
Requires-Dist: cryptography>=46.0.5
Requires-Dist: gymnasium>=1.1.1
Requires-Dist: imageio>=2.37.0
Requires-Dist: inputs>=0.5
Requires-Dist: mlflow>=2.20.4
Requires-Dist: optuna>=4.3.0
Requires-Dist: pillow>=12.1.1
Requires-Dist: pygame>=2.6.1
Requires-Dist: pynput>=1.8.0
Requires-Dist: pynvml>=12.0.0
Requires-Dist: scipy>=1.15.2
Requires-Dist: torch>=2.6.0
Requires-Dist: tqdm>=4.67.1
Requires-Dist: vmas>=1.5.0
Description-Content-Type: text/markdown

<p align="center">
<picture>
<img src="docs/_static/prt-rl-logo-title.png" width="400" style="max-width: 100%;">
</picture>
</p>

**prt-rl** is part of the broader *Python Research Toolkit* ecosystem and provides a clean, mathematically grounded collection of reinforcement learning algorithms.  
Its primary goal is **clarity, pedagogy, and research exploration**—not raw performance.

This library is designed for researchers, students, and practitioners who want to understand *why* RL algorithms work, how their mathematics maps to code, and which practical implementation details matter in real systems.

Unlike high-performance libraries such as **TorchRL**, **RLlib**, and **skrl**, **prt-rl focuses on transparency, composability, and conceptual depth**. Every algorithm is implemented with an emphasis on readability and modularity, with annotated code that highlights both the underlying equations and the implementation details that make them work in practice.

> ⚠️ **Note:** This repository is under active development. APIs, file structure, and module organization may change as the project evolves. Backward compatibility is not guaranteed until version 1.0.


## Documentation

**Installation, Getting Started, and API guides can be found in the full documentation:**

➡️ https://prt-rl.readthedocs.io/en/latest/
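For a quick start, installation should follow the standard pip workflow. The package name `prt-rl` and the Python requirement come from the metadata above; see the documentation link for the authoritative installation guide.

```shell
# Requires Python >= 3.11 (per the package metadata).
# Installs prt-rl and its dependencies (torch, gymnasium, etc.) from PyPI.
pip install prt-rl
```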

---

## Contributing

Contributions are welcome!  
Please open an issue before submitting a pull request so that new features or bug fixes can be discussed first.