Metadata-Version: 2.4
Name: multi-agent-ale-py
Version: 0.1.12
Summary: Multi-Agent Arcade Learning Environment Python Interface
Home-page: https://github.com/Farama-Foundation/Multi-Agent-ALE
Author: Farama Foundation
Author-email: jkterry@farama.org
License: GPL
Keywords: reinforcement-learning,arcade-learning-environment,atari
Classifier: Development Status :: 5 - Production/Stable
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE.md
Requires-Dist: numpy
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary



# The Multi-Agent Arcade Learning Environment


## Overview

This is a fork of the [Arcade Learning Environment (ALE)](https://github.com/mgbellemare/Arcade-Learning-Environment). It is mostly backwards compatible with ALE and additionally supports certain games with two and four players.

To install the Python package, run `pip install multi-agent-ale-py`.

Note: some Linux distributions require `cmake`, `swig`, and `zlib1g-dev` to be installed manually first (e.g., `sudo apt install cmake swig zlib1g-dev`).
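Once installed, usage mirrors the original ALE Python interface. The sketch below is illustrative, not an official example: it assumes the package exposes the standard `ALEInterface` class under the module name `multi_agent_ale_py`, plus the multi-player extensions described in the accompanying paper (player-count-aware game modes and an `act()` that takes one action per player); the ROM path is a placeholder.

```python
# Minimal sketch; assumes the ale-py-style ALEInterface API with the
# fork's multi-player extensions. The ROM path is a placeholder.
from multi_agent_ale_py import ALEInterface

ale = ALEInterface()
ale.setInt(b"random_seed", 42)        # options are set before loading a ROM
ale.loadROM(b"/path/to/pong.bin")     # placeholder path to a 2-player ROM

# Assumed fork API: query the game modes available for a given player count.
modes = ale.getAvailableModes(2)
ale.setMode(modes[0])
ale.reset_game()

# Assumed fork API: one action per player in, one reward per player out.
actions = [0, 0]                      # NOOP for both players
rewards = ale.act(actions)
```

Mode selection is the key difference from single-agent ALE: the same ROM can host different player counts, so the mode must be chosen before play begins.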

## Citation

```
@article{terry2020multiplayer,
  title={Multiplayer support for the arcade learning environment},
  author={Terry, J K and Black, Benjamin and Santos, Luis},
  journal={arXiv preprint arXiv:2009.09341},
  year={2020}
}
```
