Metadata-Version: 2.4
Name: sustainable-foraging
Version: 2.0.0
Summary: Sustainable Foraging environment for multi-agent RL
Author: Filippos Christianos
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/pixel-87/sustainable-foraging
Project-URL: Repository, https://github.com/pixel-87/sustainable-foraging
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy
Requires-Dist: gymnasium
Requires-Dist: pyglet<2
Requires-Dist: pettingzoo
Requires-Dist: xuance>=1.4.1
Requires-Dist: matplotlib>=3.9.4
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: torch; extra == "dev"
Requires-Dist: stable-baselines3; extra == "dev"
Requires-Dist: supersuit; extra == "dev"
Requires-Dist: tensorboard; extra == "dev"
Requires-Dist: setuptools>=68.0.0; extra == "dev"
Requires-Dist: matplotlib; extra == "dev"
Provides-Extra: test
Requires-Dist: pytest; extra == "test"
Provides-Extra: train
Requires-Dist: torch; extra == "train"
Requires-Dist: stable-baselines3; extra == "train"
Requires-Dist: supersuit; extra == "train"
Requires-Dist: tensorboard; extra == "train"
Requires-Dist: setuptools>=68.0.0; extra == "train"
Provides-Extra: rllib
Requires-Dist: ray[rllib]>=2.9; extra == "rllib"
Requires-Dist: torch>=2.8.0; extra == "rllib"
Dynamic: license-file

# Sustainable Foraging Benchmark

A reproducible benchmark for comparing multi-agent RL algorithms on the Sustainable Foraging environment (PettingZoo AEC API).

A fork of [lb-foraging](https://github.com/semitable/lb-foraging): the Level-Based Foraging framework adapted for sustainable-foraging research.

<p align="center">
  <img width="450px" src="docs/img/lbf.gif" align="center" alt="Sustainable Foraging" />
</p>

## Install

```sh
git clone https://github.com/pixel-87/sustainable-foraging.git
cd sustainable-foraging
uv sync
```

## Quick Start

```sh
# Train with stable-baselines3 (choose preset: easy, fair, hard)
python -m scripts.train_sb3 --preset fair

# Run inference with a trained model
python -m scripts.inference_sb3 --preset fair --model logs/<run_name>/model

# List available presets and settings
python -m scripts.train_sb3 --list-presets
```

The benchmark protocol is documented in `docs/benchmark_protocol.md`.

## Creating Environments

The environment implements the PettingZoo AEC API:

```python
from sustainable_foraging.foraging import AECForagingEnv

env = AECForagingEnv(
    players=2,
    field_size=(8, 8),
    max_num_food=2,
    sight=8,
    max_episode_steps=500,
)

env.reset()
while env.agents:
    agent = env.agent_selection
    obs, reward, terminated, truncated, info = env.last()
    if terminated or truncated:
        action = None  # PettingZoo AEC convention: step finished agents with None
    else:
        action = env.action_space(agent).sample()
    env.step(action)
    env.render()
```
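The loop above follows the standard PettingZoo AEC stepping contract: query `last()` for the active agent, and step finished agents with `None` so they are removed from `env.agents`. To illustrate that contract without installing this package, here is a minimal pure-Python stub (the `TinyAEC` class is hypothetical and only mimics the relevant API surface; the real environment is `AECForagingEnv`):

```python
class TinyAEC:
    """Toy stand-in for an AEC environment: two agents alternate turns,
    and the episode truncates after max_steps total steps."""

    def __init__(self, max_steps=6):
        self.max_steps = max_steps
        self.possible_agents = ["agent_0", "agent_1"]

    def reset(self):
        self.agents = list(self.possible_agents)
        self._idx = 0
        self._steps = 0
        self._rewards = {a: 0.0 for a in self.agents}

    @property
    def agent_selection(self):
        return self.agents[self._idx]

    def last(self):
        # Returns (obs, reward, terminated, truncated, info) for the active agent.
        agent = self.agent_selection
        truncated = self._steps >= self.max_steps
        return None, self._rewards[agent], False, truncated, {}

    def step(self, action):
        agent = self.agent_selection
        if action is None:  # dead-step: remove the finished agent
            self.agents.remove(agent)
            if self.agents:
                self._idx %= len(self.agents)
            return
        self._rewards[agent] = 1.0  # dummy reward for taking any action
        self._steps += 1
        self._idx = (self._idx + 1) % len(self.agents)


env = TinyAEC()
env.reset()
returns = {a: 0.0 for a in env.possible_agents}
while env.agents:
    agent = env.agent_selection
    obs, reward, terminated, truncated, info = env.last()
    returns[agent] += reward
    action = None if (terminated or truncated) else 0
    env.step(action)
```

The same `while env.agents` pattern works for the real environment; only the action sampling and reward logic differ.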

## Citation

If you use this benchmark, please cite the original lb-foraging papers:

```bibtex
@inproceedings{christianos2020shared,
  title={Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning},
  author={Christianos, Filippos and Schäfer, Lukas and Albrecht, Stefano V},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2020}
}
```

```bibtex
@inproceedings{papoudakis2021benchmarking,
  title={Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks},
  author={Georgios Papoudakis and Filippos Christianos and Lukas Schäfer and Stefano V. Albrecht},
  booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS)},
  year={2021}
}
```

## Contributing

Contributions are welcome! Please open an issue to discuss changes before submitting PRs.

## License

Apache License 2.0 - see [LICENSE](LICENSE) for details.
