Metadata-Version: 2.4
Name: keep_gpu
Version: 0.3.1
Summary: Keep GPU is a simple CLI app that keeps your GPUs running
Author-email: Siyuan Wang <sywang0227@gmail.com>, Yaorui Shi <yaoruishi@gmail.com>, Yiduck Liu <geodesicseiran@gmail.com>
Maintainer-email: Siyuan Wang <sywang0227@gmail.com>, Yaorui Shi <yaoruishi@gmail.com>, Yiduck Liu <geodesicseiran@gmail.com>
License: MIT license
Project-URL: bugs, https://github.com/Wangmerlyn/KeepGPU/issues
Project-URL: changelog, https://github.com/Wangmerlyn/KeepGPU/blob/master/changelog.md
Project-URL: homepage, https://github.com/Wangmerlyn/KeepGPU
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pynvml
Requires-Dist: typer
Requires-Dist: torch
Requires-Dist: colorlog
Provides-Extra: dev
Requires-Dist: coverage; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: pre-commit; extra == "dev"
Requires-Dist: bump-my-version; extra == "dev"
Dynamic: license-file

# Keep GPU

[![PyPI Version](https://img.shields.io/pypi/v/keep-gpu.svg)](https://pypi.python.org/pypi/keep-gpu)
[![Docs Status](https://readthedocs.org/projects/keepgpu/badge/?version=latest)](https://keepgpu.readthedocs.io/en/latest/?version=latest)

**Keep GPU** is a simple CLI app that keeps your GPUs running.

- 🧾 License: MIT
- 📚 Documentation: https://keepgpu.readthedocs.io

---

Contributions welcome!

If you have ideas for new features or improvements, feel free to open an issue or submit a pull request.

This project does not yet fully support ROCm GPUs, so any contributions, suggestions, or testing help in that area are especially welcome!

---

## Features

- Simple command-line interface
- Uses PyTorch and NVML (via `pynvml`) to monitor and load GPUs
- Easy to extend for your own keep-alive logic
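
The last bullet can be sketched generically. A minimal, hypothetical stand-in for a keep-alive runner (not the package's actual API): it calls a user-supplied workload on a background thread every `interval` seconds, so the caller is never blocked.

```python
import threading


class KeepAlive:
    """Hypothetical keep-alive runner (illustration only, not keep-gpu's API):
    invokes `workload` every `interval` seconds on a daemon thread."""

    def __init__(self, workload, interval=1.0):
        self._workload = workload
        self._interval = interval
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        # Non-blocking: the loop runs on a background daemon thread.
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self._workload()
            # wait() doubles as an interruptible sleep, so stop() is prompt.
            self._stop.wait(self._interval)

    def stop(self):
        self._stop.set()
        if self._thread is not None:
            self._thread.join()
```

In the real package the workload would be a small CUDA kernel or tensor allocation; here it can be any callable, which is what makes the pattern easy to extend.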

---

## Installation

```bash
pip install keep-gpu
```

## Usage

### Use `keep-gpu` as a CLI tool

```bash
keep-gpu
```

Specify the interval in seconds between GPU usage checks (default: 300 seconds):
```bash
keep-gpu --interval 100
```

Specify GPU IDs to run on (default is all available GPUs):
```bash
keep-gpu --gpu-ids 0,1,2
```

### Use the `keep-gpu` API in your code

Non-blocking GPU keep-alive with `CudaGPUController`:
```python
from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController
ctrl = CudaGPUController(rank=0, interval=0.5)
# occupy GPU while you do CPU-only work
# this is non-blocking
ctrl.keep()
dataset.process()
ctrl.release()        # give GPU memory back
model.train_start()   # now run real GPU training
```

Use `CudaGPUController` as a context manager:
```python
from keep_gpu.single_gpu_controller.cuda_gpu_controller import CudaGPUController
with CudaGPUController(rank=0, interval=0.5):
    dataset.process()  # GPU occupied inside this block
model.train_start()    # GPU free after exiting block
```

## Credits

This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template.

## Contributors

<!-- google-doc-style-ignore -->
<a href="https://github.com/Wangmerlyn/KeepGPU/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=Wangmerlyn/KeepGPU" />
</a>
<!-- google-doc-style-resume -->
