Metadata-Version: 2.4
Name: gpu-memory-profiler
Version: 0.2.2
Summary: A comprehensive GPU memory profiler for PyTorch and TensorFlow with CLI, visualization, and analytics
Author-email: Silas Asamoah <silasbempong@gmail.com>, Prince Agyei Tuffour <prince.agyei.tuffour@gmail.com>
Maintainer-email: Silas Asamoah <silasbempong@gmail.com>, Prince Agyei Tuffour <prince.agyei.tuffour@gmail.com>, Derrick Dwamena <derrickasante07@gmail.com>
License: MIT License
        
        Copyright (c) 2025 GPU Memory Profiler Team
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/Silas-Asamoah/gpu-memory-profiler
Project-URL: Documentation, https://github.com/Silas-Asamoah/gpu-memory-profiler/tree/main/docs
Project-URL: Repository, https://github.com/Silas-Asamoah/gpu-memory-profiler.git
Project-URL: Bug Tracker, https://github.com/Silas-Asamoah/gpu-memory-profiler/issues
Project-URL: Release Notes, https://github.com/Silas-Asamoah/gpu-memory-profiler/blob/main/CHANGELOG.md
Keywords: gpu,memory,profiler,pytorch,tensorflow,deep-learning,monitoring
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.19.0
Requires-Dist: pandas>=1.2.0
Requires-Dist: psutil>=5.8.0
Requires-Dist: scipy>=1.7.0
Provides-Extra: viz
Requires-Dist: matplotlib>=3.3.0; extra == "viz"
Requires-Dist: seaborn>=0.11.0; extra == "viz"
Requires-Dist: plotly>=5.0.0; extra == "viz"
Requires-Dist: dash>=2.0.0; extra == "viz"
Requires-Dist: dash-bootstrap-components>=1.6.0; extra == "viz"
Provides-Extra: torch
Requires-Dist: torch>=1.8.0; extra == "torch"
Provides-Extra: tf
Requires-Dist: tensorflow>=2.4.0; extra == "tf"
Provides-Extra: all
Requires-Dist: torch>=1.8.0; extra == "all"
Requires-Dist: tensorflow>=2.4.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=8.0.0; extra == "dev"
Requires-Dist: pytest-cov>=2.10.0; extra == "dev"
Requires-Dist: pytest-mock>=3.6.0; extra == "dev"
Requires-Dist: pytest-xdist>=2.4.0; extra == "dev"
Requires-Dist: pexpect>=4.9.0; extra == "dev"
Requires-Dist: pytest-textual-snapshot>=1.1.0; extra == "dev"
Requires-Dist: jsonschema>=4.0.0; extra == "dev"
Requires-Dist: black>=21.0.0; extra == "dev"
Requires-Dist: flake8>=3.8.0; extra == "dev"
Requires-Dist: mypy>=0.910; extra == "dev"
Requires-Dist: isort>=5.9.0; extra == "dev"
Requires-Dist: pre-commit>=2.15.0; extra == "dev"
Requires-Dist: sphinx>=4.0.0; extra == "dev"
Requires-Dist: sphinx-rtd-theme>=1.0.0; extra == "dev"
Requires-Dist: myst-parser>=0.15.0; extra == "dev"
Requires-Dist: jupyter>=1.0.0; extra == "dev"
Requires-Dist: ipython>=7.0.0; extra == "dev"
Requires-Dist: notebook>=6.4.0; extra == "dev"
Requires-Dist: coverage>=5.5.0; extra == "dev"
Requires-Dist: tox>=3.24.0; extra == "dev"
Requires-Dist: memory-profiler>=0.60.0; extra == "dev"
Requires-Dist: line-profiler>=3.3.0; extra == "dev"
Provides-Extra: test
Requires-Dist: pytest>=8.0.0; extra == "test"
Requires-Dist: pytest-cov>=2.10.0; extra == "test"
Requires-Dist: pytest-mock>=3.6.0; extra == "test"
Requires-Dist: pytest-xdist>=2.4.0; extra == "test"
Requires-Dist: pexpect>=4.9.0; extra == "test"
Requires-Dist: pytest-textual-snapshot>=1.1.0; extra == "test"
Requires-Dist: jsonschema>=4.0.0; extra == "test"
Requires-Dist: coverage>=5.5.0; extra == "test"
Requires-Dist: numpy>=1.19.0; extra == "test"
Requires-Dist: pandas>=1.2.0; extra == "test"
Requires-Dist: scipy>=1.7.0; extra == "test"
Requires-Dist: memory-profiler>=0.60.0; extra == "test"
Requires-Dist: line-profiler>=3.3.0; extra == "test"
Provides-Extra: tui
Requires-Dist: textual>=0.57.0; extra == "tui"
Requires-Dist: pyfiglet>=1.0.2; extra == "tui"
Provides-Extra: docs
Requires-Dist: sphinx>=4.0.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=1.0.0; extra == "docs"
Requires-Dist: myst-parser>=0.15.0; extra == "docs"
Dynamic: license-file

# GPU Memory Profiler

[![Build Status](https://img.shields.io/badge/build-passing-brightgreen)](https://github.com/Silas-Asamoah/gpu-memory-profiler/actions)
[![PyPI Version](https://img.shields.io/pypi/v/gpu-memory-profiler.svg)](https://pypi.org/project/gpu-memory-profiler/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![PyTorch](https://img.shields.io/badge/PyTorch-1.8+-red.svg)](https://pytorch.org/)
[![TensorFlow](https://img.shields.io/badge/TensorFlow-2.4+-orange.svg)](https://tensorflow.org/)
[![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](CONTRIBUTING.md)
[![Textual TUI](https://img.shields.io/badge/TUI-Textual-blueviolet)](docs/tui.md)
[![Prompt Toolkit](https://img.shields.io/badge/Prompt--toolkit-roadmap-lightgrey)](docs/tui.md#prompt-toolkit-roadmap)

<p align="center">
  <img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-overview.gif" alt="GPU Profiler TUI Demo" width="900">
  <br/>
  <em>Interactive Textual dashboard with live monitoring, visualizations, and CLI automation.</em>
</p>

A production-ready, open source tool for real-time GPU memory profiling, leak detection, and optimization in PyTorch and TensorFlow deep learning workflows.

## Why use GPU Memory Profiler?

-   **Prevent Out-of-Memory Crashes**: Catch memory leaks and inefficiencies before they crash your training.
-   **Optimize Model Performance**: Get actionable insights and recommendations for memory usage.
-   **Works with PyTorch & TensorFlow**: Unified interface for both major frameworks.
-   **Beautiful Visualizations**: Timeline plots, heatmaps, and interactive dashboards.
-   **CLI & API**: Use from Python or the command line.

## Features

-   Real-time GPU memory monitoring
-   Memory leak detection & alerts
-   Interactive and static visualizations
-   Context-aware profiling (decorators, context managers)
-   CLI tools for automation
-   Data export (CSV, JSON)
-   CPU compatibility mode

## Installation

### From PyPI

Package page: <https://pypi.org/project/gpu-memory-profiler/>

```bash
# Basic installation
pip install gpu-memory-profiler

# With visualization support
pip install "gpu-memory-profiler[viz]"

# With optional dependencies (quotes keep the brackets safe in zsh)
pip install "gpu-memory-profiler[torch]"  # PyTorch support
pip install "gpu-memory-profiler[tf]"     # TensorFlow support
pip install "gpu-memory-profiler[all]"    # Both frameworks
pip install "gpu-memory-profiler[dev]"    # Development tools
pip install "gpu-memory-profiler[test]"   # Testing dependencies
pip install "gpu-memory-profiler[docs]"   # Documentation tools
```

### From Source

```bash
git clone https://github.com/Silas-Asamoah/gpu-memory-profiler.git
cd gpu-memory-profiler

# Install in development mode
pip install -e .

# Install with visualization support
pip install -e ".[viz]"

# Install framework extras
pip install -e ".[torch]"
pip install -e ".[tf]"
pip install -e ".[all]"

# Install with development dependencies
pip install -e ".[dev]"

# Install with testing dependencies
pip install -e ".[test]"
```

### Development Setup

```bash
# Clone and setup development environment
git clone https://github.com/Silas-Asamoah/gpu-memory-profiler.git
cd gpu-memory-profiler
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e ".[dev,test]"
# Optional: include framework extras for integration tests
pip install -e ".[dev,test,all]"
pre-commit install
```

**Note**: Black formatting check is temporarily disabled in CI. Code formatting will be addressed in a separate PR.

## Quick Start

### PyTorch Example

```python
import torch
from gpumemprof import GPUMemoryProfiler

profiler = GPUMemoryProfiler()

def train_step(model, data, target):
    output = model(data)
    loss = torch.nn.functional.cross_entropy(output, target)  # any loss works here
    loss.backward()
    return loss

profile = profiler.profile_function(train_step, model, data, target)
summary = profiler.get_summary()
print(f"Profiled call: {profile.function_name}")
print(f"Peak memory: {summary['peak_memory_usage'] / (1024**3):.2f} GB")
```

### TensorFlow Example

```python
from tfmemprof import TFMemoryProfiler
profiler = TFMemoryProfiler()
with profiler.profile_context("training"):
    model.fit(x_train, y_train, epochs=5)
results = profiler.get_results()
print(f"Peak memory: {results.peak_memory_mb:.2f} MB")
```

## Documentation

Start at the docs home page and follow the same structure locally or when hosted:

-   **[Documentation Home (local)](docs/index.md)**
-   **[Documentation Home (hosted)](https://gpu-memory-profiler.readthedocs.io/en/latest/)**

Key guides:
-   [CLI Usage](docs/cli.md)
-   [CPU Compatibility](docs/cpu_compatibility.md)
-   [Compatibility Matrix (v0.2)](docs/compatibility_matrix.md)
-   [GPU Setup (drivers + frameworks)](docs/gpu_setup.md)
-   [Testing Guides](docs/pytorch_testing_guide.md), [TensorFlow](docs/tensorflow_testing_guide.md)
-   [Example Test Guides (Markdown)](docs/examples/test_guides/README.md)
-   [Terminal UI (Textual)](docs/tui.md)
-   [In-depth Article](docs/article.md)
-   [Example scripts](examples/basic)
-   [Launch scenario scripts](examples/scenarios)

## Launch QA Scenarios (CPU + MPS + Telemetry + OOM)

Run the capability matrix for a launch-oriented smoke pass:

```bash
python -m examples.cli.capability_matrix --mode smoke --target both --oom-mode simulated
```

Run the full matrix (includes extra demos):

```bash
python -m examples.cli.capability_matrix --mode full --target both --oom-mode simulated
```

Key scenario modules:

```bash
python -m examples.scenarios.cpu_telemetry_scenario
python -m examples.scenarios.mps_telemetry_scenario
python -m examples.scenarios.oom_flight_recorder_scenario --mode simulated
python -m examples.scenarios.tf_end_to_end_scenario
```

## Terminal UI

Prefer an interactive dashboard? Install the optional TUI dependencies and
launch the Textual interface:

```bash
pip install "gpu-memory-profiler[tui]"
gpu-profiler
```

The TUI surfaces system info, PyTorch/TensorFlow quick actions, and CLI tips.
Future prompt_toolkit enhancements will add a command palette for advanced
workflows; see [docs/tui.md](docs/tui.md) for details.

<p align="center">
  <img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-1.png" alt="GPU Profiler Overview" width="700">
  <br/>
  <em>Overview, PyTorch, and TensorFlow tabs inside the Textual dashboard.</em>
</p>

<p align="center">
  <img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-2.png" alt="GPU Profiler CLI Actions" width="700">
  <br/>
  <em>CLI & Actions tab with quick commands, loaders, and log output.</em>
</p>

Need charts without leaving the terminal? The new **Visualizations** tab renders
an ASCII timeline from the live tracker and can export the same data to PNG
(Matplotlib) or HTML (Plotly) under `./visualizations` for deeper inspection.
Just start tracking, refresh the tab, and hit the export buttons.
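To illustrate the idea behind the ASCII timeline (this is a standalone sketch, not the TUI's actual renderer, and the sample values are made up), memory readings can be bucketed into Unicode block characters:

```python
def ascii_timeline(samples_mb, width=32):
    """Render a series of memory readings (MB) as a one-line block chart."""
    blocks = " ▁▂▃▄▅▆▇█"
    lo, hi = min(samples_mb), max(samples_mb)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat series
    # Pick evenly spaced samples so the line fits the requested width.
    step = max(1, len(samples_mb) // width)
    picked = samples_mb[::step][:width]
    return "".join(blocks[int((s - lo) / span * (len(blocks) - 1))] for s in picked)

samples = [512, 540, 610, 905, 1320, 1280, 990, 650, 600, 580]
print(ascii_timeline(samples))
```

The real tab drives the same kind of rendering from the live tracker and hands the underlying samples to Matplotlib or Plotly for the PNG/HTML exports.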

The PyTorch and TensorFlow tabs now surface recent decorator/context profiling
results as live tables, with refresh/clear controls, so you can review peak
memory, deltas, and durations gathered via `gpumemprof.context_profiler` or
`tfmemprof.context_profiler` without leaving the dashboard.

When the monitoring session is running you can also dump every tracked event to
`./exports/tracker_events_<timestamp>.{csv,json}` directly from the Monitoring
tab, making it easy to feed the same data into pandas, spreadsheets, or external
dashboards.
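For example, a JSON export can be post-processed with a few lines of stdlib Python. The field names below (`timestamp_s`, `allocated_mb`) are illustrative; check the schema of your actual export before relying on them:

```python
import json

def peak_and_delta(events):
    """Return peak allocation and net growth across tracked events (MB)."""
    values = [e["allocated_mb"] for e in events]
    return max(values), values[-1] - values[0]

# A tiny hand-written sample standing in for tracker_events_<timestamp>.json.
sample = json.loads("""[
  {"timestamp_s": 0.0, "allocated_mb": 512.0},
  {"timestamp_s": 1.0, "allocated_mb": 1480.0},
  {"timestamp_s": 2.0, "allocated_mb": 610.0}
]""")
peak, delta = peak_and_delta(sample)
print(f"peak={peak} MB, net growth={delta} MB")
```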

Need tighter leak warnings? Adjust the warning/critical sliders in the same tab
to update GPU `MemoryTracker` thresholds on the fly, and use the inline alert
history to review exactly when spikes occurred.
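The sliders map to a simple percentage check; here is an illustrative classifier showing the idea (this is not `MemoryTracker`'s internal logic, and the default percentages are invented for the example):

```python
def classify_usage(used_mb, total_mb, warning_pct=80.0, critical_pct=95.0):
    """Bucket current usage into ok / warning / critical by percent of capacity."""
    pct = 100.0 * used_mb / total_mb
    if pct >= critical_pct:
        return "critical"
    if pct >= warning_pct:
        return "warning"
    return "ok"

print(classify_usage(7_900, 8_192))  # ~96% of an 8 GiB card -> "critical"
```

Lowering `warning_pct` is the programmatic equivalent of dragging the warning slider left: alerts fire earlier, at the cost of more noise.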

Need to run automation without opening another terminal? Use the CLI tab’s
command input (or quick action buttons) to execute `gpumemprof` /
`tfmemprof` commands in-place, trigger `gpumemprof diagnose`, run the OOM
flight-recorder scenario, and launch the capability-matrix smoke checks with a
single click.

## CPU Compatibility

Working on a laptop or CI agent without CUDA? The CLI, Python API, and TUI now
fall back to a psutil-powered `CPUMemoryProfiler`/`CPUMemoryTracker`. Run the
same `gpumemprof monitor` / `gpumemprof track` commands and you’ll see RSS data
instead of GPU VRAM, exportable to CSV/JSON and viewable inside the monitoring
tab. PyTorch sample workloads automatically switch to CPU tensors when CUDA
isn’t present, so every workflow stays accessible regardless of hardware.
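Conceptually, the CPU fallback reports this process's resident set size (RSS). A stdlib-only sketch of the same measurement follows; the library itself uses psutil, which is more portable and reports current rather than peak RSS:

```python
import resource
import sys

def peak_rss_mb():
    """Return this process's peak resident set size in MB (Unix only)."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024
    return peak / divisor

print(f"peak RSS: {peak_rss_mb():.1f} MB")
```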

## Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) and [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md).

## License

[MIT License](LICENSE)

---

**Version:** 0.2.2 (launch candidate)
