Metadata-Version: 2.4
Name: ng-model-gym
Version: 0.3.0
Summary: Neural Graphics Model Gym for training and evaluating neural graphics models.
Project-URL: Homepage, https://www.arm.com
Project-URL: Repository, https://github.com/arm/neural-graphics-model-gym
Author: Arm Limited
License-Expression: Apache-2.0
License-File: LICENSE.md
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Utilities
Requires-Python: <3.13,>=3.10
Requires-Dist: ai-ml-sdk-model-converter==0.9.0
Requires-Dist: click==8.1.8
Requires-Dist: executorch==1.2.0
Requires-Dist: gpustat==1.1.1
Requires-Dist: gputil==1.4.0
Requires-Dist: huggingface-hub==0.34.3
Requires-Dist: lpips==0.1.4
Requires-Dist: memory-profiler==0.61.0
Requires-Dist: numpy==2.1.3
Requires-Dist: openexr==3.4.4
Requires-Dist: psutil==7.2.1
Requires-Dist: pydantic==2.12.5
Requires-Dist: pyiqa==0.1.14.1
Requires-Dist: rich==13.9.3
Requires-Dist: safetensors==0.7.0
Requires-Dist: setuptools==80.10.2
Requires-Dist: slangtorch==1.3.19
Requires-Dist: tensorboard==2.20.0
Requires-Dist: torch==2.11.0
Requires-Dist: torchao==0.17.0
Requires-Dist: torcheval==0.0.7
Requires-Dist: torchmetrics==1.8.2
Requires-Dist: torchvision==0.26.0
Requires-Dist: tosa-tools==2026.2.1
Requires-Dist: tqdm==4.67.1
Requires-Dist: typer==0.15.4
Requires-Dist: typing-extensions~=4.4
Provides-Extra: dev
Requires-Dist: autoflake==2.3.1; extra == 'dev'
Requires-Dist: bandit==1.7.6; extra == 'dev'
Requires-Dist: black==23.12.1; extra == 'dev'
Requires-Dist: blocklint==0.2.4; extra == 'dev'
Requires-Dist: coverage==7.3.2; extra == 'dev'
Requires-Dist: expecttest==0.3.0; extra == 'dev'
Requires-Dist: hatch==1.16.2; extra == 'dev'
Requires-Dist: isort==5.13.2; extra == 'dev'
Requires-Dist: jsonref==1.1.0; extra == 'dev'
Requires-Dist: pre-commit==3.6.0; extra == 'dev'
Requires-Dist: pylint==3.0.1; extra == 'dev'
Requires-Dist: pytest-cov==5.0.0; extra == 'dev'
Requires-Dist: pytest==8.4.2; extra == 'dev'
Requires-Dist: reuse==3.0.1; extra == 'dev'
Provides-Extra: static-analysis
Requires-Dist: autoflake==2.3.1; extra == 'static-analysis'
Requires-Dist: bandit==1.7.6; extra == 'static-analysis'
Requires-Dist: black==23.12.1; extra == 'static-analysis'
Requires-Dist: blocklint==0.2.4; extra == 'static-analysis'
Requires-Dist: isort==5.13.2; extra == 'static-analysis'
Requires-Dist: pylint==3.0.1; extra == 'static-analysis'
Requires-Dist: reuse==3.0.1; extra == 'static-analysis'
Description-Content-Type: text/markdown

<!---
SPDX-FileCopyrightText: Copyright 2024-2026 Arm Limited and/or its affiliates <open-source-office@arm.com>
SPDX-License-Identifier: Apache-2.0
--->
<h1 align="center">Neural Graphics Model Gym</h1>
<p align="center">
<a href="https://huggingface.co/Arm/neural-super-sampling"><img alt="Model Card" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-neural%20super%20sampling%20-blue"></a>
<a href="https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/LICENSE.md"><img alt="License" src="https://img.shields.io/badge/license-Apache%20License%202.0-green"></a>
<img alt="Python versions" src="https://img.shields.io/badge/python-3.10%20%7C%203.11%20%7C%203.12-blue">
<img alt="Neural Graphics Model Gym CLI help output" src="https://raw.githubusercontent.com/arm/neural-graphics-model-gym/v0.3.0/docs/ng-model-gym-cli-hero-img.png" width="665" height="284">
</p>

> **NOTE**:
> Please be aware that this is a beta release. Beta means that the product may not be functionally or feature complete. At this early phase the product is not yet expected to fully meet the quality, testing or performance requirements of a full release. These aspects will evolve and improve over time, up to and beyond the full release. We welcome your feedback.

## Table of contents

1. [Introduction](#introduction)
2. [Quick Start](#quick-start)
    * [Prerequisites](#prerequisites)
    * [Setup](#setup)
    * [Usage](#usage)
        * [Command line usage](#command-line-usage)
        * [Usage as a Python® package](#usage-as-a-python-package)
3. [Monitoring and profiling](#monitoring-and-profiling)
4. [Logging](#logging)
5. [Testing](#testing)
6. [Adding custom models, datasets, and usecases](#adding-custom-models-datasets-and-usecases)
7. [Generating new training data](#generating-new-training-data)
8. [Troubleshooting](#troubleshooting)
9. [Code contributions](#code-contributions)
10. [Security](#security)
11. [License](#license)
12. [Trademarks and copyrights](#trademarks-and-copyrights)


## Introduction

**Neural Graphics Model Gym** is a Python® toolkit for developing real-time Neural Graphics machine learning models.

With **Neural Graphics Model Gym** you can train, fine-tune, and evaluate your Neural Graphics models.
**Neural Graphics Model Gym** also lets you quantize your model before exporting it to a format compatible with ML extensions for Vulkan®, allowing it to run on the latest mobile devices.

Currently, we include the following Neural Graphics use cases:

* Neural Super Sampling (NSS)
  * NSS enables high-fidelity, real-time graphics in game engines. By feeding low-resolution frames, along with spatial and motion information, into a neural network, it reconstructs high-resolution frames with minimal loss in quality.
* Neural Frame Rate Upscaling (NFRU)
  * NFRU enables higher frame-rate, real-time graphics in game engines. By feeding low-frame-rate sequences, along with spatial and motion information, into a neural network, it constructs intermediate frames that increase the output frame rate.


## Quick Start

### Prerequisites

To build and run Neural Graphics Model Gym, the following are required:

* Ubuntu® >= 22.04
  * Neural Graphics Model Gym has been tested on 22.04 LTS and 24.04 LTS, but should work on other Linux® distributions
* 3.10 <= Python < 3.13
* Python development package (e.g. `python3-dev`)
* NVIDIA® CUDA® capable GPU
* CUDA Toolkit v13.1.1 or later
* Git LFS
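
If in doubt, you can check whether your interpreter falls in the supported range with a short snippet like the following (`version_supported` is just a helper defined here, not part of the package):

```python
import sys

def version_supported(version_info) -> bool:
    """Return True when the interpreter satisfies 3.10 <= Python < 3.13."""
    return (3, 10) <= tuple(version_info[:2]) < (3, 13)

if version_supported(sys.version_info):
    print("Python version OK")
else:
    print("Warning: Python version outside the supported range (>=3.10, <3.13)")
```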

### Setup

1. Clone the repository:
```bash
git clone https://github.com/arm/neural-graphics-model-gym.git
```

2. Install the project:
```bash
pip install .
```
For more details including how to install in development mode and how to run using Docker see [setup.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/setup.md).

### Usage

Neural Graphics Model Gym can be used either as a command line tool or as a package which may be imported into a Python application.

Basic usage is shown here. More detailed commands can be found in [usage.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/usage.md).

#### Command line usage

List the available model configuration files:
```bash
ng-model-gym init --list
```
Running `ng-model-gym init` with no arguments also lists model configuration files.

Generate a JSON configuration file for a model. If no directory path is provided, files are saved to the current directory.
```bash
ng-model-gym init <model-template> [save_dir]
```
This file contains configuration options for the different usage modes (training, evaluation, and exporting) and paths to local datasets. Some entries have placeholder values (e.g. `<...>`); make sure to replace these with your own settings.

NFRU configurations can be generated with the `nfru` template:

```bash
ng-model-gym init nfru [save_dir]
```

> **For Windows users:**
>
> When editing `config.json`, Windows paths must either use forward slashes (`path/to/location`) or escaped backslashes (`path\\to\\location`). Single backslashes (e.g. `path\to\location`) are invalid JSON and will cause a `JSONDecodeError`.
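
The difference is easy to verify with the `json` module from the Python standard library (the `dataset` key below is only an illustrative name, not part of the actual schema):

```python
import json

# Forward slashes and escaped backslashes are both valid JSON.
print(json.loads('{"dataset": "C:/data/frames"}')["dataset"])
print(json.loads('{"dataset": "C:\\\\data\\\\frames"}')["dataset"])

# A single backslash produces an invalid escape sequence (here \d),
# so parsing raises json.JSONDecodeError.
try:
    json.loads('{"dataset": "C:\\data\\frames"}')
except json.JSONDecodeError as err:
    print("invalid JSON:", err)
```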

Use your custom configuration when invoking CLI commands by providing its path with the `--config-path` or `-c` flag as shown below:

```bash
# Perform model training and evaluation
ng-model-gym --config-path=<path/to/config.json> train
# Evaluate a previously trained model
ng-model-gym -c <path/to/config/file> evaluate --model-path=<path/to/model.pt> --model-type=<fp32|qat_int8>
# Perform quantization aware training (QAT) and evaluation
ng-model-gym -c <path/to/config/file> qat
# Export a trained model to VGF file
ng-model-gym -c <path/to/config/file> export --model-path=<path/to/model.pt> --export-type=<fp32|qat_int8|ptq_int8>
```

The `--config-path` (or `-c`) flag is **required** for the `train`, `qat`, `evaluate`, and `export` commands; they will fail if a valid config file path is not provided.

If you would like to view and download the available pre-trained models, use the following commands:

```bash
# List downloadable models hosted on the configured repositories
ng-model-gym list-models

# Download a specific model to a directory of your choice
# ng-model-gym download <repo_name>/<file_name> <destination>
ng-model-gym download neural-super-sampling/nss_v0.1.0_fp32.pt ./myfolder
```

The remote string identifier (e.g. `@neural-super-sampling/nss_v0.1.0_fp32.pt`) can also be used directly to automatically fetch and use models when running certain CLI commands. See the commands in [usage.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/usage.md) for more details.

The complete list of CLI commands can be seen by running `ng-model-gym --help` and more detailed information about the commands can be found in [usage.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/usage.md).

#### Usage as a Python package

The second way to use Neural Graphics Model Gym is to import it as a Python package.

The following snippets show how to use the package to generate a config, train and evaluate a model, and export it.

```python
import ng_model_gym as ngmg

# Generate config file in specified directory using the API or CLI
# Note: The config file must be filled in before use
ngmg.generate_config_file("nss", "/save/dir")
```

```python
import ng_model_gym as ngmg
from pathlib import Path

# Create a Config object using path to a configuration file
# and extract parameters from it.
config = ngmg.load_config_file(Path("/path/to/config/file"))

# Enable logging for ng_model_gym
ngmg.logging_config(config)

# Do training and evaluation.
trained_model_path = ngmg.do_training(config, ngmg.TrainEvalMode.FP32)
ngmg.do_evaluate(config, trained_model_path, ngmg.TrainEvalMode.FP32)

# Export the trained fp32 model to a VGF file.
ngmg.do_export(config, trained_model_path, export_type=ngmg.ExportType.FP32)
```

Jupyter® notebook tutorials on how to use the package can be found in the [neural-graphics-model-gym-examples](https://github.com/arm/neural-graphics-model-gym-examples) repository. They cover:

* Training
* Quantization-aware training and exporting
* Evaluation
* Fine-tuning
* Adding a custom model


## Monitoring and profiling

The following tools have been set up to track models during training and to capture performance profiles:
* [TensorBoard](https://www.tensorflow.org/tensorboard)
* Trace profiler
* GPU memory profiler

Their usage is demonstrated in [monitoring-and-profiling.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/monitoring-and-profiling.md).


## Logging

By default, logging is enabled and set to INFO mode, which will print helpful information during execution.
All logs will be written to an `output.log` file located within the output directory specified in the configuration file.
The logging mode is customizable by using flags with the `ng-model-gym` CLI command. See the options below for examples.

`--log-level=quiet` can be added to silence all logs except errors.

```bash
ng-model-gym --log-level=quiet -c <path/to/config/file> train
```

`--log-level=debug` can be added to print even more information during the process.

```bash
ng-model-gym --log-level=debug -c <path/to/config/file> train
```

Logging can also be configured when using the package, as follows.

```python
import ng_model_gym as ngmg
from pathlib import Path

# Create a Config object using path to a configuration file
parameters = ngmg.load_config_file(Path("/path/to/config"))

# Enable logging for ng_model_gym
ngmg.logging_config(parameters)
```

## Testing

Unit and integration tests are provided to verify the functionality of Neural Graphics Model Gym.

Testing can be run using Hatch commands. First [install Hatch and create a dev environment](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/setup.md#dev-installation). This will install all the dependencies for Neural Graphics Model Gym, plus the additional dependencies required for testing. The list of testing commands can be found [here](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/testing.md).


## Adding custom models, datasets, and usecases

Neural Graphics Model Gym supports adding custom models and datasets, enabling their use across all workflows. Detailed documentation on how to implement this can be found in [custom-models-and-datasets.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/adding-custom-models-and-datasets.md#adding-custom-models-and-datasets).

We also support defining custom use cases to group together related models, datasets, configurations, and any additional required code. See [adding custom use cases](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/adding-custom-models-and-datasets.md#adding-custom-use-cases) for the implementation guide.


## Generating new training data

To train the Neural Super Sampling model, you will first need to capture training data from your game engine in the format expected by the model. Information regarding the types of data to capture and how to convert your captured frames can be found [here](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/nss/nss_data_generation.md).

For Neural Frame Rate Upscaling training data, see [nfru_data_generation.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/nfru/nfru_data_generation.md).


## Troubleshooting

A list of common known issues and their workarounds can be found in [troubleshooting.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/troubleshooting.md).


## Code contributions

The Neural Graphics Model Gym project welcomes contributions. For more details on contributing to the project, please see [CONTRIBUTING.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/CONTRIBUTING.md).


## Security

Arm takes security issues seriously: please see [SECURITY.md](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/SECURITY.md) for more details.

After creating an [editable installation using Hatch](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/docs/setup.md#dev-installation), you can run the security vulnerabilities checker with the following command:
```bash
hatch run static-analysis:bandit-check
```

## License

Neural Graphics Model Gym is licensed under [Apache License 2.0](https://github.com/arm/neural-graphics-model-gym/blob/v0.3.0/LICENSE.md).


## Trademarks and copyrights

* Linux® is the registered trademark of Linus Torvalds in the U.S. and elsewhere.
* Python® is a registered trademark of the Python Software Foundation.
* Ubuntu® is a registered trademark of Canonical.
* Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.
* NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.
* “Jupyter” and the Jupyter logos are trademarks or registered trademarks of LF Charities.
* Vulkan is a registered trademark and the Vulkan SC logo is a trademark of the Khronos Group Inc.
* Microsoft and Windows are trademarks of the Microsoft group of companies.
