Metadata-Version: 2.4
Name: eml-cost-torch
Version: 0.1.0a0
Summary: PyTorch profiler plugin for Pfaffian depth/width per layer.
Author: Monogate Research
License: PROPRIETARY-PRE-RELEASE
Keywords: pytorch,profiler,neural-network,pfaffian,complexity
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: eml-cost>=0.1.0a0
Requires-Dist: sympy>=1.12
Requires-Dist: torch>=2.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: mypy>=1.5; extra == "dev"
Dynamic: license-file

# eml-cost-torch

**Pre-release. Patent pending.** Not for redistribution.

PyTorch profiler plugin that computes a per-layer Pfaffian profile of a `torch.nn.Module`.

## Status

Pre-release. Covered by Monogate Research patents #11 and #12.

## Quick start

```python
import torch.nn as nn
from eml_cost_torch import profile

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.GELU(),
    nn.Linear(32, 10),
)

p = profile(model)

print(p.total_layers)                  # 5
print(p.total_pfaffian_depth)          # 0 — all r=0 (ReLU=0, GELU=non-EML, Linear=0)
print(p.transcendental_layer_count)    # 0
print(p.non_eml_layer_count)           # 1 — GELU uses erf
print(p.estimated_pfaffian_width)      # 0 — no softmax/attention

for layer in p.layers:
    print(f"  {layer.name:8s}  {layer.activation:30s}  r={layer.pfaffian_r}")
```

## What it does

Walks the module graph statically (it does NOT execute the model), classifies
each leaf module against an internal registry of activation/operator types, and
returns a structured profile; a sketch of the lookup follows the list below.

Registry covers ~50 standard `torch.nn` modules:
- Linear / Conv / Norm: `r=0` (polynomial)
- ReLU family / Hard sigmoid / Hard swish: `r=0`
- Sigmoid, Tanh, Softplus, ELU, SiLU/Swish: `r=1`
- Mish: `r=3`
- GELU: flagged `is_pfaffian_not_eml=True` (uses erf, outside EML class)
- Softmax / MultiheadAttention: contributes to `estimated_pfaffian_width`
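
As a minimal sketch of the lookup (a hypothetical registry slice and `classify`
helper for illustration; not the plugin's actual internals):

```python
# Illustrative only: a tiny hypothetical slice of the registry plus the
# static walk over named_modules(). Nothing here runs the model.
import torch.nn as nn

_REGISTRY = {
    nn.Linear:  dict(pfaffian_r=0, is_pfaffian_not_eml=False),  # polynomial
    nn.ReLU:    dict(pfaffian_r=0, is_pfaffian_not_eml=False),  # piecewise linear
    nn.Sigmoid: dict(pfaffian_r=1, is_pfaffian_not_eml=False),  # one chain link
    nn.GELU:    dict(pfaffian_r=0, is_pfaffian_not_eml=True),   # erf is outside EML
}

def classify(model: nn.Module):
    """Yield (name, class name, registry entry) without executing the model."""
    for name, module in model.named_modules():
        entry = _REGISTRY.get(type(module))
        if entry is not None:
            yield name, type(module).__name__, entry

for name, cls, entry in classify(nn.Sequential(nn.Linear(8, 8), nn.GELU())):
    print(name, cls, entry)
```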

## Why this matters for architecture search

According to the research underlying patent #11, neural-network training cost
correlates with Pfaffian width. Because this profile is computed statically, it
can serve as a search heuristic before any training run.
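
For example, candidate blocks can be ranked before any training using only the
documented API (the models and names here are illustrative):

```python
import torch.nn as nn
from eml_cost_torch import profile

# Two toy candidate blocks for the same task.
candidates = {
    "tanh_mlp": nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 10)),
    "relu_mlp": nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)),
}

profiles = {name: profile(m) for name, m in candidates.items()}

# Rank by estimated width first (the correlate of training cost), then depth.
ranked = sorted(
    profiles,
    key=lambda n: (profiles[n].estimated_pfaffian_width,
                   profiles[n].total_pfaffian_depth),
)
print("cheapest first:", ranked)  # ['relu_mlp', 'tanh_mlp']
```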

## Library API

```python
from eml_cost_torch import profile, ModelProfile, LayerProfile

p: ModelProfile = profile(model)

# Aggregate fields
p.layers                       # list[LayerProfile]
p.total_layers                 # int
p.total_pfaffian_depth         # sum of r over all layers
p.total_eml_depth              # sum of eml_depth over all layers
p.transcendental_layer_count   # count of layers with r >= 1
p.non_eml_layer_count          # count of layers using non-EML primitives (e.g., GELU)
p.estimated_pfaffian_width     # parallel-chain count (softmax + attention)
p.total_params                 # parameter count

# Per-layer fields
layer.name                  # named_modules path
layer.cls_name              # Python class name
layer.activation            # friendly description
layer.pfaffian_r            # chain order
layer.eml_depth             # routing depth
layer.is_pfaffian_not_eml   # True for GELU and similar
layer.n_params              # parameters at this layer
```
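
A common follow-up is to flag layers that fall outside the EML class so they
can be swapped for EML-friendly alternatives before training (a short sketch
using the fields above):

```python
# Report layers flagged as Pfaffian-but-not-EML (e.g., GELU).
for layer in p.layers:
    if layer.is_pfaffian_not_eml:
        print(f"{layer.name}: {layer.cls_name} ({layer.activation}), "
              f"{layer.n_params} params")
```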

## License

`PROPRIETARY-PRE-RELEASE`. See LICENSE.
