Metadata-Version: 2.4
Name: ewe-gate
Version: 0.1.0
Summary: Epistemic Weight Engine — Pre-update gating for noise-robust AI learning
Home-page: https://github.com/maheeppurohit/epistemic-weight-engine
Author: Maheep Purohit
Author-email: Maheep Purohit <purohitmaheep@gmail.com>
Project-URL: Homepage, https://github.com/maheeppurohit/epistemic-weight-engine
Project-URL: Paper, https://doi.org/10.5281/zenodo.18940011
Keywords: machine learning,deep learning,noisy labels,robust learning,pytorch
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: torch>=2.0.0
Requires-Dist: numpy>=1.24.0
Dynamic: author
Dynamic: home-page
Dynamic: requires-python

# ewe-gate

**Epistemic Weight Engine** — A plug-and-play pre-update gating mechanism for noise-robust AI learning.

[![PyPI](https://img.shields.io/pypi/v/ewe-gate)](https://pypi.org/project/ewe-gate/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Paper](https://img.shields.io/badge/Paper-ACM%20TIST-blue)](https://doi.org/10.5281/zenodo.18940011)

---

## What is EWE?

Standard training treats every sample equally, regardless of whether the information it carries is reliable, novel, or approval-biased.

**EWE** adds a gate before every parameter update. The gate asks three questions of each sample:

| Layer | Signal | Question |
|---|---|---|
| I(x) | Gradient magnitude | Does this sample actually matter? |
| R(x) | Evidence vs approval | Is this label reliable or just popular? |
| P(x) | Loss deviation | Is this genuinely new information? |

Only samples that pass the gate trigger a parameter update. Everything else is skipped.
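
The exact gating rule lives in the paper and the package internals; the sketch below is purely illustrative. It assumes the three signals are combined as a weighted sum and thresholded by the sensitivity `k` (the parameter names and defaults come from the Configuration section below; the weighted-sum form itself is a guess, not the published formula):

```python
def gate_decision(I, R, P, alpha=0.45, beta=0.40, gamma=0.15, k=0.25):
    """Illustrative sketch only, not the package's actual gate."""
    w = alpha * I + beta * R + gamma * P  # composite epistemic weight per sample
    return w >= k                         # higher k = stricter gate
```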

**Results on CIFAR-10N with 40% real human label noise:**

| Method | Accuracy |
|---|---|
| Standard Training | 72.37% |
| **EWE (ours)** | **79.36% (+6.99 pts)** |
| **EWE + GCE (ours)** | **84.33% (+11.96 pts)** |

---

## Install

```bash
pip install ewe-gate
```

---

## Quick Start

### Option 1 — Just the gate (most flexible)

```python
import torch
import torch.nn as nn
from ewe import EWEGate

model = YourModel()  # placeholder for any torch.nn.Module (toy example below)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss(reduction='none')  # per-sample losses are required
gate = EWEGate()

for epoch in range(50):
    for x, y in dataloader:
        optimizer.zero_grad()
        outputs = model(x)
        losses = criterion(outputs, y)

        # EWE filters which samples update parameters
        filtered_loss = gate.filter_losses(losses, outputs.detach())
        if filtered_loss is not None:
            filtered_loss.backward()
            optimizer.step()

    print(f"Epoch {epoch} | Accept rate: {gate.acceptance_rate:.1%}")
```
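
`YourModel` and `dataloader` above are placeholders. To smoke-test the loop end to end, a minimal synthetic setup (the shapes, sizes, and architecture here are arbitrary choices, not recommendations) could be:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy 4-class problem standing in for a real data pipeline
X = torch.randn(256, 20)
y = torch.randint(0, 4, (256,))
dataloader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
```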

### Option 2 — EWETrainer (simplest)

```python
from ewe import EWETrainer
import torch.nn as nn

# model and optimizer defined as in Option 1
trainer = EWETrainer(
    model=model,
    optimizer=optimizer,
    criterion=nn.CrossEntropyLoss(reduction='none'),  # per-sample losses
)

for epoch in range(50):
    loss, acc, rate = trainer.train_epoch(train_loader)
    val_acc = trainer.evaluate(val_loader)
    print(f"Epoch {epoch} | Loss: {loss:.3f} | Acc: {acc:.1f}% | Accept: {rate:.1%}")
```

### Option 3 — EWE + GCE combined (best results)

```python
from ewe import EWEGate, GCELoss

gate = EWEGate()
criterion = GCELoss(q=0.7)  # Noise-robust loss

for x, y in dataloader:
    optimizer.zero_grad()
    outputs = model(x)
    losses = criterion(outputs, y, reduction='none')

    filtered_loss = gate.filter_losses(losses, outputs.detach())
    if filtered_loss is not None:
        filtered_loss.backward()
        optimizer.step()
```

---

## Configuration

```python
gate = EWEGate(
    alpha=0.45,      # Weight for Impact module
    beta=0.40,       # Weight for Reality Alignment (most important)
    gamma=0.15,      # Weight for Paradigm Shift
    k=0.25,          # Gate sensitivity (higher = stricter)
    lam=0.5,         # Approval penalty strength
    ema_decay=0.99,  # Loss baseline decay
)
```

**Key parameter: `k`**
- `k=0.10` → ~70% acceptance (loose gate)
- `k=0.25` → ~60% acceptance (default, balanced)
- `k=0.50` → ~50% acceptance (strict gate)
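
To check these figures on your own data, sweep `k` and read back `acceptance_rate`. A hedged sketch, reusing the toy `dataloader` and the loop from Option 1 (one pass per setting; the acceptance rates above are rough guides, and real data will differ):

```python
import torch
import torch.nn as nn
from ewe import EWEGate

for k in (0.10, 0.25, 0.50):
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss(reduction='none')
    gate = EWEGate(k=k)
    for xb, yb in dataloader:  # the toy DataLoader from the Option 1 sketch
        optimizer.zero_grad()
        outputs = model(xb)
        filtered = gate.filter_losses(criterion(outputs, yb), outputs.detach())
        if filtered is not None:
            filtered.backward()
            optimizer.step()
    print(f"k={k:.2f} -> acceptance {gate.acceptance_rate:.1%}")
```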

---

## Monitor the gate

```python
# After training with the gate from Quick Start
print(f"Acceptance rate: {gate.acceptance_rate:.1%}")
print(f"Suppression rate: {gate.suppression_rate:.1%}")

# Reset stats between experiments
gate.reset_stats()
```

---

## Available loss functions

```python
from ewe import GCELoss, LabelSmoothingLoss

# Generalised Cross-Entropy (Zhang & Sabuncu 2018)
criterion = GCELoss(q=0.7)

# Label Smoothing (Szegedy et al. 2016)  
criterion = LabelSmoothingLoss(smoothing=0.1)
```
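
GCE interpolates between standard cross-entropy (as q → 0) and MAE (at q = 1), which is what gives it noise tolerance. For reference, a minimal per-sample rendering of the published formula (not necessarily how this package's `GCELoss` is implemented):

```python
import torch
import torch.nn.functional as F

def gce(logits, targets, q=0.7):
    # L_q(p_y) = (1 - p_y^q) / q, per Zhang & Sabuncu (2018)
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (1.0 - p_y.clamp_min(1e-12).pow(q)) / q  # per-sample losses
```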

---

## Citation

If you use EWE in your research, please cite:

```bibtex
@article{purohit2026ewe,
  title={Epistemic Weight Engine (EWE): A Framework for Signal-Reliability-Weighted
         Learning in Artificial Neural Systems, with Multi-Dataset Experimental Evaluation},
  author={Purohit, Maheep},
  journal={ACM Transactions on Intelligent Systems and Technology},
  year={2026},
  note={Manuscript ID: TIST-2026-03-0289},
  doi={10.5281/zenodo.18940011}
}
```

---

## Paper and Code

- **Paper (preprint):** https://doi.org/10.5281/zenodo.18940011
- **Experiments:** https://github.com/maheeppurohit/epistemic-weight-engine
- **Contact:** purohitmaheep@gmail.com

---

## Author

**Maheep Purohit**
Independent Researcher, Bikaner, Rajasthan, India
Patent Applicant: Adaptive Intelligent Pipeline Integrity System (filed 2025)

*This research was conducted entirely independently without institutional affiliation, laboratory access, external funding, or academic supervision.*
