Metadata-Version: 2.1
Name: Tenminator2
Version: 2.0.0
Summary: Tenminator2 — Ultra-Lightweight Deep Learning Framework
Home-page: https://github.com/yoqer/TenMiNaTor
Author: yoqer
Author-email: 
License: MIT
Project-URL: Homepage, https://github.com/yoqer/TenMiNaTor
Project-URL: Bug Tracker, https://github.com/yoqer/TenMiNaTor/issues
Keywords: deep learning,machine learning,neural networks,transformer,llm,quantization
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.20.0
Provides-Extra: numba
Requires-Dist: numba>=0.55.0; extra == "numba"
Provides-Extra: jax
Requires-Dist: jax>=0.4.0; extra == "jax"
Requires-Dist: jaxlib>=0.4.0; extra == "jax"
Provides-Extra: cupy
Requires-Dist: cupy>=11.0.0; extra == "cupy"
Provides-Extra: scipy
Requires-Dist: scipy>=1.7.0; extra == "scipy"
Provides-Extra: all
Requires-Dist: numba>=0.55.0; extra == "all"
Requires-Dist: scipy>=1.7.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=3.0.0; extra == "dev"
Requires-Dist: black>=22.0.0; extra == "dev"
Requires-Dist: build>=0.10.0; extra == "dev"
Requires-Dist: twine>=4.0.0; extra == "dev"

# Tenminator2

**Ultra-Lightweight Deep Learning Framework** — `pip install Tenminator2`

A training and inference framework for language models, designed for devices with limited resources.

---

## Installation

```bash
pip install Tenminator2
# With numba support (CPU acceleration); quotes protect the
# brackets from shells like zsh:
pip install "Tenminator2[numba]"
# With JAX support:
pip install "Tenminator2[jax]"
```

---

## Quick Start

```python
import Tenminator2 as tm

# Build a simple model
model = tm.Sequential(
    tm.Linear(784, 256),
    tm.ReLU(),
    tm.Dropout(p=0.2),
    tm.Linear(256, 10),
)

# Input tensor — its feature dimension must match the first
# layer's input size (784)
x = tm.Tensor([[0.0] * 784])
out = model(x)
print(out.shape)
```

---

## Available Modules

| Module | Description |
|--------|-------------|
| `Tensor` | Tensor with autograd support |
| `Linear` | Fully connected (linear) layer |
| `Embedding` | Embedding lookup table |
| `LayerNorm` | Layer normalization |
| `RMSNorm` | Root mean square normalization |
| `ReLU` | ReLU activation |
| `GELU` | GELU activation |
| `SiLU` | SiLU / Swish activation |
| `Dropout` | Dropout regularization |
| `Sequential` | Sequential container |
| `MultiHeadAttention` | Multi-head attention |
| `TransformerBlock` | Complete Transformer block |
| `CrossEntropyLoss` | Cross-entropy loss |
| `MSELoss` | Mean squared error |
| `BCELoss` | Binary cross-entropy |
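
As a point of reference for `RMSNorm` (less common than `LayerNorm`), the underlying math can be sketched in plain NumPy. This illustrates the formula only, not this library's implementation:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Scale by the reciprocal root mean square over the last axis;
    # unlike LayerNorm, no mean is subtracted.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = rms_norm(x, np.ones(4))
# After normalization, the mean square of each row is ~1.
```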

---

## Optimizers

```python
optimizer = tm.Adam(model.parameters(), lr=1e-3)
optimizer.step()
optimizer.zero_grad()
```

| Optimizer | Description |
|-------------|-------------|
| `SGD` | Stochastic gradient descent |
| `Adam` | Adam with bias correction |
| `AdamW` | Adam with decoupled weight decay |
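
The difference between `Adam` and `AdamW` is where weight decay enters the update. A NumPy sketch of a single AdamW step, shown for the math only (not this library's code):

```python
import numpy as np

def adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    # First and second moment estimates with bias correction.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Decoupled weight decay: applied to the parameter directly,
    # not folded into the gradient as in classic L2 regularization.
    p = p - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * p)
    return p, m, v

p, m, v = np.array(1.0), np.array(0.0), np.array(0.0)
p, m, v = adamw_step(p, np.array(1.0), m, v, t=1)
```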

---

## Backends

```python
tm.set_backend("numpy")   # Default
tm.set_backend("numba")   # CPU acceleration
tm.set_backend("jax")     # JAX/XLA
tm.print_backend_info()
```

---

## Transformer

```python
block = tm.TransformerBlock(
    embed_dim=512,
    num_heads=8,
    ff_dim=2048,
    dropout=0.1,
    norm_type="layer",  # or "rms" for RMSNorm
)
```
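
At the core of `MultiHeadAttention` is scaled dot-product attention. A single-head NumPy sketch of the formula softmax(QKᵀ/√d)·V — shapes and helper names here are illustrative, not this library's internals:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # softmax(Q K^T / sqrt(d)) V — each output row is a convex
    # combination of the value rows.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(2, 4, 8)) for _ in range(3))
out = attention(q, k, v)  # shape (2, 4, 8)
```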

---

## Steering and Bias Correction

```python
import numpy as np

# Apply a steering vector to a layer
sv = np.random.randn(256)
with tm.SteeringHook(model._modules["0"], sv, strength=0.1):
    output = model(x)

# Bias correction during inference
cv = np.zeros(256)
with tm.BiasCorrector(model._modules["0"], cv, strength=0.05):
    prediction = model(input_data)
```
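
Conceptually, a steering hook shifts a layer's activations along a chosen direction, h' = h + strength · sv. A minimal NumPy illustration of that arithmetic (the hook mechanism itself is not shown):

```python
import numpy as np

def apply_steering(h, sv, strength=0.1):
    # Push the representation along the steering direction;
    # strength controls how far it moves.
    return h + strength * sv

h = np.zeros((1, 256))
sv = np.random.default_rng(0).normal(size=256)
steered = apply_steering(h, sv, strength=0.1)
```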

---

## Training

```python
config = tm.TrainingConfig(
    max_iterations=100,
    early_stop_patience=12,
    checkpoint_dir="./checkpoints",
)

controller = tm.TrainingController(model, optimizer, loss_fn, config)

for data in dataloader:
    if not controller.should_continue():
        break
    loss = train_step(data)
    controller.update(loss)
```
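
The `early_stop_patience` setting suggests patience-based early stopping. The general pattern can be sketched as follows — a generic sketch, not `TrainingController`'s actual logic:

```python
class EarlyStopper:
    """Stop once the loss has not improved for `patience` consecutive updates."""

    def __init__(self, patience=12):
        self.patience = patience
        self.best = float("inf")
        self.bad_steps = 0

    def update(self, loss):
        # Reset the counter on improvement; otherwise accumulate bad steps.
        if loss < self.best:
            self.best = loss
            self.bad_steps = 0
        else:
            self.bad_steps += 1

    def should_continue(self):
        return self.bad_steps < self.patience
```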

---

## CLI

```bash
Tenminator2 info
Tenminator2 train --data data.csv --config config.json
Tenminator2 evaluate --model checkpoint.pth --data test.csv
```

---

## License

MIT — [github.com/yoqer/Tenminator2](https://github.com/yoqer/Tenminator2)
