Metadata-Version: 2.4
Name: npmlp-core
Version: 1.0.0
Summary: A modular Deep Learning framework built from scratch using NumPy
Author-email: kipngeno koech <dev@kipngenokoech.com>
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: numpy>=1.21.0
Requires-Dist: scipy>=1.7.0

# npmlp-core: Modular Deep Learning Framework from Scratch

A lightweight, modular deep learning library built entirely in NumPy. This project implements the core components of modern neural networks, including backpropagation, vectorized optimization, and advanced regularization techniques, without relying on high-level frameworks such as PyTorch or TensorFlow.

## Installation

```bash
pip install npmlp-core
```

## Quick Start

```python
import numpy as np
from mytorch.models import MLP1
from mytorch.optim import SGD
from mytorch.nn import CrossEntropyLoss

# Create a model (input: 784, output: 10 classes)
model = MLP1(784, 10)

# Set up optimizer and loss
optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = CrossEntropyLoss()

# Training step
x = np.random.randn(32, 784)  # batch of 32
y = np.random.randint(0, 10, 32)  # labels

# Forward pass
output = model.forward(x)
loss = criterion.forward(output, y)

# Backward pass
grad = criterion.backward()
model.backward(grad)

# Update weights
optimizer.step()
optimizer.zero_grad()
```

## Key Features

- **Core Layers**: Linear layers with complete forward and backward passes (gradients with respect to inputs, weights, and biases).
- **Advanced Activations**: ReLU, Sigmoid, and Tanh, plus modern industry-standard functions such as GELU (Gaussian Error Linear Unit) and Swish (see the sketch after this list).
- **Regularization**: Vectorized BatchNorm1d with training/inference mode logic and running statistics.
- **Optimizers**: Stochastic Gradient Descent (SGD) with configurable Momentum.
- **Loss Functions**: MSELoss and CrossEntropyLoss with numerically stable Softmax integration.
- **Architectures**: Pre-configured Multi-Layer Perceptron (MLP) models ranging from shallow (MLP0) to deep (MLP4) configurations.
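
To make the activation math concrete, here is a minimal NumPy sketch of GELU and Swish. The function names are illustrative, not necessarily the package's internal API; `scipy.special` (already a dependency) provides the error function and a numerically stable sigmoid.

```python
import numpy as np
from scipy.special import erf, expit

def gelu(x):
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def swish(x):
    # Swish (SiLU): x * sigmoid(x); expit is a stable sigmoid
    return x * expit(x)
```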

## Technical Highlights

### 1. Vectorized Backpropagation

Every layer in this library implements its own forward and backward pass. Gradients are computed with vectorized matrix operations over whole batches, avoiding Python-level loops and keeping CPU performance high.
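
The pattern is easy to see in a linear layer: cache the input during the forward pass, then express all three gradients as matrix products. The sketch below assumes this caching convention and is illustrative rather than the library's exact code.

```python
import numpy as np

class Linear:
    """Minimal sketch of a linear layer: y = x @ W + b."""

    def __init__(self, in_features, out_features):
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x  # cache the batch input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # One matrix product per gradient: no loop over the batch
        self.dW = self.x.T @ grad_out   # shape (in_features, out_features)
        self.db = grad_out.sum(axis=0)  # shape (out_features,)
        return grad_out @ self.W.T      # gradient w.r.t. the layer input
```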

### 2. Numerical Stability

The Softmax and CrossEntropy implementations use the max-subtraction trick, subtracting each row's maximum logit before exponentiation, to prevent floating-point overflow.
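
A sketch of the idea follows; the helper names are hypothetical, but the max subtraction is the standard technique. Because softmax is shift-invariant, subtracting the row maximum changes nothing mathematically while keeping every exponent at or below zero.

```python
import numpy as np

def stable_softmax(logits):
    # Shift each row so its maximum is 0; exp() can then never overflow
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Compute log-softmax directly instead of log(softmax(...))
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```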

### 3. Training vs. Inference Mode

The BatchNorm1d layer tracks global running means and variances with an exponential moving average (controlled by a momentum parameter), so that models behave consistently at inference time, even when evaluated on single samples where batch statistics are undefined.
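
The train/eval split can be sketched as follows. Attribute names and the EMA convention (PyTorch-style, where momentum weights the new batch statistic) are assumptions for illustration:

```python
import numpy as np

class BatchNorm1d:
    """Illustrative sketch of running-statistics logic."""

    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.gamma = np.ones(num_features)   # learnable scale
        self.beta = np.zeros(num_features)   # learnable shift
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.momentum, self.eps = momentum, eps

    def forward(self, x, training=True):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # Exponential moving average of per-batch statistics
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            # Inference: use accumulated statistics, valid even for one sample
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta
```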

## Project Structure

```text
mlp-core/
├── src/
│   └── mytorch/
│       ├── nn/         # Layers (Linear, BatchNorm) and Activations
│       ├── optim/      # SGD with Momentum
│       ├── models/     # MLP Architectures
│       └── loss.py     # MSE and Cross-Entropy
├── requirements.txt    # Minimal dependencies (NumPy, SciPy)
└── README.md
```