Metadata-Version: 2.4
Name: coreml-complexity-analyzer
Version: 0.1.0
Summary: FLOPS estimation, memory analysis, and per-layer profiling for Core ML models
Author: Devavarapu Yashwanth, Ireddi Rakshitha
License: BSD-3-Clause
Project-URL: Homepage, https://github.com/yaswanth169/coreml-complexity-analyzer
Project-URL: Repository, https://github.com/yaswanth169/coreml-complexity-analyzer
Project-URL: Issues, https://github.com/yaswanth169/coreml-complexity-analyzer/issues
Project-URL: Documentation, https://github.com/yaswanth169/coreml-complexity-analyzer#readme
Keywords: coreml,flops,model-analysis,complexity,apple,mlmodel,machine-learning,model-profiling,neural-network
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: coremltools>=7.0
Requires-Dist: numpy
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Dynamic: license-file

# coreml-complexity-analyzer

FLOPS estimation, memory analysis, and per-layer profiling for Core ML models.

[![PyPI version](https://badge.fury.io/py/coreml-complexity-analyzer.svg)](https://pypi.org/project/coreml-complexity-analyzer/)
[![License: BSD-3-Clause](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)

## Why This Exists

Before deploying Core ML models to Apple devices, developers need to understand:

- **Compute cost**: How many FLOPS/MACs does the model require?
- **Memory footprint**: How much memory for parameters and activations?
- **Bottleneck layers**: Which layers dominate compute and memory?

`coremltools` does not provide these analysis capabilities out of the box. This package fills that gap.

## Installation

```bash
pip install coreml-complexity-analyzer
```

Requirements: `coremltools >= 7.0`, `numpy`

## Quick Start

```python
import coremltools as ct
from coreml_complexity_analyzer import generate_report

model = ct.models.MLModel("my_model.mlpackage")
report = generate_report(model, "ResNet50")

print(f"Total GFLOPS: {report.total_gflops:.2f}")
print(f"Parameters: {report.parameters_millions:.2f}M")
print(f"Memory: {report.memory_breakdown.total_mb:.2f} MB")

# Full markdown report
print(report.to_markdown())
```

## API

### FLOPSAnalyzer

Count the floating-point operations performed by each layer of the model.

```python
from coreml_complexity_analyzer import FLOPSAnalyzer

analyzer = FLOPSAnalyzer(model)
total = analyzer.get_total_flops()
breakdown = analyzer.get_layer_breakdown()
by_type = analyzer.get_flops_by_op_type()
```
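Tools that report FLOPS typically follow the convention that one multiply-accumulate (MAC) equals two floating-point operations; whether this package uses exactly that convention is an assumption here, not something its API documents. A back-of-envelope sketch with a hypothetical MAC count:

```python
# Illustrative only: FLOPS counters commonly treat one multiply-accumulate
# (MAC) as two floating-point operations. The MAC count below is a
# hypothetical figure chosen to mirror the ResNet50 example report.
macs = 2_044_592_128
flops = 2 * macs          # one multiply + one add per MAC
print(f"{flops / 1e9:.2f} GFLOPS")  # 4.09 GFLOPS
```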

### MemoryEstimator

Estimate memory requirements (parameters + activations + overhead).

```python
from coreml_complexity_analyzer import MemoryEstimator

estimator = MemoryEstimator(model)
breakdown = estimator.estimate()
param_count = estimator.get_parameter_count()

print(f"Total: {breakdown.total_mb:.2f} MB")
print(f"Parameters: {breakdown.parameter_mb:.2f} MB")
```
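As a sanity check on the parameter figure, FP32 weights occupy 4 bytes each. This is an assumption for illustration; the estimator may track other dtypes and include overhead, so its numbers can differ slightly:

```python
# Back-of-envelope parameter memory, assuming FP32 (4 bytes per weight).
# The parameter count is the ResNet50-class figure from the example
# report; real models may mix dtypes, which changes the result.
params = 25_557_032
param_mb = params * 4 / (1024 ** 2)   # bytes -> MiB
print(f"{param_mb:.2f} MB")  # 97.49 MB
```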

### LayerProfiler

Detailed per-layer analysis combining FLOPS, memory, and shape information.

```python
from coreml_complexity_analyzer import LayerProfiler

profiler = LayerProfiler(model)
profiles = profiler.profile()

top_layers = profiler.get_top_layers(n=5, by="flops")
for layer in top_layers:
    print(f"{layer.name}: {layer.mflops:.2f} MFLOPS")
```

### generate_report

Generate comprehensive reports combining all analyses.

```python
from coreml_complexity_analyzer import generate_report

report = generate_report(model, "ModelName")

# Output formats
print(report.to_markdown())   # Markdown table
print(report.to_text())       # Plain text
data = report.to_dict()       # Dictionary
```

## Supported Operations

| Category | Operations |
|----------|-----------|
| Convolutions | `conv`, `conv_transpose` |
| Linear | `linear`, `matmul`, `einsum` |
| Activations | `relu`, `sigmoid`, `tanh`, `gelu`, `silu`, `softplus` |
| Normalization | `batch_norm`, `layer_norm`, `instance_norm` |
| Pooling | `max_pool`, `avg_pool` |
| Element-wise | `add`, `sub`, `mul`, `div` |
| Reductions | `reduce_sum`, `reduce_mean`, `reduce_max`, `reduce_min` |
| Other | `softmax` |
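The per-op formulas are internal to the analyzer, but the standard textbook estimate for a 2-D convolution can be sketched as follows. This is the common convention (FLOPs = 2 × MACs), not a guaranteed match for the package's implementation:

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w, groups=1):
    """Standard 2-D convolution FLOPs estimate: one multiply plus one
    add per multiply-accumulate (MAC), i.e. FLOPs = 2 * MACs."""
    macs = h_out * w_out * c_out * (c_in // groups) * k_h * k_w
    return 2 * macs

# A 3x3 convolution mapping 64 -> 64 channels on a 56x56 feature map:
print(conv2d_flops(56, 56, 64, 64, 3, 3))  # 231211008
```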

## Example Output

```
============================================================
Model Complexity Report: ResNet50
============================================================
SUMMARY
----------------------------------------
  Total FLOPS:            4,089,184,256
  Total GFLOPS:                    4.09
  Parameters:                25,557,032
  Parameters (M):                 25.56

MEMORY ANALYSIS
----------------------------------------
  Total Memory:              106.42 MB
  Parameters:                 97.52 MB

TOP OPERATIONS BY FLOPS
----------------------------------------
  conv                 3,891,200,000 ( 95.2%)
  linear                 102,400,000 (  2.5%)
  batch_norm              51,200,000 (  1.3%)
============================================================
```

## Use Cases

- **Model Optimization**: Identify bottleneck layers before applying compression
- **Deployment Planning**: Estimate whether a model fits within device constraints
- **Model Comparison**: Compare efficiency across architectures
- **Research**: Analyze compute/memory trade-offs
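For deployment planning, a budget gate can be as simple as the sketch below. Both numbers are purely illustrative: 106.42 MB echoes the example report, and the 150 MB budget is an arbitrary figure, not a real device specification:

```python
# Hypothetical budget check for deployment planning. The budget value is
# an assumption, not a documented device limit.
def fits_budget(total_mb: float, budget_mb: float) -> bool:
    return total_mb <= budget_mb

print(fits_budget(106.42, budget_mb=150.0))  # True
```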

## Development

```bash
git clone https://github.com/yaswanth169/coreml-complexity-analyzer.git
cd coreml-complexity-analyzer
pip install -e ".[dev]"
pytest
```

## License

BSD-3-Clause

## Authors

- Devavarapu Yashwanth
- Ireddi Rakshitha
