Metadata-Version: 2.4
Name: numpai
Version: 0.1.0
Summary: numpy, but an LLM does the math. (a humor project)
Project-URL: Homepage, https://github.com/amantham20/numpai
Project-URL: Issues, https://github.com/amantham20/numpai/issues
Author-email: Aman Dhruva Thamminana <thammina@msu.edu>
Maintainer-email: Aman Dhruva Thamminana <thammina@msu.edu>
License: MIT
License-File: LICENSE
Keywords: ai,humor,llm,numpy,satire
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering
Requires-Python: >=3.10
Provides-Extra: all
Requires-Dist: anthropic>=0.39; extra == 'all'
Requires-Dist: google-genai>=0.3; extra == 'all'
Requires-Dist: openai>=1.40; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.39; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: anthropic>=0.39; extra == 'dev'
Requires-Dist: google-genai>=0.3; extra == 'dev'
Requires-Dist: numpy>=1.26; extra == 'dev'
Requires-Dist: openai>=1.40; extra == 'dev'
Requires-Dist: pytest>=8; extra == 'dev'
Provides-Extra: gemini
Requires-Dist: google-genai>=0.3; extra == 'gemini'
Provides-Extra: openai
Requires-Dist: openai>=1.40; extra == 'openai'
Description-Content-Type: text/markdown

# numpai

> **The fastest way to say your product uses AI.**

[![AI-Powered](https://img.shields.io/badge/AI--Powered-yes-blueviolet)](#)
[![ML-Driven](https://img.shields.io/badge/ML--Driven-technically-blue)](#)
[![Cutting Edge](https://img.shields.io/badge/cutting%20edge-2026-ff69b4)](#)
[![GenAI Native](https://img.shields.io/badge/GenAI-native-orange)](#)

`numpai` is a drop-in replacement for `numpy` that routes every operation
through a large language model. Your tests still pass. Your dashboards still
render. Your CEO can finally update the deck.

```python
import numpai as np   # one-character diff. AI strategy: shipped.
```

---

## Why your team needs this

Your product doesn't use AI. Your competitors' products don't either, but
their landing page says they do. You're losing deals you shouldn't be losing
to companies whose "proprietary ML pipeline" is a Zapier zap.

`numpai` fixes the gap. After installing it, the following statements are
all literally true:

- "Our numerical core is powered by a frontier large language model."
- "We use generative AI in our data pipeline."
- "Every computation in our product is reviewed by an LLM."
- "We have a strategic partnership with [Anthropic / OpenAI / Google]."
- "Our matrix multiplications are AI-native."

You did not lie. `numpai` is doing all of that. Possibly slowly. Possibly
wrong. But doing it.

## Standup talking points (free with install)

Drop these into any meeting:

- "I migrated `dot_product` to the LLM backend this sprint."
- "We're seeing a 100% AI adoption rate on the analytics service."
- "The model inferred the variance correctly 84% of the time, which is
  on par with last quarter."
- "I'm going to need a budget line for inference."

## Quick start

```bash
pip install -e ".[anthropic]"   # from a source checkout; or [openai], [gemini], or [all]
export ANTHROPIC_API_KEY=sk-ant-...
# or: export OPENAI_API_KEY=sk-...
# or: export GEMINI_API_KEY=...
```

```python
import numpai as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(a + b)            # array([5, 7, 9])   ← the LLM did this
print((a + b).sum())    # 21                  ← also the LLM
print(np.linalg.inv([[1, 2], [3, 4]]))   # please do not do this in prod
```

## How it works

`numpai/__init__.py` defines a module-level `__getattr__` (PEP 562). Any
attribute access — `np.foo`, `np.linalg.inv`, `np.fft.whatever`, even
`np.array` itself — returns an `LLMCallable` bound to the dotted path
`"numpy.foo"`. Calling it sends the backend a prompt like:

```
numpy.add([1, 2, 3], [4, 5, 6])
```

and parses the JSON result back into an `AIArray` (or a scalar). `AIArray`
forwards every method and dunder op the same way, including `tolist()`. The
only things that stay local are object-protocol dunders (`__repr__`,
`__len__`, `__iter__`, `__hash__`) and the `shape` / `ndim` properties —
routing those would make `print(arr)` an LLM round-trip, which is too cursed
even for this.

## Supported backends

| Backend     | Provider  | Default model         | Install                       |
| ----------- | --------- | --------------------- | ----------------------------- |
| `anthropic` | Anthropic | `claude-haiku-4-5`    | `pip install "numpai[anthropic]"` |
| `openai`    | OpenAI    | `gpt-4o-mini`         | `pip install "numpai[openai]"`    |
| `gemini`    | Google    | `gemini-2.5-flash`    | `pip install "numpai[gemini]"`    |
| `fake`      | (testing) | —                     | built-in                      |

## Configuration (env vars)

| Variable                            | Effect                                                                                |
| ----------------------------------- | ------------------------------------------------------------------------------------- |
| `ANTHROPIC_API_KEY`                 | Use the Anthropic backend (default if set).                                           |
| `OPENAI_API_KEY`                    | Use the OpenAI backend (fallback).                                                    |
| `GEMINI_API_KEY` / `GOOGLE_API_KEY` | Use the Gemini backend (fallback).                                                    |
| `NUMPAI_BACKEND`                    | Force `anthropic` \| `openai` \| `gemini` \| `fake`.                                  |
| `NUMPAI_MODEL`                      | Override default model (`claude-haiku-4-5` / `gpt-4o-mini` / `gemini-2.5-flash`).     |
| `NUMPAI_NO_CACHE`                   | Disable the on-disk response cache.                                                   |
| `NUMPAI_VERIFY`                     | Run every op through real numpy too; warn on disagreement. Highly recommended.        |

## Verification mode

```bash
NUMPAI_VERIFY=1 python -c "import numpai as np; print(np.array([1,2,3]).sum())"
```

If the LLM hallucinates, you'll get a `UserWarning` with both answers side by
side. This is the funniest feature, and also the only honest one.
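A rough sketch of what verification mode could do internally (the helper name and signature are assumptions, not numpai's actual code; it needs real numpy installed, which is why numpy sits in the `dev` extra):

```python
import warnings
import numpy as real_np  # the thing being cosplayed

def check_against_numpy(path, args, llm_result):
    """Recompute `path(*args)` with real numpy; warn if the LLM disagreed."""
    fn = real_np
    for part in path.split(".")[1:]:   # "numpy.linalg.inv" -> real_np.linalg.inv
        fn = getattr(fn, part)
    truth = fn(*args)
    if not real_np.array_equal(truth, llm_result):
        warnings.warn(f"{path}: LLM said {llm_result!r}, numpy says {truth!r}")
    return llm_result                  # the LLM's answer still wins, of course
```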

## Tests

```bash
pip install -e ".[dev]"
pytest -q
```

Tests use a deterministic `FakeBackend` and never call the real APIs.
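A test double in this spirit can be very small (illustrative; the real `FakeBackend` lives in the test suite and its interface may differ):

```python
class FakeBackend:
    """Deterministic stand-in for an LLM: maps exact prompts to canned replies."""

    def __init__(self, canned):
        self.canned = dict(canned)

    def complete(self, prompt):
        # A KeyError here means a test asked something unscripted, which is
        # exactly the failure you want in a deterministic suite.
        return self.canned[prompt]
```

Tests can then assert on exact prompts and replies with no network access and no API keys.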

## FAQ

**Is this real AI?**
Yes. A frontier LLM is performing every computation. By any reasonable
definition, your product now uses AI.

**Is it correct?**
Often. Set `NUMPAI_VERIFY=1` if you'd like to know which times.

**Will this make my product slower?**
Approximately ten thousand times slower, yes. But:

- Your latency dashboards have an explanation now ("we route through GenAI").
- Your eng team has a reason to talk about "model selection."
- "p99" becomes a fascinating discussion topic with investors.

**Will this make my cloud bill bigger?**
Yes. Reframe this as "AI infrastructure spend" on the next earnings call.
Multiples expand.

**Does this comply with our SOC 2 / ISO 27001 / FedRAMP posture?**
No. Do not ship this.

**Can I use this for safety-critical applications?**
No. Please do not.

**My PM wants to know if we can put "AI-powered" on the website.**
Run `pip install numpai`. Then yes.

## Disclaimer

`numpai` is a satire of the practice of bolting "AI" onto products that do
not need or benefit from it. Do not use this in production. Do not use this
in development either. Do not use this. The compute cost of this package's
"hello world" is greater than running the real thing for a year.

If you ship `numpai` to a paying customer and they notice, that's on you.

## License

MIT — see [LICENSE](LICENSE).
