Metadata-Version: 2.4
Name: nkit-framework
Version: 0.3.0
Summary: Production-Ready Safety Layer for Agentic AI - Live observability, pre-execution safety, and audit trails
Author-email: Navaluri Balaji <navuluribalaji@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/NavuluriBalaji/NKit
Project-URL: Repository, https://github.com/NavuluriBalaji/NKit.git
Project-URL: Documentation, https://nkit.ai
Project-URL: Bug Tracker, https://github.com/NavuluriBalaji/NKit/issues
Project-URL: Changelog, https://github.com/NavuluriBalaji/NKit/releases
Keywords: ai,agents,react,safety,compliance,llm,observability,audit,enterprise
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.28.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.20.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: flake8>=5.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Provides-Extra: llm
Requires-Dist: openai>=1.0; extra == "llm"
Requires-Dist: anthropic>=0.7; extra == "llm"
Requires-Dist: google-generativeai>=0.3; extra == "llm"
Provides-Extra: all
Requires-Dist: nkit-framework[dev,llm]; extra == "all"
Dynamic: license-file

# NKit — Production Safety Layer for Agentic AI

[![PyPI version](https://badge.fury.io/py/nkit-framework.svg)](https://badge.fury.io/py/nkit-framework)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)

**Live observability, pre-execution safety, and audit trails for any agent framework.**

NKit is a lightweight, production-ready framework for building ReAct agents or wrapping your existing ones. Its core focus is making agents safe, compliant, and observable *before* they execute destructive actions, addressing the biggest blockers to enterprise AI adoption.

---

## 🎯 Why NKit?

Most agent frameworks focus on chaining together LLM queries but treat production safety as an afterthought. NKit was built specifically to solve three critical problems:

1. **Pre-Execution Intent Verification** — SafetyGate pauses and evaluates an agent's intent *before* execution, blocking misaligned goals or destructive actions automatically
2. **The WhyLog** — Structured JSONL audit trail capturing the exact chain of thought that led to every action
3. **Live Decision Streaming** — Real-time event bus (LiveObserver) for compliance teams to monitor decisions live

---

## ✨ Key Features

- ✅ **ReAct + PoT Reasoning Modes** — Choose between iterative (ReAct) or plan-once (PoT) execution
- ✅ **Pre-Execution Safety Gate** — Block destructive actions before they run
- ✅ **Structured Audit Trails** — JSONL logging with full reasoning chain
- ✅ **Live Event Streaming** — Real-time monitoring via LiveObserver
- ✅ **Rate Limiting & Token Tracking** — Cost control and budget monitoring
- ✅ **Memory Persistence** — JSONFileMemory for state recovery across restarts
- ✅ **Error Recovery** — Automatic retries with exponential backoff
- ✅ **Multi-Provider LLM Support** — OpenAI, Anthropic, Ollama, LM Studio, Gemini, OpenRouter
- ✅ **Enterprise Compliance Ready** — Human-in-the-Loop, audit trails, safety verification

---

## 🚀 Quick Start

### Installation

```bash
pip install nkit-framework
```

Or with all optional dependencies:

```bash
pip install "nkit-framework[all]"
```

### Basic Example

```python
from nkit.agent import Agent
from nkit.llms import OllamaLLM
from nkit.observer import LiveObserver

# Create observer for live monitoring
observer = LiveObserver()

@observer.on("tool.before")
def watch_intent(event):
    print(f"Agent attempting {event['tool_name']} because: {event['why']}")

# Create agent with local LLM
llm = OllamaLLM(model="llama3")
agent = Agent(llm=llm, observer=observer)

# Run task
result = agent.run("What are the top 3 market trends for Q2 2026?")
print(result)
```

### Production Example with Safety

```python
from nkit.agent import Agent
from nkit.llms import OpenAILLM
from nkit.memory import JSONFileMemory
from nkit.observer import LiveObserver
from nkit.safety import SafetyGate
from nkit.audit import WhyLog

# Initialize production components
memory = JSONFileMemory("./session.json")
observer = LiveObserver()
safety_gate = SafetyGate()
why_log = WhyLog("./audit.jsonl")

# LLM with rate limiting and token tracking
llm = OpenAILLM(
    model="gpt-4o",
    enable_rate_limiting=True,
    track_tokens=True
)

# Create production-ready agent
agent = Agent(
    llm=llm,
    memory=memory,
    observer=observer,
    safety_gate=safety_gate,
    why_log=why_log,
    max_steps=20,
    max_retries=3
)

# Execute with full production features
result = agent.run("Process and analyze financial data")

# Monitor costs and performance
stats = agent.get_session_stats()
print(f"Session ID: {stats['session_id']}")
print(f"Total Tokens: {stats['llm_stats']['total_tokens']}")
print(f"Cost: ${stats['llm_stats']['total_cost']:.4f}")
```

---

## 📚 Core Components

### Agent
Main orchestrator for task execution with ReAct reasoning loop.

```python
from nkit.agent import Agent

agent = Agent(
    llm=your_llm,
    max_steps=20,
    max_retries=3,
    memory=memory_store,
    observer=observer,
    safety_gate=safety_gate,
    why_log=why_log
)

result = agent.run("Your task here")
```

### Tools
Built-in tools for web search, file operations, and more.

```python
from nkit.tools import Tool, ToolRegistry

# Register custom tool
@agent.tool("calculate", "Perform calculations")
def calculate(expression: str) -> str:
    # Caution: eval on model-supplied input is unsafe; stripping builtins is a
    # minimal mitigation -- prefer a real expression parser in production.
    return str(eval(expression, {"__builtins__": {}}, {}))
```

### SafetyGate
Pre-execution verification layer that blocks dangerous operations.

```python
from nkit.safety import SafetyGate

safety = SafetyGate()
safety.whitelist_domain("api.company.com")

agent = Agent(llm=llm, safety_gate=safety)
# Destructive actions will be blocked before execution
```
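The pattern behind SafetyGate is a pre-execution check: inspect a proposed action and allow or block it before anything runs. A generic sketch of that pattern (not NKit's internal code; the pattern list and method names here are illustrative assumptions):

```python
# Illustrative pre-execution gate: inspect an action before it runs.
# Not NKit's implementation -- a minimal sketch of the concept.

class PreExecutionGate:
    def __init__(self):
        # Hypothetical denylist of destructive patterns for illustration.
        self.blocked_patterns = ["rm -rf", "DROP TABLE", "DELETE FROM"]
        self.allowed_domains = set()

    def whitelist_domain(self, domain: str) -> None:
        self.allowed_domains.add(domain)

    def check(self, tool_name: str, tool_input: str) -> bool:
        """Return True if the action may proceed, False to block it."""
        for pattern in self.blocked_patterns:
            if pattern in tool_input:
                return False
        return True

gate = PreExecutionGate()
gate.whitelist_domain("api.company.com")
print(gate.check("shell", "ls -la"))    # True (allowed)
print(gate.check("shell", "rm -rf /"))  # False (blocked)
```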

### LiveObserver
Real-time event monitoring for compliance and debugging.

```python
from nkit.observer import LiveObserver

observer = LiveObserver()

@observer.on("agent.start")
def on_start(event):
    print(f"Agent started: {event['session_id']}")

@observer.on("tool.before")
def on_tool(event):
    print(f"Executing: {event['tool_name']}")

@observer.on("agent.end")
def on_end(event):
    print(f"Completed in {event['total_steps']} steps")
```
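Decorator-based subscription like `@observer.on(...)` is a standard event-bus pattern. A minimal generic sketch of how such a bus works (not NKit's implementation):

```python
# Minimal event bus illustrating decorator-based subscription.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_name):
        """Register the decorated function as a handler for event_name."""
        def decorator(fn):
            self._handlers[event_name].append(fn)
            return fn
        return decorator

    def emit(self, event_name, payload):
        """Call every handler subscribed to event_name."""
        for fn in self._handlers[event_name]:
            fn(payload)

bus = EventBus()
received = []

@bus.on("tool.before")
def record(event):
    received.append(event["tool_name"])

bus.emit("tool.before", {"tool_name": "web_search"})
print(received)  # ['web_search']
```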

### WhyLog
Structured audit trail capturing every decision.

```python
from nkit.audit import WhyLog

why_log = WhyLog("./audit.jsonl")
agent = Agent(llm=llm, why_log=why_log)

# Audit trail automatically captured:
# - Every thought
# - Every action
# - Every result
# - Full reasoning chain
```
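Because the WhyLog is plain JSONL (one JSON object per line), it can be inspected with the standard library alone. A sketch, where the `thought`/`action` field names are assumptions based on the list above:

```python
# Read a JSONL audit trail; each non-empty line is one JSON object.
# The "thought"/"action" field names are illustrative assumptions.
import json

def read_audit(path):
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

# Demo: write two fake entries, then read them back.
with open("audit_demo.jsonl", "w") as f:
    f.write(json.dumps({"thought": "need data", "action": "web_search"}) + "\n")
    f.write(json.dumps({"thought": "summarize", "action": "finish"}) + "\n")

for entry in read_audit("audit_demo.jsonl"):
    print(entry["action"])  # web_search, then finish
```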

### Memory
Persistent state management across sessions.

```python
from nkit.memory import JSONFileMemory

memory = JSONFileMemory("./session.json")
memory.set("user_id", "alice_123")
memory.append("messages", "Hello, AI!")

# State persists across process restarts
```

### LLM Providers
Support for multiple providers with unified interface.

```python
from nkit.llms import OpenAILLM, AnthropicLLM, OllamaLLM

# Cloud providers with rate limiting
openai = OpenAILLM(model="gpt-4o", enable_rate_limiting=True)
claude = AnthropicLLM(model="claude-3-opus", enable_rate_limiting=True)

# Local provider (no rate limits)
ollama = OllamaLLM(model="llama3")
```

---

## 🔐 Production Features

### Rate Limiting
Automatic rate limiting prevents API quota exhaustion.

```python
llm = OpenAILLM(
    model="gpt-4o",
    enable_rate_limiting=True  # Enforces OpenAI's limits
)
# - 200k tokens/minute
# - 3,500 requests/minute
# - Automatic exponential backoff on 429 errors
```
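Requests-per-minute enforcement is commonly implemented as a sliding-window counter. A conceptual sketch (not NKit's implementation):

```python
# Sliding-window rate limiter sketch: allow at most `limit` calls
# per `window` seconds. Conceptual illustration only.
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of recent calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=3, window=60.0)
results = [limiter.allow(now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```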

### Token Tracking & Budgeting
Monitor costs in real-time.

```python
llm = OpenAILLM(model="gpt-4o", track_tokens=True)
result = agent.run("Your task")

stats = agent.get_session_stats()
print(f"Tokens: {stats['llm_stats']['total_tokens']}")
print(f"Cost: ${stats['llm_stats']['total_cost']:.4f}")
```
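Cost tracking reduces to multiplying token counts by per-token prices. A sketch with placeholder rates (real prices change; check the provider's pricing page):

```python
# Estimate cost from token counts. The prices below are placeholders
# for illustration, NOT current provider rates.
PRICE_PER_1K = {"prompt": 0.005, "completion": 0.015}  # hypothetical USD / 1k tokens

def estimate_cost(prompt_tokens, completion_tokens):
    return (prompt_tokens / 1000 * PRICE_PER_1K["prompt"]
            + completion_tokens / 1000 * PRICE_PER_1K["completion"])

cost = estimate_cost(prompt_tokens=12_000, completion_tokens=2_000)
print(f"${cost:.4f}")  # $0.0900
```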

### Error Recovery
Automatic retries with exponential backoff.

```python
agent = Agent(
    llm=llm,
    max_retries=3  # Retry failed tools up to 3 times
    # Automatic exponential backoff: 1s, 2s, 4s, max 60s
)
```
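The schedule in the comment above (1s, 2s, 4s, capped at 60s) is plain exponential backoff. A generic retry sketch, independent of NKit's internals:

```python
# Generic retry with exponential backoff, capped at a maximum delay.
import time

def retry(fn, max_retries=3, base_delay=1.0, max_delay=60.0):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Delay doubles each attempt, capped at max_delay.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay)

# The delay schedule for the first few attempts:
delays = [min(1.0 * 2 ** a, 60.0) for a in range(7)]
print(delays)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```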

### Compliance & Audit
Full audit trail for regulatory requirements.

```python
from nkit.audit import WhyLog

why_log = WhyLog("./audit.jsonl")
agent = Agent(llm=llm, why_log=why_log)

# Every action logged with:
# - Reasoning (thought)
# - Action taken
# - Result
# - Human approval status
# - Safety gate status
# - Full session tracking
```

---

## 📖 Documentation

- **[Getting Started Guide](https://nkit.ai/getting-started)** — Installation and first agent
- **[Production Deployment](./PRODUCTION_GUIDE.md)** — Rate limiting, monitoring, safety configuration
- **[API Reference](https://nkit.ai/api-reference)** — Complete component documentation
- **[Examples](./NKit/examples/)** — Working examples with different LLMs

---

## 🛠️ Supported LLMs

| Provider | Status | Features |
|----------|--------|----------|
| OpenAI | ✅ Production | GPT-4o, GPT-4, GPT-3.5-turbo |
| Anthropic | ✅ Production | Claude 3 Opus/Sonnet/Haiku |
| Google Gemini | ✅ Production | Gemini 2.5 Flash |
| OpenRouter | ✅ Production | All models via passthrough |
| Ollama | ✅ Production | Local models (llama3, mistral, etc.) |
| LM Studio | ✅ Production | Local fine-tuned models |

---

## 🎓 Use Cases

- **Research Automation** — Multi-step information gathering with verification
- **Data Analysis** — Tool-assisted analysis with safety constraints
- **Compliance Workflows** — Audit trail and approval tracking
- **Content Generation** — Safe, monitored content creation
- **System Automation** — Pre-execution verification for destructive operations
- **Financial Operations** — Cost tracking and budget controls

---

## 🤝 Contributing

Contributions welcome! Please see [CONTRIBUTING.md](./docs/contributing.md) for guidelines.

```bash
git clone https://github.com/NavuluriBalaji/NKit.git
cd NKit
pip install -e ".[dev]"
pytest
```

---

## 📄 License

MIT License - see [LICENSE](./LICENSE) file for details.

---

## 🙋 Support

- **Issues**: [GitHub Issues](https://github.com/NavuluriBalaji/NKit/issues)
- **Discussions**: [GitHub Discussions](https://github.com/NavuluriBalaji/NKit/discussions)
- **Email**: navuluribalaji@gmail.com

---

## 🚀 Roadmap

- [ ] Multi-agent coordination framework
- [ ] Redis memory backend for distributed state
- [ ] Advanced observability dashboards
- [ ] Plugin system for custom extensions
- [ ] Native support for function calling
- [ ] Knowledge graph integration
- [ ] Vector store integration
- [ ] Streaming response handlers

---

**Built for production. Secured by design. Observable by default.**

Version 0.3.0 — April 2026
