Metadata-Version: 2.4
Name: memnixa
Version: 0.1.0
Summary: A minimal Python agent runtime with gateway and provider compatibility.
Author: mcallzbl
Requires-Python: >=3.11
Requires-Dist: lark-oapi>=1.5.3
Requires-Dist: prompt-toolkit>=3.0.52
Requires-Dist: qq-botpy>=1.2.1
Requires-Dist: rich>=14.3.3
Description-Content-Type: text/markdown

# Memnixa English Docs

[Back to Index](README.md) | [中文文档](README.zh-CN.md) | [Architecture](docs/architecture.en.md)

## Overview

Memnixa is a Python agent runtime built around three ideas:

- Borrow the lightweight execution loop and persistence style from `nanobot`
- Borrow the provider compatibility and gateway/runtime separation from `openclaw`
- Keep the first version runnable while supporting CLI, gateway, SQLite persistence, multi-provider config, and per-session model switching

## Current Capabilities

- Build runtime context from workspace data, system prompt, tools, and session history
- Call an OpenAI-compatible `/chat/completions` endpoint
- Execute tools and feed tool results back into the model loop
- Run directly as a CLI or as a standalone gateway
- Let the gateway receive HTTP, Feishu, and QQ traffic
- Store sessions, messages, and session metadata in SQLite
- Configure multiple providers and switch models per session
- Support OpenClaw-style local skills discovery and on-demand loading
- Support canonical identity binding, with local CLI always treated as the owner
- Let owner-bound direct messages share context with the local `main` session
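
The execution loop behind these capabilities is the standard OpenAI-compatible tool-calling pattern. The sketch below is illustrative only, not Memnixa's actual code: it calls `/chat/completions`, runs any requested tools, and feeds the results back until the model answers with plain text.

```python
# Illustrative sketch of the agent loop (not Memnixa's internals).
import json
import urllib.request


def chat(messages, tools, api_base, api_key, model):
    """One call to an OpenAI-compatible /chat/completions endpoint."""
    body = json.dumps({"model": model, "messages": messages, "tools": tools})
    request = urllib.request.Request(
        f"{api_base}/chat/completions",
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]


def run_turn(messages, tools, execute_tool, **cfg):
    """Loop until the model stops requesting tools, then return its text."""
    while True:
        message = chat(messages, tools, **cfg)
        messages.append(message)
        if not message.get("tool_calls"):
            return message["content"]
        for call in message["tool_calls"]:
            args = json.loads(call["function"]["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": execute_tool(call["function"]["name"], args),
            })
```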

## Installation

```bash
uv tool install --editable .
```

After installation:

```bash
memnixa --help
```

You can also use the `Makefile` shortcuts:

```bash
make help
make sync
make install
make run
```

## Quick Start

1. Sync the default config into `~/.memnixa/config.json`

```bash
memnixa config sync
```

2. Check or open the config file

```bash
memnixa config path
memnixa config open
```

3. Add a model after installation

The easiest path is to add a provider profile with the built-in command rather than editing the JSON by hand.

For OpenAI:

```bash
memnixa config add-model \
  --provider openai_chatgpt \
  --api-key YOUR_OPENAI_API_KEY \
  --model gpt-4.1-mini \
  --id openai \
  --label "OpenAI GPT-4.1 Mini" \
  --set-default
```

For Zhipu Coding Plan:

```bash
memnixa config add-model \
  --provider zhipu_coding_plan \
  --api-key YOUR_ZHIPU_API_KEY \
  --model glm-4.7 \
  --id zhipu \
  --label "Zhipu GLM-4.7" \
  --set-default
```

For a custom OpenAI-compatible endpoint:

```bash
memnixa config add-model \
  --provider custom_openai_compatible \
  --api-key YOUR_API_KEY \
  --api-base http://localhost:8000/v1 \
  --model your-model-name \
  --id local-model \
  --label "Local Compatible Model" \
  --set-default
```

After this, the config is written into `~/.memnixa/config.json` or the current project's `config.json`, and Memnixa can connect to that model directly.
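
For example, the OpenAI command above would produce a provider entry shaped roughly like this (a sketch; exact field order and any defaults may differ):

```json
{
  "default_provider": "openai",
  "providers": [
    {
      "id": "openai",
      "preset": "openai_chatgpt",
      "api_key": "YOUR_OPENAI_API_KEY",
      "model": "gpt-4.1-mini",
      "label": "OpenAI GPT-4.1 Mini"
    }
  ]
}
```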

4. Optionally adjust the config manually for advanced fields

5. Start the CLI

```bash
memnixa
```

6. Run one message

```bash
memnixa --message "Summarize this repository"
```

7. Start the gateway

```bash
memnixa gateway
```

8. Inspect the current identity inside a channel conversation

First, send this to Memnixa from Feishu or QQ:

```text
/whoami
```

It returns the identity resolved for the current incoming message, including:

- `identity_status`
- `actor_user_id`
- `actor_external_id`
- `actor_is_owner`
- `session_id`

The most important field here is `actor_external_id`, because you will use it for binding.
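
A reply might look roughly like this. The layout and values shown are hypothetical; only the field names come from the list above (Feishu open ids typically start with `ou_`):

```text
identity_status:   unbound
actor_user_id:     <none yet>
actor_external_id: ou_xxxxxxxxxxxx
actor_is_owner:    false
session_id:        <channel session id>
```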

9. Bind that channel identity to the owner locally

For Feishu:

```bash
memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID
```

For QQ:

```bash
memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID
```

After this, the system layer treats that external identity as the owner. The model does not decide this on its own.

10. Continue chatting from the channel DM or the local CLI

Current routing rules:

- Local `cli` is always treated as the owner and always enters `main`
- Direct messages bound to the owner also enter `main`
- Unbound identities or group messages do not merge into `main`

So once you bind a Feishu or QQ direct-message identity to the owner, the local CLI and that direct-message thread share the same context.

11. Inspect SQLite data

```bash
memnixa data dump
memnixa data dump --session-id main
memnixa data export-memory
```

Common `make` targets:

- `make help`
- `make sync`
- `make sync-dev`
- `make install`
- `make reinstall`
- `make run`
- `make gateway`
- `make test`
- `make fmt`
- `make lint`

## Config Shape

Key fields (a combined example follows the channel list):

- `default_provider`: default provider id for new sessions
- `providers`: list of configured model endpoints
- `providers[].id`: stable id used by `/use <id>`
- `providers[].preset`: provider preset name
  Built-in presets include `openai_chatgpt`, `zhipu_coding_plan`, `siliconflow`, and `custom_openai_compatible`
- `providers[].api_key`: provider API key
- `providers[].api_base`: optional compatible base URL
- `providers[].model`: model name
- `providers[].label`: display label
- `providers[].context_window_tokens`: optional per-model context window limit
- `providers[].max_output_tokens`: optional per-model output reserve
- `workspace`: workspace used by tools and context
- `database_path`: SQLite database path
- `cli_via`: `local` / `gateway` / `auto`
- `context_window_tokens`: global default context window
- `max_output_tokens`: global default output reserve
- `context_warn_threshold_ratio`: warning threshold when nearing the window
- `context_compact_threshold_ratio`: threshold that triggers compaction
- `context_safety_margin_tokens`: conservative input headroom
- `context_compaction_max_rounds`: maximum preflight compaction rounds
- `memory.enabled`: enable long-term memory extraction and retrieval
- `memory.extraction_provider_id`: required when memory is enabled; must point to a valid configured provider id
- `memory.extraction_timeout_seconds`: dedicated timeout for the memory extraction call
- `memory.retrieval_limit`: maximum number of memory hits injected or returned per search
- `memory.max_injected_chars`: maximum characters injected from recalled memory into the model context

Channel fields:

- `channels.feishu.enabled`
- `channels.feishu.app_id`
- `channels.feishu.app_secret`
- `channels.feishu.group_policy`
- `channels.qq.enabled`
- `channels.qq.app_id`
- `channels.qq.secret`
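
Putting the key fields and channel fields together, a minimal config might look like this hedged sketch (values are illustrative, optional fields can be omitted, and allowed `group_policy` values are not enumerated here):

```json
{
  "default_provider": "1",
  "providers": [
    {
      "id": "1",
      "preset": "openai_chatgpt",
      "api_key": "YOUR_API_KEY",
      "model": "gpt-4.1-mini",
      "label": "OpenAI GPT-4.1 Mini"
    }
  ],
  "workspace": "~/memnixa-workspace",
  "database_path": "~/.memnixa/memnixa.db",
  "cli_via": "local",
  "context_window_tokens": 128000,
  "max_output_tokens": 4096,
  "channels": {
    "feishu": {
      "enabled": true,
      "app_id": "YOUR_FEISHU_APP_ID",
      "app_secret": "YOUR_FEISHU_APP_SECRET",
      "group_policy": "YOUR_GROUP_POLICY"
    },
    "qq": {
      "enabled": false,
      "app_id": "YOUR_QQ_APP_ID",
      "secret": "YOUR_QQ_SECRET"
    }
  }
}
```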

## Dynamic Model Switching

Each provider has a unique id, for example:

```json
{
  "default_provider": "1",
  "providers": [
    { "id": "1", "preset": "zhipu_coding_plan", "api_key": "...", "model": "glm-4.7" },
    { "id": "2", "preset": "openai_chatgpt", "api_key": "...", "model": "gpt-4.1-mini" },
    { "id": "3", "preset": "siliconflow", "api_key": "...", "model": "deepseek-ai/DeepSeek-V3" }
  ]
}
```

Inside a conversation:

```text
/models
/use 2
/whoami
```

- `/models` lists configured models
- `/use <id>` switches only the current session
- `/whoami` shows the identity resolution result for the current message
- The selected provider id is stored in SQLite session metadata

## Long-Term Memory

Memnixa now supports a first-pass long-term memory layer on top of the existing
session history and compaction summary.

Design:

- Session history remains the source of short-term continuity
- `session_compactions` still store compacted session summaries only
- Long-term memory is stored separately in SQLite `memory_items`
- Memory is scoped by `self`, `agent`, `user`, or `session`

When memory is enabled:

- Memnixa injects recalled durable memory as an extra system message before the active turn
- The model can actively call `memory_search` and `memory_get`
- After a turn finishes, Memnixa calls the configured memory extraction provider to propose durable facts, then validates and stores them

### Config

Add a dedicated memory block:

```json
{
  "memory": {
    "enabled": true,
    "extraction_provider_id": "8",
    "extraction_timeout_seconds": 20,
    "retrieval_limit": 5,
    "max_injected_chars": 2400
  }
}
```

Rules:

- `memory.enabled = true` requires `memory.extraction_provider_id`
- `memory.extraction_provider_id` must match a real id in `providers[]`
- Use a small, inexpensive model, such as an 8B-class model, as the extraction provider

### Suggested Multi-Provider Setup

Use one main model for the normal agent loop and one smaller model for memory extraction:

```json
{
  "default_provider": "1",
  "providers": [
    {
      "id": "1",
      "preset": "zhipu_coding_plan",
      "api_key": "YOUR_MAIN_KEY",
      "model": "glm-4.7"
    },
    {
      "id": "8",
      "preset": "custom_openai_compatible",
      "api_key": "YOUR_MEMORY_KEY",
      "api_base": "http://localhost:11434/v1",
      "model": "qwen-memory-8b",
      "label": "Memory Extractor 8B"
    }
  ],
  "memory": {
    "enabled": true,
    "extraction_provider_id": "8",
    "extraction_timeout_seconds": 20
  }
}
```

### Memory Tools

When memory is enabled, the runtime registers:

- `memory_search`: search durable memories relevant to the current request
- `memory_get`: inspect one memory item returned by `memory_search`
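
In OpenAI-compatible tool calling, a model invocation of `memory_search` would look roughly like this; the `query` parameter name is an assumption for illustration, not a documented schema:

```json
{
  "type": "function",
  "function": {
    "name": "memory_search",
    "arguments": "{\"query\": \"owner's release preferences\"}"
  }
}
```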

### Export

You can export all stored long-term memory items from the CLI:

```bash
memnixa data export-memory
```

Or through the gateway:

```text
GET /v1/memory/export
```
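
For example, assuming the gateway is reachable locally (substitute your actual host and port):

```bash
curl "http://<gateway-host>/v1/memory/export"
```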

### What Gets Stored

The extractor is expected to produce durable facts such as:

- preferences
- constraints
- corrections
- goals
- project facts
- self-model facts
- decisions
- todos
- user profile details

Sensitive data such as API keys, passwords, cookies, and tokens is filtered out and must not be stored as memory items.

The `self_model` type is stored under the fixed scope `self:memnixa`. Use it for the agent's stable identity, role, capability boundaries, and long-lived behavior contract. It applies across users, sessions, and workspaces.
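
As an illustration only, a stored `self_model` item might look like this; the field names (`scope`, `type`, `content`) are assumptions about the `memory_items` shape, not a documented schema:

```json
{
  "scope": "self:memnixa",
  "type": "self_model",
  "content": "Memnixa acts as the owner's agent runtime and never stores credentials as memories."
}
```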

## Skills

Memnixa now supports a first-pass OpenClaw-style local skills system.

Design:

- The runtime discovers local skill directories containing `SKILL.md`
- The system prompt includes only the available skill list, not every skill body
- The model reads a selected skill on demand through `skill_read`
- The model can inspect what is available with `skill_list`

Currently supported discovery roots, from highest to lowest precedence:

- `<workspace>/skills`
- `<workspace>/.agents/skills`
- `~/.agents/skills`
- `~/.memnixa/skills`

Each skill directory should contain at least a `SKILL.md`. AgentSkills-style frontmatter is recommended:

```md
---
name: release-checklist
description: Use when preparing a release checklist or release notes.
---

# Release Checklist

Always confirm version, changelog, and tests.
```
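
A matching on-disk layout might look like this; the `references/` subdirectory and file name are illustrative, since any bundled file can be read through `skill_read`:

```text
~/.memnixa/skills/
└── release-checklist/
    ├── SKILL.md
    └── references/
        └── steps.md
```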

Runtime skill tools:

- `skill_list`: list the currently available skills
- `skill_read`: read the chosen skill's `SKILL.md` or another bundled reference file

## Identity Binding

If you want one Feishu or QQ direct-message conversation to share the owner's context with the local CLI, use this flow:

1. Send `/whoami` in that direct-message conversation
2. Copy the returned `actor_external_id`
3. Run `memnixa identity bind-owner --channel <channel> --external-id <id>` locally
4. Later direct messages from that bound identity are routed into the owner `main` session

Available commands:

```bash
memnixa identity bind-owner --channel feishu --external-id YOUR_FEISHU_OPEN_ID
memnixa identity bind-owner --channel qq --external-id YOUR_QQ_EXTERNAL_ID
memnixa identity list
```

Notes:

- `bind-owner` is a local CLI management command used to bind one external identity to the owner
- `identity list` prints canonical users and stored external identity bindings
- Bind direct-message identities before binding any group-side identities

## User Home

The default user-level data directory is `~/.memnixa`:

- `~/.memnixa/config.json`
- `~/.memnixa/memnixa.db`
- `~/.memnixa/cli_history`

Without `--config`, config lookup order is:

1. `./config.json`
2. `~/.memnixa/config.json`

## Notes

- The model layer currently focuses on OpenAI-compatible APIs first
- Built-in tools are `list_dir`, `read_file`, `write_file`, and `run_command`
- When session history approaches the context budget, Memnixa compacts older turns into a summary and keeps the recent active tail
- If the provider returns a direct context overflow error, Memnixa tries to compact and retry automatically
- `memnixa` starts the interactive CLI by default
- `memnixa gateway` starts HTTP and any enabled channel listeners
