Metadata-Version: 2.4
Name: gwanjong-mcp
Version: 0.3.0
Summary: Stateful Pipeline MCP server for AI social agents — engage developer communities with minimal token usage
Project-URL: Homepage, https://github.com/SonAIengine/gwanjong-mcp
Project-URL: Repository, https://github.com/SonAIengine/gwanjong-mcp
Project-URL: Issues, https://github.com/SonAIengine/gwanjong-mcp/issues
Author-email: Son Seong Joon <sonsj97@plateer.com>
License-Expression: MIT
License-File: LICENSE
Keywords: agent,bluesky,devto,mcp,reddit,social,twitter
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.10
Requires-Dist: devhub-social[all]>=0.1.0
Requires-Dist: mcp-pipeline>=0.1.0
Requires-Dist: python-dotenv>=1.0.0
Provides-Extra: all
Requires-Dist: aiohttp>=3.9.0; extra == 'all'
Requires-Dist: anthropic>=0.40.0; extra == 'all'
Requires-Dist: asyncpraw>=7.7.0; extra == 'all'
Requires-Dist: atproto>=0.0.55; extra == 'all'
Requires-Dist: tweepy>=4.14.0; extra == 'all'
Provides-Extra: autonomous
Requires-Dist: anthropic>=0.40.0; extra == 'autonomous'
Provides-Extra: bluesky
Requires-Dist: atproto>=0.0.55; extra == 'bluesky'
Provides-Extra: dashboard
Requires-Dist: aiohttp>=3.9.0; extra == 'dashboard'
Provides-Extra: dev
Requires-Dist: mypy>=1.8.0; extra == 'dev'
Requires-Dist: playwright>=1.40.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.5.0; extra == 'dev'
Provides-Extra: devto
Provides-Extra: reddit
Requires-Dist: asyncpraw>=7.7.0; extra == 'reddit'
Provides-Extra: twitter
Requires-Dist: tweepy>=4.14.0; extra == 'twitter'
Description-Content-Type: text/markdown

<div align="center">

# gwanjong-mcp

**Stateful Pipeline MCP server for AI social agents.**

Engage developer communities authentically. Comment to connect, post to promote.

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![Test](https://github.com/SonAIengine/gwanjong-mcp/actions/workflows/test.yml/badge.svg)](https://github.com/SonAIengine/gwanjong-mcp/actions/workflows/test.yml)
[![Lint](https://github.com/SonAIengine/gwanjong-mcp/actions/workflows/lint.yml/badge.svg)](https://github.com/SonAIengine/gwanjong-mcp/actions/workflows/lint.yml)
[![PyPI](https://img.shields.io/pypi/v/gwanjong-mcp)](https://pypi.org/project/gwanjong-mcp/)

</div>

---

## Quick Start

```bash
# 1. Install
pip install "gwanjong-mcp[all]"

# 2. Configure at least one platform
mkdir -p ~/.gwanjong
cat > ~/.gwanjong/.env << 'EOF'
DEVTO_API_KEY=your_key_here
EOF

# 3. Verify setup
gwanjong-mcp  # starts MCP server (use with Claude Code, Cursor, etc.)
```

**With Claude Code:**

```bash
claude mcp add gwanjong-mcp -- gwanjong-mcp
claude
> "Find interesting MCP discussions and leave a helpful comment"
```

**Autonomous mode (no LLM client needed):**

```bash
pip install "gwanjong-mcp[all,autonomous]"
gwanjong-daemon --topics "MCP,LLM" --dry-run --max-cycles 1
```

See [`.env.example`](.env.example) for all configuration options.

---

## Philosophy

**Two modes, one goal: building genuine presence in developer communities.**

| Mode | Action | Goal |
|------|--------|------|
| **Comment** | Reply to others' posts | Earn reputation through helpful, authentic engagement. **No self-promotion.** |
| **Post** | Publish original content | Share your projects, write-ups, and announcements. This is where promotion lives. |

Comments are for **giving value** — answering questions, sharing insights, joining discussions. The community notices when someone is genuinely helpful. Posts are for **showing your work** — project launches, technical deep dives, lessons learned.

## Why This Exists

Typical MCP servers expose CRUD tools and let the LLM orchestrate everything. Leaving a single comment takes 9+ tool calls and 9+ LLM round trips, with the full tool-description list resent on every call.

```
Traditional MCP (14 tools, 9+ round trips):
LLM → list → LLM → trending → LLM → search → LLM → analyze → LLM → get_post
→ LLM → get_comments → LLM → preview → LLM → write → LLM

gwanjong-mcp (5 tools, 3 round trips):
LLM → scout → LLM → draft → LLM (generates content) → strike → done
```

### Design Principles

1. **Minimal tools** — 5 total. Tool descriptions are included in every system prompt, so fewer = cheaper + more accurate.
2. **Server-side state** — Scout results are cached on the server. The LLM doesn't relay data between tools.
3. **Server as cerebellum** — Fetching, filtering, scoring, and analysis happen inside the server. The LLM only handles judgment and content generation.
4. **Compressed returns** — Never dump 20 raw posts. The server scores and returns the top N as summaries.
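Principles 2 and 4 can be illustrated with a small sketch. This is not the server's actual code — the scoring weights and field names are hypothetical — but the shape is the point: score everything server-side, then hand the LLM only compact summaries of the top N.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    platform: str
    title: str
    comments: int
    relevance: float  # hypothetical topical-match score in [0, 1]

def top_opportunities(posts: list[Post], n: int = 3) -> list[dict]:
    """Score server-side and return only compact summaries (principle 4)."""
    # Hypothetical scoring: topical relevance plus a capped activity boost.
    scored = sorted(
        posts,
        key=lambda p: p.relevance + min(p.comments, 50) / 100,
        reverse=True,
    )
    # The LLM never sees raw post bodies here -- just short summaries.
    return [
        {"id": p.id, "platform": p.platform, "title": p.title,
         "relevance": round(p.relevance, 2), "comments": p.comments}
        for p in scored[:n]
    ]
```

Returning dicts this small keeps a scout round trip near the ~200-token budget shown in the pipeline flow below.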

## MCP Tools

| Tool | Role | What happens inside |
|------|------|---------------------|
| **`gwanjong_setup`** | Onboarding | Check platform status → guide API key setup → save + test connection |
| **`gwanjong_scout`** | Reconnaissance | Trending + search + analyze + score → return top N opportunities |
| **`gwanjong_draft`** | Context gathering | Fetch target post + comment tree + tone analysis → return context summary |
| **`gwanjong_strike`** | Execution | Post comment/article/cross-post → return result URL |
| **`_status`** | Pipeline state | Show state fields + available/blocked tools (auto-generated by mcp-pipeline) |

### Pipeline Flow

```
scout(topic, platforms)
  │  stores → opportunities
  │  Server internally: fetch trending + search + score + filter
  │  Returns: top N scored opportunities (~200 tokens)
  ▼
draft(opportunity_id)
  │  requires → opportunities
  │  stores → contexts
  │  Server internally: fetch post + comments + analyze tone
  │  Returns: context summary + suggested approach (~300 tokens)
  ▼
  LLM generates content based on context
  ▼
strike(opportunity_id, action, content)
     requires → contexts
     Server internally: write via platform API + record history
     Returns: { url, status }
```
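The `stores`/`requires` gating above is provided by mcp-pipeline. Its real decorator API is not reproduced here, but a minimal stand-in conveys the mechanism — a tool is blocked until the state fields it requires have been stored by an earlier tool:

```python
class PipelineState:
    """Minimal stand-in for mcp-pipeline's stateful gating (hypothetical API)."""
    def __init__(self):
        self.fields: dict[str, object] = {}

    def call(self, tool, requires=(), stores=None, **kwargs):
        missing = [f for f in requires if f not in self.fields]
        if missing:
            return {"error": f"blocked: run the tool that stores {missing} first"}
        result = tool(self.fields, **kwargs)
        if stores:
            self.fields[stores] = result  # server-side; never relayed by the LLM
        return result

def scout(state, topic):
    # Server-side: fetch trending + search + score; return top-N summaries.
    return [{"id": "opp_0", "title": f"{topic} thread", "relevance": 0.9}]

def draft(state, opportunity_id):
    # Server-side: fetch post + comment tree; return a context summary.
    opp = next(o for o in state["opportunities"] if o["id"] == opportunity_id)
    return {"title": opp["title"], "tone": "technical"}

state = PipelineState()
blocked = state.call(draft, requires=("opportunities",), opportunity_id="opp_0")
state.call(scout, stores="opportunities", topic="MCP")
ctx = state.call(draft, requires=("opportunities",), stores="contexts",
                 opportunity_id="opp_0")
```

Calling `draft` before `scout` returns a blocked-tool error instead of bad data, which is what the auto-generated `_status` tool surfaces to the LLM.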

### Example: Commenting (Engagement)

```
User: "Find interesting MCP discussions and join in"

[1] scout(topic="MCP server", platforms=["devto", "reddit"])
    → Server scans trending + search across platforms → scores → returns top 3
    {
      "opportunities": [
        {"id": "opp_0", "platform": "devto",
         "title": "Best MCP servers for productivity?",
         "relevance": 0.91, "comments": 42,
         "reason": "Active discussion, directly relevant"}
      ]
    }

[2] draft(opportunity_id="opp_0")
    → Server fetches full post + comment tree + tone
    {
      "title": "Best MCP servers for productivity?",
      "body_summary": "...",
      "top_comments": ["...", "..."],
      "tone": "technical, recommendation-seeking",
      "suggested_approach": "Share genuine experience, no self-promo"
    }
    → LLM crafts a helpful, authentic reply

[3] strike(opportunity_id="opp_0", action="comment", content="...")
    → {"url": "https://dev.to/.../comment/...", "status": "posted"}
```

### Example: Posting (Promotion)

```
User: "Write a post about mcp-pipeline on Dev.to"

[1] scout(topic="MCP token optimization", platforms=["devto"])
    → Find what's trending to inform angle and timing

[2] draft(opportunity_id="opp_0")
    → Gather context on existing coverage

[3] LLM writes an original article about the project

[4] strike(opportunity_id="opp_0", action="post", content="...")
    → {"url": "https://dev.to/sonaiengine/...", "status": "posted"}
```

## Supported Platforms

| Platform | Protocol | Auth |
|----------|----------|------|
| **Dev.to** | REST API (httpx) | API Key |
| **Bluesky** | AT Protocol | App Password |
| **Twitter/X** | OAuth 1.0a (tweepy) | API Key + Token |
| **Reddit** | OAuth2 (asyncpraw) | Client ID + Secret |

Only platforms with configured API keys are activated. Others are silently skipped.
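Activation is driven entirely by which credentials are present. A sketch of the idea — the env-var names match `.env.example`, but the function itself is illustrative, not the server's actual code:

```python
import os

# Each platform and the env vars it needs before it is activated.
REQUIRED_KEYS = {
    "devto": ["DEVTO_API_KEY"],
    "bluesky": ["BLUESKY_HANDLE", "BLUESKY_APP_PASSWORD"],
    "twitter": ["TWITTER_API_KEY", "TWITTER_API_SECRET",
                "TWITTER_ACCESS_TOKEN", "TWITTER_ACCESS_SECRET"],
    "reddit": ["REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
               "REDDIT_USERNAME", "REDDIT_PASSWORD"],
}

def active_platforms(env=os.environ) -> list[str]:
    """Return platforms whose credentials are all set; the rest are skipped."""
    return [name for name, keys in REQUIRED_KEYS.items()
            if all(env.get(k) for k in keys)]
```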

## Install

```bash
# All platforms
pip install "gwanjong-mcp[all]"

# Specific platforms
pip install "gwanjong-mcp[devto]"
pip install "gwanjong-mcp[bluesky]"
pip install "gwanjong-mcp[twitter]"
pip install "gwanjong-mcp[reddit]"

# Development
git clone https://github.com/SonAIengine/gwanjong-mcp.git
cd gwanjong-mcp
pip install -e ".[all,dev]"
```

## Environment Variables

Copy `.env.example` to `.env` and fill in the platforms you use:

```env
# Dev.to — https://dev.to/settings/extensions
DEVTO_API_KEY=

# Bluesky — https://bsky.app/settings → App Passwords
BLUESKY_HANDLE=your.handle.bsky.social
BLUESKY_APP_PASSWORD=

# Twitter/X — https://developer.x.com/en/portal/dashboard
TWITTER_API_KEY=
TWITTER_API_SECRET=
TWITTER_ACCESS_TOKEN=
TWITTER_ACCESS_SECRET=

# Reddit — https://www.reddit.com/prefs/apps
REDDIT_CLIENT_ID=
REDDIT_CLIENT_SECRET=
REDDIT_USERNAME=
REDDIT_PASSWORD=
```

## Claude Code Integration

```bash
# Register MCP server
claude mcp add gwanjong-mcp -- gwanjong-mcp

# Use with the gwanjong agent (~/.claude/agents/gwanjong.md)
claude agent gwanjong
> "Find interesting AI agent discussions and leave helpful comments"
> "Write a Dev.to post about mcp-pipeline"
```

## Approval Workflow

Autonomous mode can stop before posting and enqueue generated content for review.

```bash
# Queue content instead of posting immediately
gwanjong-daemon --require-approval --max-cycles 1

# Review pending items
gwanjong-approval list
gwanjong-approval show 1

# Approve and execute strike immediately
gwanjong-approval approve 1

# Reject without posting
gwanjong-approval reject 2
```
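The queue semantics behind those commands can be sketched in a few lines (illustrative only — the real daemon persists the queue and executes the strike when an item is approved):

```python
from dataclasses import dataclass, field

@dataclass
class Pending:
    item_id: int
    action: str      # "comment" or "post"
    content: str
    status: str = "pending"

@dataclass
class ApprovalQueue:
    items: list[Pending] = field(default_factory=list)

    def enqueue(self, action: str, content: str) -> int:
        self.items.append(Pending(len(self.items) + 1, action, content))
        return self.items[-1].item_id

    def approve(self, item_id: int) -> Pending:
        # On approval the daemon would execute the strike immediately.
        item = next(i for i in self.items if i.item_id == item_id)
        item.status = "approved"
        return item

    def reject(self, item_id: int) -> None:
        next(i for i in self.items if i.item_id == item_id).status = "rejected"

q = ApprovalQueue()
q.enqueue("comment", "Helpful reply...")
q.enqueue("post", "Launch write-up...")
q.approve(1)
q.reject(2)
```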

If you run the dashboard, pending approvals are also visible and actionable from the UI:

```bash
gwanjong-dashboard
# open http://localhost:8585
```

## Architecture

```
gwanjong-mcp/
├── pyproject.toml
├── run.py                         # Direct execution entry point
└── gwanjong_mcp/
    ├── __init__.py
    ├── __main__.py                # python -m gwanjong_mcp
    ├── server.py                  # PipelineMCP + 5 tools + GwanjongState
    ├── setup.py                   # Platform onboarding (guide/save/test)
    └── pipeline.py                # scout/draft/strike pipeline logic
```

### Dependency Structure

```
┌─────────────────────────────────────────────────┐
│  Claude Agent (gwanjong.md)                     │
│  Persona · Content generation · Final judgment  │
└──────────────┬──────────────────────────────────┘
               │ 5 tools
┌──────────────▼──────────────────────────────────┐
│  gwanjong-mcp (this project)                    │
│  scout/draft/strike pipeline logic              │
│                                                 │
│  Dependencies:                                  │
│  ├── mcp-pipeline  — Stateful MCP framework     │
│  ├── devhub-social — Multi-platform social API  │
│  └── graph-tool-call — Content search engine    │
└─────────────────────────────────────────────────┘
```

| Package | Role | Install |
|---------|------|---------|
| [devhub-social](https://github.com/SonAIengine/devhub-social) | Unified async client for Dev.to, Bluesky, Twitter, Reddit | `pip install "devhub-social[all]"` |
| [mcp-pipeline](https://github.com/SonAIengine/mcp-pipeline) | Type-safe state + declarative `stores`/`requires` tool chaining | `pip install mcp-pipeline` |
| [graph-tool-call](https://github.com/SonAIengine/graph-tool-call) | BM25 + graph expansion + wRRF content scoring | `pip install graph-tool-call` |

## Development

```bash
./.venv/bin/python -m pytest -q                   # Default test suite (excludes integration tests)
./.venv/bin/python -m pytest -m integration -q    # Playwright/network integration tests
./.venv/bin/python -m mypy gwanjong_mcp/          # Type check
./.venv/bin/python -m ruff check gwanjong_mcp/    # Lint
./.venv/bin/python run.py                         # Local server
```

## License

[MIT](LICENSE)
