Metadata-Version: 2.4
Name: agent-readiness-cli
Version: 0.1.0
Summary: Score any URL for AI-agent readiness — llms.txt, JSON-LD, AI-bot robots.txt, canonical, MCP, meta, sitemap.
Author-email: GuardLabs <team@guardlabs.online>
License: MIT
Project-URL: Homepage, https://github.com/sspoisk/agent-readiness-cli
Project-URL: Documentation, https://github.com/sspoisk/agent-readiness-cli#readme
Project-URL: Repository, https://github.com/sspoisk/agent-readiness-cli
Project-URL: Issues, https://github.com/sspoisk/agent-readiness-cli/issues
Project-URL: Made by, https://guardlabs.online
Keywords: ai,llm,agent,readiness,llms.txt,json-ld,mcp,audit,seo
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Text Processing :: Markup :: HTML
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Dynamic: license-file

# agent-readiness-cli

> Score any URL for AI-agent readiness — llms.txt, JSON-LD, AI-bot robots.txt, canonical, MCP, meta, sitemap. One command, one number, no telemetry.

[![PyPI version](https://img.shields.io/pypi/v/agent-readiness-cli.svg)](https://pypi.org/project/agent-readiness-cli/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/)

## What it does

Audits a URL for how well it talks to ChatGPT, Claude, Perplexity, and other AI agents — and gives you a single 0-100 score with a per-section breakdown.

```text
$ agent-ready https://example.com
✓ llms.txt               10/15  present, 4.2 KB, 12 URLs
✓ json-ld                23/25  3 block(s), types: Article, Organization, BreadcrumbList
✗ ai-bots-robots.txt      0/20  ClaudeBot, GPTBot disallowed at root
✓ canonical+hreflang     12/15  canonical=set, hreflang langs=['en','ru']
✗ mcp-card                0/10  no /.well-known/mcp.json (optional)
✓ meta                   10/10  10/10 of common signals
✓ sitemap                 5/5   valid, 1250 URLs

  Score: 60 / 100
  Tier: C  (middling — focus on ai-bots-robots.txt, mcp-card)

  Full report:    agent-ready --full https://example.com
  Remediation:    https://guardlabs.online/whiteglove/  (paid, $99-2499)
```

## Why this exists

Every blog post about "AI SEO" tells you to "add llms.txt and JSON-LD." Nobody hands you a CLI that opens your site and tells you what's actually missing. This is that CLI.

It is intentionally:

- **Single file**, ~500 LoC. Read it. Audit the audit.
- **No telemetry.** It hits *your* URL only. No phone-home.
- **Deterministic.** Same site → same score (modulo the site changing).
- **Transparent scoring.** Every weight is in `agent_ready/cli.py`. Disagree? Open an issue or fork.

## Install

```bash
pip install agent-readiness-cli
```

Or run from source (no install):

```bash
git clone https://github.com/sspoisk/agent-readiness-cli
cd agent-readiness-cli
python3 -m agent_ready.cli https://your-site.example
```

Requires Python 3.10+. Standard library only — no third-party deps.

## Usage

```bash
agent-ready https://example.com              # human summary (default)
agent-ready --full https://example.com       # human summary + every finding
agent-ready --json https://example.com       # machine-readable JSON
agent-ready --csv https://example.com        # one CSV row (for monitoring)
agent-ready --quiet https://example.com      # just the integer score
```

Exit codes:

- `0` — audit ran (regardless of score)
- `2` — could not fetch (DNS, timeout, TLS, 4xx/5xx on the URL itself)
- with `--quiet` — the exit code is instead the tier band index: A=0, B=1, C=2, D=3, F=4
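
The tier bands documented below (A ≥ 90 · B ≥ 75 · C ≥ 55 · D ≥ 35 · F < 35) map scores to both a letter and the `--quiet` exit code. A minimal sketch of that mapping, reimplemented here for illustration (the tool's own code in `agent_ready/cli.py` is the source of truth):

```python
def tier(score: int) -> tuple[str, int]:
    """Map a 0-100 score to (letter, --quiet exit code).

    Thresholds mirror the documented bands: A >= 90, B >= 75,
    C >= 55, D >= 35, F < 35. Illustrative reimplementation,
    not the CLI's actual code.
    """
    bands = [(90, "A"), (75, "B"), (55, "C"), (35, "D"), (0, "F")]
    for index, (floor, letter) in enumerate(bands):
        if score >= floor:
            return letter, index
    return "F", 4


print(tier(60))  # the example run above: ('C', 2)
```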

## What gets checked

| Section | Weight | What |
|---|---|---|
| `llms.txt` | 15 | presence, valid format (leading H1), at least 3 canonical URLs listed |
| `json-ld` | 25 | parseable, recognised `@type` from a curated list, at least two distinct types |
| `ai-bots-robots.txt` | 20 | rules for GPTBot / ClaudeBot / Claude-Web / PerplexityBot / Google-Extended / CCBot / Applebot-Extended / Bytespider |
| `canonical+hreflang` | 15 | self-canonical present, hreflang reciprocity, `x-default` for multi-lang |
| `mcp-card` | 10 | optional — `/.well-known/mcp.json` is valid JSON with `name`, `description`, endpoint |
| `meta` | 10 | description, og:title, og:description, twitter:card, `<html lang=>` |
| `sitemap` | 5 | `/sitemap.xml` exists, valid `<urlset>` or `<sitemapindex>`, ≥5 URLs |
| **Total** | **100** | A ≥ 90 · B ≥ 75 · C ≥ 55 · D ≥ 35 · F < 35 |
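
The `ai-bots-robots.txt` check inspects rules for the crawlers listed above; in the example run the site fails it because ClaudeBot and GPTBot are disallowed at root. The fragment below is one way to explicitly allow them. Treat it as an illustrative sketch, not the only arrangement the scorer accepts; the exact criteria live in `agent_ready/cli.py`.

```text
# robots.txt: explicitly allow the AI crawlers the audit looks for
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```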

Full scoring math is in [agent_ready/cli.py](agent_ready/cli.py). One file, no ceremony.
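The optional `mcp-card` check wants valid JSON at `/.well-known/mcp.json` with `name`, `description`, and an endpoint. A minimal sketch of that shape; the exact endpoint key the scorer expects is not documented here, so `endpoint` below is an assumption (check `agent_ready/cli.py`):

```json
{
  "name": "example-site",
  "description": "What this site exposes to AI agents",
  "endpoint": "https://example.com/mcp"
}
```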

## CI usage

Drop it into a workflow to track your score over time:

```yaml
- name: Audit AI-agent readiness
  run: |
    pip install agent-readiness-cli
    agent-ready --csv https://your-site.example >> readiness.csv
    agent-ready --quiet https://your-site.example
```

If you want the build to fail below a threshold, gate on the score:

```bash
SCORE=$(agent-ready --quiet https://your-site.example)
[ "$SCORE" -ge 75 ] || { echo "AI-readiness below 75"; exit 1; }
```

## What it does NOT do

- Crawl the whole site (it audits one URL — the homepage by default)
- Fix anything for you (it tells you what to fix)
- Check vulnerabilities (use [OWASP ZAP](https://www.zaproxy.org/) for that)
- Validate JSON-LD against full Schema.org grammar (it checks that types are recognised)
- Score Core Web Vitals or accessibility (different concerns)

If you need any of those, this isn't the right tool.

## Comparison with adjacent tools

- **[firecrawl/llmstxt-generator](https://github.com/firecrawl/llmstxt-generator)** — generates an `llms.txt` for you. We *audit* yours; we don't generate.
- **[langchain-ai/mcpdoc](https://github.com/langchain-ai/mcpdoc)** — exposes llms-txt to IDEs as MCP. Different audience (developers wanting LLM context).
- **Google Rich Results Test** — validates JSON-LD for Google specifically. Web UI only, no CLI.
- **[NSHipster/sosumi.ai](https://github.com/NSHipster/sosumi.ai)** — Apple-docs to AI-readable, narrow scope.

`agent-readiness-cli` fills that gap: a single CLI that audits the agent-readiness surface and gives you a number.

## Need someone to fix the findings?

If your score is low and you don't want to fix it yourself:

- **DIY** — read the report, follow the linked specs (we cite them in `--full` output).
- **Self-service audit** — [GuardLabs Web-Audit Guardian from $99](https://guardlabs.online/web-audit) re-audits every 30 minutes and watches multi-language drift, security headers, and structure.
- **Hands-on white-glove audit** — [GuardLabs White-Glove Web Audit · $2,499](https://guardlabs.online/whiteglove/) — async-only, no calls. Custom report + 30-day async support + quarterly re-audit. We are the engineers behind this CLI.

This CLI is free and MIT-licensed forever, regardless of whether you ever buy anything.

## Contributing

Bug reports and PRs welcome. The repo is one Python file plus tests; barriers to contribution are low. See [CONTRIBUTING.md](CONTRIBUTING.md) for details.

If you want to add or re-weight a check, propose the rationale in an issue first — we want every weight to be defensible.

## License

MIT. See [LICENSE](LICENSE).

---

Maintained by [GuardLabs](https://guardlabs.online). The CLI is an open-source byproduct of running [Web-Audit Guardian](https://guardlabs.online/web-audit) on real sites — multi-language e-commerce, agency client portfolios, AI-native SaaS. If your readiness matters and you want serious eyes on it, [White-Glove](https://guardlabs.online/whiteglove/) is where we put them.
