# OJHunt Lite — AI Agent Guide

OJHunt Lite queries competitive programming Online Judge platforms and returns
solved-problem statistics. Use the API to look up any user across 28+ OJs and
compute deduplicated totals.

## Workflow

To get a user's deduplicated solved-problem totals across platforms:

1. Call GET /api/crawlers/ to list available crawlers.
2. For each relevant crawler, call GET /api/crawlers/{crawler}/{username}.
   These calls are independent and can run in parallel.
   A user may have different usernames on different platforms — each call
   carries its own crawler/username pair.
3. Collect all responses (including errors — /api/merge skips them).
4. POST the full array of responses to /api/merge.
   Do not call /api/merge until all crawler queries have completed.
   The merge step handles aggregator cross-platform deduplication server-side.
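The four steps above can be sketched offline. The merge payload in step 4 is just a JSON array of the raw crawler responses; here is a minimal sketch of assembling it with jq. The file contents and their fields below are illustrative stand-ins, not the real CrawlerResult schema (see the OpenAPI schema for the actual shape):

```shell
#!/bin/sh
# Assemble the /api/merge request body from saved crawler responses.
# In practice each file holds the verbatim body of one
# GET /api/crawlers/{crawler}/{username} call; the JSON below is a stand-in.
OUT=$(mktemp -d)
echo '{"crawler":"codeforces","username":"alice","error":false}' > "$OUT/0.json"
echo '{"crawler":"atcoder","username":"alice_at","error":false}' > "$OUT/1.json"

# jq -s (slurp) wraps the individual responses in one JSON array,
# which is the body shape /api/merge expects.
PAYLOAD=$(jq -sc '.' "$OUT"/*.json)
printf '%s\n' "$PAYLOAD"
# Then: curl -sf -X POST "$BASE/api/merge" \
#         -H "Content-Type: application/json" -d "$PAYLOAD"
rm -rf "$OUT"
```

Writing each response to its own file scales to any number of crawler/username pairs and sidesteps shell quoting of large JSON bodies.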

## Tracking progress over time

To generate a PDF progress report with accumulated history, use the web UI at
{{ base }}/ rather than the API directly. The UI handles PDF caching, history
merging, and download automatically:

1. Add crawlers and usernames, then click "Query All".
2. Once all queries complete, click "Download Report".
   If a previous report was uploaded, its history is merged into the new PDF.
3. Save the downloaded PDF — upload it on the next run to carry history forward.

If you need to restore settings from a previous PDF (e.g. to re-run the same
crawlers), upload it via the "Upload previous report" button before querying.

## API Reference

- [OpenAPI schema (all request/response shapes)]({{ base }}/openapi.json)
- [Interactive docs]({{ base }}/docs)

## Key endpoints

```
GET  {{ base }}/api/crawlers/
  List all available crawlers.

GET  {{ base }}/api/crawlers/{crawler}/{username}
  Query a single crawler. Returns a CrawlerResult — pass it verbatim to /api/merge.

POST {{ base }}/api/merge
  Merge an array of CrawlerResults into deduplicated totals.

POST {{ base }}/api/pdf/extract
  Extract embedded settings and the history date from a previously generated OJHunt PDF.

POST {{ base }}/api/pdf/generate
  Generate a PDF progress report. Optionally merges history from a previous PDF.
  Pass per-crawler results in snapshot.results to populate the platform breakdown table.
  Returns the PDF as base64 — save it locally to preserve history for the next run.
```
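Since /api/pdf/generate returns the PDF base64-encoded, the bytes must be decoded before saving. A sketch of the decode step, assuming the base64 string arrives in a field named `pdf` (an assumption: confirm the field name against the OpenAPI schema). The response body here is an offline stand-in:

```shell
#!/bin/sh
# Decode a base64 PDF from a /api/pdf/generate-style response and save it.
# "pdf" as the field name is an assumption; check the OpenAPI schema.
RESPONSE='{"pdf":"JVBERi0xLjQ="}'   # stand-in; "JVBERi0xLjQ=" is base64 for "%PDF-1.4"
printf '%s' "$RESPONSE" | jq -r '.pdf' | base64 -d > report.pdf
head -c 8 report.pdf   # decoded bytes start with the PDF magic "%PDF"
```

In a real run, pipe the curl response into `jq -r '.pdf' | base64 -d` the same way, then keep the saved file to upload on the next run.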

## Available crawlers ({{ crawler_count }} total)

{{ crawler_names }}

## Shell script template

```sh
#!/bin/sh
# Query multiple OJs for a user and get deduplicated totals.
set -e   # abort if any query fails, so the merge payload is never malformed
BASE="{{ base }}"
USERNAME="tourist"

# Query each crawler (the calls are independent and could also run in parallel)
CF=$(curl -sf "$BASE/api/crawlers/codeforces/$USERNAME")
AT=$(curl -sf "$BASE/api/crawlers/atcoder/$USERNAME")

# Merge with deduplication (handles VJudge cross-referencing)
curl -sf -X POST "$BASE/api/merge" \
  -H "Content-Type: application/json" \
  -d "[$CF,$AT]"
```

## Notes

- Each query is independent: a user may have different usernames on different
  platforms. Pass each crawler/username pair separately and merge them all —
  /api/merge accepts mixed usernames in the same request.
- Results with error=true are automatically skipped by /api/merge.
- Aggregator problems (with platform-prefixed IDs like "codeforces-1A" or
  "hdu-1000") are cross-referenced against native crawlers to avoid
  double-counting.
- Some crawlers (CSES, VJudge) require server-side credentials — just call
  them normally; credentials are configured on the server.
