Metadata-Version: 2.4
Name: jentic-apitools-cli
Version: 1.0.0a10
Summary: Jentic Apitools CLI - Full-featured command-line interface independent of API
Author: Jentic
Author-email: Jentic <hello@jentic.com>
License-Expression: Apache-2.0
License-File: LICENSE
License-File: NOTICE
Requires-Dist: pydantic>=2.7,<3.0
Requires-Dist: jentic-apitools-common~=1.0.0a10
Requires-Dist: jentic-apitools-analyze~=1.0.0a10
Requires-Dist: jentic-apitools-pipelines~=1.0.0a10
Requires-Dist: jentic-apitools-jobs~=1.0.0a10
Requires-Dist: click>=8.1.0
Requires-Dist: rich>=13.0.0
Requires-Python: >=3.11
Project-URL: Homepage, https://github.com/jentic/jentic-apitools
Description-Content-Type: text/markdown

# Jentic API Tools - CLI

Click-based command-line interface for Jentic API Tools, providing commands for analyzing, scoring, and importing OpenAPI specifications, as well as bulk repository operations.

## Prerequisites

Node.js (>= 18) is required for full functionality. The CLI uses npx to run Redocly, Spectral, and Speclynx for OpenAPI validation and bundling. Without Node.js, the tool still runs but produces partial results using only the built-in Python validation backends. Install Node.js from https://nodejs.org/.

## Key Features

The CLI registers as the `jentic-apitools` entry point and provides seven commands. The `analyze` command runs validation diagnostics against an OpenAPI spec using multiple backends (Redocly, Spectral, custom rules, and optional LLM semantic analysis). The `score` command calculates the AI-readiness score using Jentic's 6-dimension framework. The `import` command processes a spec through the full import pipeline, producing scored and cataloged artifacts written to a directory or ZIP archive.

All three core commands accept input as a local file path, an HTTP(S) URL, or stdin (using `-` with optional `--stdin-filepath`). For `analyze` and `score`, output defaults to JSON on stdout and can be redirected to a file with `--output`. The `import` command always writes its artifacts to a directory (via `--output`) or to a ZIP archive (via `--archive`) and emits a summary JSON document to stdout. A `--format text` option provides human-readable output. The `-q`/`--quiet` flag, available on `analyze`, `score`, and `import`, suppresses log output so only command results appear on stdout, which is useful when piping to tools like `jq`.
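
For instance, the quiet flag keeps stdout as pure JSON so `analyze` output can be piped straight into `jq`. The sketch below substitutes a synthetic diagnostics document for the real command (shown in the comment) so the `jq` step can be demonstrated on its own; the field names are illustrative, not a schema guarantee.

```shell
# In practice:  jentic-apitools analyze openapi.json -q | jq '.summary'
# Synthetic stand-in output so the jq step can be shown in isolation:
echo '{"diagnostics":[{"severity":"warn"}],"summary":{"warn":1}}' | jq '.summary.warn'
```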

The remaining commands handle batch repository operations: `bulk-rescore` iterates all specs in a local repository clone; `recalculate-scores` rebuilds the `scores.json` index; `rebuild-scores-json` and `rebuild-apis-json` rebuild catalog files.

## LLM Configuration

The `--enable-llm-analysis` flag (on `analyze`, `score`, and `import`) and the `bulk-rescore` command (which always uses LLM analysis) require credentials for the configured LLM provider. Set `LLM_PROVIDER` and `LIGHT_LLM_PROVIDER` (default: `BEDROCK`) and provide the corresponding API key:

| Provider | Required environment variable |
|---|---|
| `OPENAI` | `OPENAI_API_KEY` or `JENTIC_OPENAI_API_KEY` |
| `CLAUDE` / `ANTHROPIC` | `ANTHROPIC_API_KEY` or `JENTIC_ANTHROPIC_API_KEY` |
| `GEMINI` | `GEMINI_API_KEY` or `JENTIC_GEMINI_API_KEY` |
| `BEDROCK` | AWS credentials via boto3 (IAM role, `~/.aws/credentials`, or the `AWS_BEARER_TOKEN_BEDROCK` environment variable) |

See `.env.example` for the full list of configuration options. The CLI validates credentials at startup and provides actionable error messages if they are missing.
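
As a minimal sketch, selecting OpenAI as the provider per the table above might look like this (the key value is a placeholder):

```shell
# Example: use OpenAI for LLM-backed analysis (key value is a placeholder)
export LLM_PROVIDER=OPENAI
export LIGHT_LLM_PROVIDER=OPENAI
export OPENAI_API_KEY="sk-your-key-here"
# then: jentic-apitools analyze openapi.json --enable-llm-analysis
```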

## Dependencies

Internal: `jentic.apitools.common`, `jentic.apitools.analyze`, `jentic.apitools.pipelines`, `jentic.apitools.jobs`. External: click, rich.

## Installation

The recommended way to install is with [pipx](https://pipx.pypa.io/) or [uv tool](https://docs.astral.sh/uv/guides/tools/), which run the CLI in an isolated environment without affecting your system Python:

```bash
# Using pipx
pipx install jentic-apitools-cli

# Using uv
uv tool install jentic-apitools-cli
```

Alternatively, install with pip:

```bash
pip install jentic-apitools-cli
```

Once installed, the `jentic-apitools` command is available on your PATH:

```bash
jentic-apitools --help
```

## Quick Start

```bash
# Analyze a local spec
jentic-apitools analyze openapi.json

# Analyze with text output and fail on warnings
jentic-apitools analyze openapi.json --format text --fail-on warn

# Score a spec from URL
jentic-apitools score https://petstore3.swagger.io/api/v3/openapi.json

# Score with minimum threshold
jentic-apitools score openapi.json --min-score 80

# Import from URL to directory
jentic-apitools import https://example.com/openapi.json --output ./output --label example.com/api

# Import to ZIP archive
jentic-apitools import openapi.json --archive output.zip

# Pipe from stdin
cat openapi.yaml | jentic-apitools analyze - --stdin-filepath openapi.yaml
curl -s https://example.com/openapi.json | jentic-apitools score -
```

### analyze

The `analyze` command runs validation diagnostics against an OpenAPI specification.

```bash
jentic-apitools analyze <SPEC> [--format json|text] [--output FILE] [--base-url URL] [--enable-llm-analysis] [--stdin-filepath PATH] [--fail-on error|warn|info|hint|none]
```

The `SPEC` argument accepts a local file path (relative or absolute), an HTTP(S) URL, or `-` for stdin. When using stdin, `--stdin-filepath` provides a virtual path for format detection.

Default output is JSON to stdout with a diagnostics array and summary counts. The `--format text` option produces a human-readable report. The `--fail-on` option causes exit code 2 if any diagnostic meets or exceeds the specified severity.

Options:

```
SPEC                         Local file path, URL, or - for stdin (required)
--format, -f TEXT            Output format: json or text (default: json)
--output, -o FILE            Write output to file instead of stdout
--base-url URL               Base URL for resolving references (default: spec URL)
--stdin-filepath PATH        Virtual filepath for stdin input (format detection)
--enable-llm-analysis        Enable LLM-based semantic analysis
--fail-on LEVEL              Exit code 2 if severity >= level (default: none)
```

### score

The `score` command calculates the AI-readiness score for an OpenAPI specification.

```bash
jentic-apitools score <SPEC> [--format json|text] [--output FILE] [--label LABEL] [--vendor VENDOR] [--api API] [--min-score N] [--enable-llm-analysis] [--include-diagnostics] [--skip-bundle] [--stdin-filepath PATH] [--original-url URL] [--api-id ID] [--api-version-id ID] [--canonical-source-url URL] [--canonical-artifacts-base-url URL] [--canonical-artifacts-base-url-ui URL]
```

Output is the full scorecard JSON by default. The `--format text` option shows a human-readable summary with overall score, grade, level, and per-dimension breakdowns. The `--min-score` option causes exit code 2 if the overall score is below the threshold.

Options:

```
SPEC                         Local file path, URL, or - for stdin (required)
--format, -f TEXT            Output format: json or text (default: json)
--output, -o FILE            Write output to file instead of stdout
--label TEXT                 Vendor/API label (mutually exclusive with --vendor)
--vendor TEXT                Vendor name, combined with --api to form label
--api TEXT                   API name (default: main, requires --vendor)
--min-score N                Exit code 2 if score < N
--enable-llm-analysis        Enable LLM-based semantic analysis
--include-diagnostics        Include diagnostics in score output
--skip-bundle                Skip bundling step
--stdin-filepath PATH        Virtual filepath for stdin input
--original-url URL           Original spec URL for provenance tracking
--api-id ID                  Logical API identifier
--api-version-id ID          Logical API version identifier
--canonical-source-url URL   Canonical source URL for metadata
--canonical-artifacts-base-url URL       Canonical base URL for raw artifacts
--canonical-artifacts-base-url-ui URL    Canonical base URL for UI artifact links
```

### import

The `import` command processes an OpenAPI specification through the full import pipeline.

```bash
jentic-apitools import <SPEC> (--output DIR | --archive FILE.zip) [--label LABEL] [--vendor VENDOR] [--api API] [--overwrite] [--enable-llm-analysis] [--skip-bundle] [--stdin-filepath PATH] [--original-url URL] [--api-id ID] [--api-version-id ID] [--canonical-source-url URL] [--canonical-artifacts-base-url URL] [--canonical-artifacts-base-url-ui URL]
```

Either `--output DIR` or `--archive FILE.zip` must be specified. With `--output`, artifacts are written to the directory. With `--archive`, they are packaged as a ZIP file. On success, a JSON manifest is written to stdout.

Options:

```
SPEC                         Local file path, URL, or - for stdin (required)
--output, -o DIR             Output directory for artifacts
--archive FILE.zip           Write artifacts as a ZIP archive
--label TEXT                 Vendor/API label (mutually exclusive with --vendor)
--vendor TEXT                Vendor name, combined with --api to form label
--api TEXT                   API name (default: main, requires --vendor)
--overwrite                  Overwrite existing output directory or archive
--enable-llm-analysis        Enable LLM-based semantic analysis
--skip-bundle                Skip bundling step
--reject-invalid-server-urls / --no-reject-invalid-server-urls
                             Reject specs with invalid server URLs (default: enabled)
--stdin-filepath PATH        Virtual filepath for stdin input
--original-url URL           Original spec URL for provenance tracking
--api-id ID                  Logical API identifier
--api-version-id ID          Logical API version identifier
--canonical-source-url URL   Canonical source URL for metadata
--canonical-artifacts-base-url URL       Canonical base URL for raw artifacts
--canonical-artifacts-base-url-ui URL    Canonical base URL for UI artifact links
```

### bulk-rescore

The `bulk-rescore` command rescores all OpenAPI specs in a local clone of a repository with the `apis/openapi/<vendor>/<api>/<version>/openapi.json` directory structure. It runs `import_openapi` for each spec, then copies the updated `scorecard.json`, `diagnostics.json`, and the `diagnostics` section of `meta.json` back into the local repo. When not in dry-run mode, the command also recalculates `scores.json` with all scores sorted by value in descending order.

> **Note:** This command always uses LLM analysis. Ensure your LLM provider credentials are configured (see [LLM Configuration](#llm-configuration)).

```bash
jentic-apitools bulk-rescore /path/to/jentic-public-apis
jentic-apitools bulk-rescore /path/to/jentic-public-apis --max-iterations 2 --dry-run
```

Options:

```
LOCAL_REPO_PATH              Path to root of the local repo clone (required)
--github-repo-url TEXT       GitHub repository URL (default: https://github.com/jentic/jentic-public-apis)
--output-dir, -o PATH        Output directory for rescore results (default: __data__/rescore_<datetime>)
--github-repo-branch TEXT    GitHub repository branch (default: main)
--base-dir TEXT              Base directory for OpenAPI specs within the repo (default: apis/openapi)
--max-iterations INTEGER     Maximum number of APIs to process, useful for testing (default: no limit)
--dry-run                    Run without copying results to the local repo
```

### recalculate-scores

Rebuilds `scores.json` from all `scorecard.json` files found under a local repository clone.

```bash
jentic-apitools recalculate-scores /path/to/jentic-public-apis
```

Options:

```
LOCAL_REPO_PATH              Path to root of the local repo clone (required)
--base-dir TEXT              Base directory for OpenAPI specs within the repo (default: apis/openapi)
```

### rebuild-scores-json

Rebuilds `scores.json` from all `scorecard.json` files found under the local repository clone.

```bash
jentic-apitools rebuild-scores-json /path/to/jentic-public-apis
```

### rebuild-apis-json

Rebuilds the root `apis.json` catalog from all version-level `apis.json` files.

```bash
jentic-apitools rebuild-apis-json /path/to/jentic-public-apis
```

## Exit Codes

All commands use consistent exit codes: 0 for success, 1 for runtime or pipeline errors, and 2 for policy failures (triggered by `--fail-on` or `--min-score` thresholds).
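
These codes make the CLI straightforward to gate in scripts or CI. The sketch below branches on them; `run_score` is a stand-in function (hard-coded to return 2 for illustration) for a real `jentic-apitools score openapi.json --min-score 80 -q` invocation.

```shell
# Stand-in for: jentic-apitools score openapi.json --min-score 80 -q
run_score() { return 2; }   # simulate a policy failure (exit code 2)

status=0
run_score || status=$?      # capture the exit code without aborting under set -e
case "$status" in
  0) echo "score OK" ;;
  2) echo "policy failure: below threshold" ;;
  *) echo "pipeline error ($status)" ;;
esac
```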

## Testing

```bash
pytest tests -v
```
