Sign in to Workbench

Use the operator-configured username and password. If no custom password is set on the server, try admin / admin unless the server has opted out (DEFAULT_CREDENTIALS=0).

SparkRules

Workbench

Author rules · simulate · govern promotions

API access help

The message "Workbench session token or X-API-Key required" means the server rejects unauthenticated API calls when the HTTP gate is on (SPARKRULES_WORKBENCH_AUTH=1).

  • X-API-Key — Paste the same secret as the server's SPARKRULES_API_KEY and click Save (stored in this browser). Set that env var in systemd, Docker, or k8s.
  • Session token — Shown after browser sign-in when WORKBENCH_LOGIN_UI_ENABLED is true in this page’s script and the server has workbench auth enabled; sent as X-Workbench-Token.
  • Optional / open local — Leave SPARKRULES_WORKBENCH_AUTH unset, or run python scripts/dev_server.py (it clears that variable for this process by default). Use --with-workbench-auth to keep your shell value. To keep login endpoints but skip the gate, set SPARKRULES_DISABLE_WORKBENCH_GATE=1.
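The three auth modes above boil down to which header (if any) a client sends. A minimal client-side sketch, assuming the env var names from the docs; the fallback order and the WORKBENCH_SESSION_TOKEN variable are illustrative, not the Workbench API:

```python
import os

def auth_headers():
    """Pick request headers matching the auth modes above.
    SPARKRULES_API_KEY comes from the docs; WORKBENCH_SESSION_TOKEN is a
    hypothetical place a client might stash the browser session token."""
    key = os.environ.get("SPARKRULES_API_KEY")
    if key:
        return {"X-API-Key": key}
    token = os.environ.get("WORKBENCH_SESSION_TOKEN")
    if token:
        return {"X-Workbench-Token": token}
    return {}  # open local mode: no auth headers sent

print(auth_headers())
```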

OpenAPI · Docs

Security

Shortcuts: Ctrl+S save rule draft (local) · Ctrl+Enter run simulation

OpenAPI · Health

Workspace

Ship & control

Welcome to SparkRules Workbench

Author rules, simulate decisions, govern promotions - all from your browser.


Get started

The rule store is empty. Set the namespace in the header dropdown so governance pins match your tenant (this avoids X-Tenant-Id typos from free-text entry).

Read quick start
Stat cards: 📦 Rule versions · ✅ Active · ⏸ Inactive · 🏷 Handles · 📂 Groups · 🌐 Namespaces

Charts: Active vs Inactive · Rules by group · Rules by namespace · Version activity

Quick actions

Sample rule-fire results (top 20)

Every rule fired at the rate expected from the synthetic distribution, validating predicate execution correctness.

Spark execution check: by default Workbench uses pure Python in this process (no SparkSession). Cluster scale needs mapPartitions + a broadcast rule package - see docs/KNOWN_LIMITATIONS.md in the repo.

Registered rules

How to use this tab

Browse all versioned rules in the store. Use filters to search by handle, group, or namespace.

Activate/Deactivate: Toggle a rule version's active status. Only active rules are used in evaluations.

Download pack: Export all rules as a JSON pack for backup or migration.

Create new rules in the Author / validate tab.


Create rule

How to use this tab

1. Enter a rule_handle (unique name), group, and namespace.

2. Write your DRL rule in the Monaco editor below. Example:

rule "high-value"
  salience 10
  reason_codes ["HV001"]
  when
    $order : Order( $order.amount > 1000 )
  then
    result.risk = "high";
    result.review = true;
end

3. Click Validate to check syntax + get LSP diagnostics in the editor.

4. Click Save to repository to create a versioned rule.

Use Ctrl+Space in the editor for keyword completions.

Monaco DRL editor · Validate runs parse + LSP diagnostics

Drop a .drl file here or click to browse

Rule snippets — drag into the editor or click to insert at the cursor.

Simulation (dry run)

How to use this tab

1. Paste or write DRL in the editor (one or more rules).

2. Enter a fact as JSON. The binding name in your DRL (e.g. $t) becomes the key:

{"t": {"amount": 1500, "region": "US"}}

3. Click Run simulation to evaluate the rule against the fact.

Batch: Upload a CSV/JSON file, set max rows, and click Run batch to evaluate many facts at once.

Results show fired status, action outputs, and bound fields.

Fact JSON (object):

Bulk load - upload a file with multiple facts. Supported formats:

Format guide + sample files

CSV - header row + data rows. Values become fact fields:

amount,region,credit_score,income
1500,US,720,85000
500,EU,580,32000

JSON array - array of fact objects. Use binding key (e.g. "t") to match your DRL:

[
  {"t": {"amount": 1500, "credit_score": 720}},
  {"t": {"amount": 500, "credit_score": 580}}
]

JSONL - one JSON object per line (same format as JSON array items).

Download samples:

sample_facts.csv sample_facts.json

Tip: Check "Wrap each row as {"t":row}" when using CSV so flat rows get wrapped in the binding key your DRL expects.
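The wrapping described in the tip above can be sketched in a few lines; the helper name is illustrative, and values stay as strings here (the server may coerce types):

```python
import csv
import io
import json

def csv_to_wrapped_facts(csv_text, binding="t"):
    """Wrap each CSV row as {binding: row}, mirroring the
    'Wrap each row as {"t":row}' checkbox for batch uploads."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{binding: dict(row)} for row in reader]

sample = "amount,region\n1500,US\n500,EU\n"
facts = csv_to_wrapped_facts(sample)
print(json.dumps(facts))
```

The result matches the JSON-array format shown in the format guide, so the same DRL binding key works for both upload paths.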

Raw batch JSON


Batch cost heuristic

Planning-only estimate from GET /workbench/cost-estimate (same model as sparkrules-cli cost). Not a billing quote.

How to use Advanced tools

AI suggestions: Offline rule templates by default. For OpenAI-backed suggestions set SPARKRULES_AI_PROVIDER=openai and SPARKRULES_OPENAI_API_KEY on the server (see deploy docs).

Chain simulation: Evaluate multiple rules in sequence with salience ordering and stop-on-fire/decline policies.

Shadow mode: Compare two rule sets side-by-side to detect drift before deploying changes.

Counterfactual: "What if?" analysis - compare results with different fact values to understand rule sensitivity.

Coverage: Run one DRL against many facts and see per-rule fire counts (POST /simulations/coverage).

DMN: Evaluate minimal Camunda-style decision-table XML (/dmn/evaluate, /dmn/counterfactual).

Time travel: Capture a rule execution snapshot, then replay it later with different facts for debugging.

DQ evaluate: Run data quality checks (not-null, range, in-set, regex, uniqueness) on a fact. Requires dq_steward role.

Graph enrich: Enrich a fact with graph-based relationships (JSON response, not visual).

Lineage events: View the audit trail of all rule evaluations, captures, and governance actions.

Uses existing REST APIs. Open Security in the header to set X-Roles when required (e.g. dq_steward for DQ, ai_reviewer for AI, platform_admin for lineage). React Flow / full graph designer is not bundled - graph response is JSON only.
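The DQ check kinds listed above (not-null, range, in-set, regex) can be sketched as follows. This is a local illustration of the semantics, not the exact /dq request or response schema; field and check shapes are assumptions:

```python
import re

def run_dq_checks(fact, checks):
    """Evaluate simple data-quality checks against one fact.
    Check dicts use illustrative keys: field, type, and per-type params."""
    results = []
    for c in checks:
        value = fact.get(c["field"])
        kind = c["type"]
        if kind == "not_null":
            ok = value is not None
        elif kind == "range":
            ok = value is not None and c["min"] <= value <= c["max"]
        elif kind == "in_set":
            ok = value in c["values"]
        elif kind == "regex":
            ok = value is not None and re.fullmatch(c["pattern"], str(value)) is not None
        else:
            ok = False  # unknown check types fail closed
        results.append({"field": c["field"], "type": kind, "passed": ok})
    return results

fact = {"amount": 1500, "region": "US"}
checks = [
    {"field": "amount", "type": "range", "min": 0, "max": 10000},
    {"field": "region", "type": "in_set", "values": ["US", "EU"]},
    {"field": "email", "type": "not_null"},
]
print(run_dq_checks(fact, checks))
```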

Copy as curl for a simple simulation (add -H "X-API-Key: …" if your server requires it):

AI rule suggestions (offline or OpenAI)

This calls POST /ai/suggest-rules with optional sample facts. With default settings the server uses offline template suggestions. For generative NL→DRL, configure the API process with SPARKRULES_AI_PROVIDER=openai and SPARKRULES_OPENAI_API_KEY (see README). Add intent fields on facts for richer prompts, e.g. [{"intent":"Decline when DPD > 90"}].

Sample facts JSON array (optional):

AI suggestions queue

Fact for simulate (JSON object):

Chain / shadow / counterfactual / coverage

DRL (multi-rule for chain):

Fact JSON:

Shadow - primary DRL:

Shadow DRL:

Fact + run_id:

Counterfactual - same DRL, two facts:

baseline_fact

candidate_fact

Coverage — DRL (multi-rule supported):

facts (JSON array of fact objects):

DMN (minimal XML)

Calls POST /dmn/evaluate and POST /dmn/counterfactual. Requires a role in the simulation set (e.g. rule_reader); default header uses platform_admin.
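The decision-table semantics behind /dmn/evaluate can be sketched without the XML layer. A minimal first-hit evaluator under assumed row shapes (condition predicates keyed by input name); this is not the Camunda schema or the server's implementation:

```python
def evaluate_decision_table(rows, env, hit_policy="FIRST"):
    """Evaluate a decision table against an env dict.
    Each row is (conditions, outputs): conditions maps an input name to a
    predicate over env[name]; outputs is the row's result dict."""
    matches = []
    for conditions, outputs in rows:
        if all(pred(env.get(name)) for name, pred in conditions.items()):
            if hit_policy == "FIRST":
                return outputs  # first matching row wins
            matches.append(outputs)
    return matches if matches else None

rows = [
    ({"dpd": lambda v: v is not None and v > 90}, {"decision": "decline"}),
    ({"dpd": lambda v: v is not None}, {"decision": "approve"}),
]
print(evaluate_decision_table(rows, {"dpd": 120}))
```

A counterfactual (as in /dmn/counterfactual) is then just evaluating the same table against `base_env` and against `base_env` updated with `env_patch`, and comparing the two outputs.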

DMN XML:

env (JSON object):

Counterfactual — base_env:

env_patch:

Time travel (capture / replay)

Capture - run_id, DRL, fact:

Replay - snapshot_id from capture response, same run_id:

DQ evaluate

Requires role dq_steward (or admin). Fact JSON:

Checks JSON array:

Raw JSON
Graph enrich (structured + JSON)

Not a graph designer - server returns enriched JSON. Summary below; raw under the fold.

Raw JSON
Lineage events (run history table)

GET /lineage/events - typically needs platform_admin. Table is newest-first; open Raw for full payloads.

Raw JSON

Engine / platform (config view)

How to use this tab

This is a read-only view of the current engine configuration. It shows:

- Platform: local, glue, databricks, gcp-dataproc, or azure-synapse

- Spark version: target Spark 3.x version

- Executor resources: cores, workers, memory

- Store backend: in_memory, duckdb, postgres, iceberg (sink or pickle fallback), etc.

To change configuration, set environment variables on the server (e.g. SPARKRULES_PLATFORM=databricks).

Read-only view of the active engine config map. Change via environment in production.


Import rule pack

How to use this tab

Export: Use the "Download pack JSON" button on the Rule assets tab to export all rules.

Import: Paste the exported JSON below and click Import. Each item needs rule_handle and drl.

Diff: Compare two versions of the same rule handle to see what changed (unified diff format).

Format: {"format":"sparkrules-rulepack-1","items":[{"rule_handle":"...","drl":"...","group":"..."}]}

Paste JSON from /rules/export or a minimal items array. Each item must include rule_handle, drl, and optional group.

Compare two versions (same handle)
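The unified diff this comparison produces looks like the output of Python's stdlib difflib; a sketch with two hypothetical versions of the same handle (the DRL bodies are made up for illustration):

```python
import difflib

# Two versions of the same rule handle, differing in the threshold.
old = 'rule "high-value"\n  when $t : T ( $t.amount > 1000 )\n  then result.risk = "high";\nend\n'
new = 'rule "high-value"\n  when $t : T ( $t.amount > 2000 )\n  then result.risk = "high";\nend\n'

diff = "".join(difflib.unified_diff(
    old.splitlines(keepends=True),
    new.splitlines(keepends=True),
    fromfile="v1", tofile="v2",
))
print(diff)
```

Lines prefixed with `-` come from the older version and `+` from the newer one, which is the format the Diff view renders.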


Promotion (dev → stage → prod)

How to use Governance

Promotion workflow:

1. Sync dev: Pin the current active version of a rule to the dev environment.

2. Promote to stage: Copy the dev pin to stage for testing.

3. Promote to prod: Copy the stage pin to prod for production use.

Deprecations: Propose retiring a rule, get approval, then enforce (deactivates all active versions).

Set the namespace and rule_handle fields, then use the buttons to manage the lifecycle.

Pins record which version is in each environment. Sync dev from the currently active rule version, then promote only forward along dev → stage and stage → prod. Runtime deployment wiring stays in your platform.
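The forward-only rule can be sketched as a tiny pin store. The dict shape and function name are illustrative (the server keys pins by namespace as well), but the one-step-forward check is the invariant described above:

```python
ORDER = ["dev", "stage", "prod"]

def promote(pins, rule_handle, src, dst):
    """Copy a pin one step forward along dev → stage → prod.
    Backward or skipped hops are rejected."""
    if ORDER.index(dst) != ORDER.index(src) + 1:
        raise ValueError(f"can only promote one step forward, not {src} -> {dst}")
    version = pins.get((rule_handle, src))
    if version is None:
        raise ValueError(f"no {src} pin for {rule_handle}")
    pins[(rule_handle, dst)] = version
    return version

pins = {("high-value", "dev"): 3}   # synced from the active version
promote(pins, "high-value", "dev", "stage")
promote(pins, "high-value", "stage", "prod")
print(pins)
```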

Deprecations

Propose → approve → enforce deactivates matching rule versions. Requires rule_admin for mutations.

Propose

Approve

Enforce (optional handle filter)


Guided template

How to use Templates

Templates let you create reusable DRL patterns with placeholders in curly braces.

Example: rule {rule_name} when $t : T ( $t.x > {min_x} ) then end

Click Load fields to extract the placeholders. Fill in values to generate concrete DRL.

Useful for business analysts who need to create rules from a standard pattern without writing DRL from scratch.

Placeholder pattern for DRL. Fields drive form labels in an editor shell.
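The Load fields / fill-in flow can be sketched with a regex over the curly-brace placeholders, using the template example from above (helper names are illustrative, not the Workbench API):

```python
import re

TEMPLATE = "rule {rule_name} when $t : T ( $t.x > {min_x} ) then end"

def load_fields(template):
    """Extract {placeholder} names, as 'Load fields' does for the form."""
    return sorted(set(re.findall(r"\{(\w+)\}", template)))

def render(template, values):
    """Fill placeholders to generate concrete DRL."""
    return template.format(**values)

print(load_fields(TEMPLATE))
print(render(TEMPLATE, {"rule_name": '"big-x"', "min_x": 100}))
```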

Test rule from catalog

Fact JSON