Decision Gate Docs

Deterministic, replayable gate evaluation with auditable decisions.

Skill Pathways Onboarding

Purpose

This guide is the single onboarding path for both Decision Gate usage modes:

  1. DG guards execution of external skills.
  2. DG is called as a deterministic evaluation skill.

It also defines how to compose both modes safely (recursive loops) without allowing LLM-authored policy drift.

Which Path To Use

| Intent | Path | Enforcement Strength |
| --- | --- | --- |
| Mutate external state (deploy, delete, pay, publish) | DG guards skill execution | Hard boundary (fail-closed) |
| Produce analysis/reporting/decision support | DG as evaluation skill | Computation only unless wrapped by a boundary |
| Multi-step agent loops that author and then execute | Recursive composition (outer guard + inner evaluation) | Hard boundary at outer ring |
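
The table rows can be sketched as a small dispatch helper. The function name, flags, and return strings below are illustrative, not part of the DG API:

```python
# Hypothetical dispatch mirroring the table: mutating actions require the
# hard boundary; analysis-only work uses evaluation mode; agent loops that
# both author and execute compose the two with the guard at the outer ring.
def choose_path(mutates_state: bool, multi_step: bool) -> str:
    if mutates_state and multi_step:
        return "recursive_composition"  # Path C: outer guard + inner evaluation
    if mutates_state:
        return "guarded_execution"      # Path A: fail-closed boundary
    return "evaluation_skill"           # Path B: computation only
```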

Path A: DG Guards Skill Execution (Production Default)

Use this when actions have side effects.

Operational Contract

  1. Gate ownership is human/policy-owned.
  2. Gate specs are versioned artifacts, not runtime LLM inventions.
  3. Every mutating skill call must pass live DG evaluation before execution.
  4. Non-pass outcomes (hold, fail, unknown) block execution.
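
Contract item 4 reduces to an explicit allow-list check. A minimal sketch, assuming a `PASS_OUTCOMES` set for illustration (in Path A the allow-list comes from the per-action gate map):

```python
# Fail-closed outcome check. Only explicitly allowed outcomes permit
# execution; hold, fail, unknown, and anything unrecognized all block.
PASS_OUTCOMES = {"advance", "complete"}

def is_execution_allowed(outcome: str) -> bool:
    return outcome in PASS_OUTCOMES
```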

Setup Checklist

  1. Define a per-action gate map.
  2. Author and register scenario specs for each action class.
  3. Implement a thin runtime wrapper: evaluate -> allow/deny -> execute.
  4. Export and verify runpacks for audit.

Example action map:

{
  "deploy_to_prod": {
    "scenario_id": "release-boundary-v1",
    "required_min_lane": "verified",
    "allowed_outcomes": ["advance", "complete"]
  },
  "publish_external_report": {
    "scenario_id": "publication-boundary-v1",
    "required_min_lane": "verified",
    "allowed_outcomes": ["complete"]
  }
}
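
A hypothetical loader that validates the map before any gate evaluation. The required fields mirror the example above; the validation rules themselves are assumptions, not a DG requirement:

```python
import json

# Hypothetical validator for the per-action gate map shown above.
REQUIRED_FIELDS = {"scenario_id", "required_min_lane", "allowed_outcomes"}

def load_action_gate_map(raw: str) -> dict:
    gate_map = json.loads(raw)
    for action, policy in gate_map.items():
        missing = REQUIRED_FIELDS - policy.keys()
        if missing:
            raise ValueError(f"{action}: missing fields {sorted(missing)}")
        if not policy["allowed_outcomes"]:
            # An empty allow-list would block every call; treat as a config error.
            raise ValueError(f"{action}: allowed_outcomes must not be empty")
    return gate_map
```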

Example wrapper logic:

def guarded_skill_call(action_name, action_args):
    """Evaluate the gate live, then execute only on an allowed outcome."""
    policy = action_gate_map[action_name]
    # Start a fresh DG run against the policy-owned scenario spec.
    run_id = start_run(policy["scenario_id"])
    result = scenario_next(policy["scenario_id"], run_id, feedback="trace")
    outcome = extract_outcome_kind(result)
    # Fail closed: any outcome not explicitly allowed blocks execution.
    if outcome not in policy["allowed_outcomes"]:
        return {"allowed": False, "reason": "gate_not_satisfied", "result": result}
    skill_result = call_external_skill(action_name, action_args)
    # Export and verify the runpack so the decision is auditable.
    runpack = runpack_export(policy["scenario_id"], run_id)
    verify = runpack_verify(runpack["dir"], runpack["manifest_path"])
    return {"allowed": True, "skill_result": skill_result, "runpack_verify": verify}

Implementation references:

Path B: DG As Evaluation Skill

Use this for structured analysis when no direct side-effect action is executed.

Operational Contract

  1. DG is invoked as a deterministic claim evaluator.
  2. Outputs drive explanation/reporting, not direct mutation.
  3. If mutation is later requested, switch to Path A boundary first.

Typical Tool Flow

  1. Discover capabilities: providers_list, provider_contract_get, provider_check_schema_get.
  2. Build artifacts: claim_inventory, capability_matrix, claim_condition_map.
  3. Evaluate: precheck for iteration, then live scenario_next when policy requires live proof.
  4. Verify integrity when audit is required: runpack_export, runpack_verify.
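
Under stated assumptions about artifact shapes, step 2 might produce structures like the following. The field names below are for exposition only, not the canonical DG schema; see llm_native_playbook.md for the real contract:

```python
# Illustrative shapes for the Path B build artifacts.
claim_inventory = [
    {"claim_id": "build-signed", "provider": "ci", "check": "signature"},
    {"claim_id": "tests-green", "provider": "ci", "check": "test_suite"},
]

# capability_matrix: which (provider, check) pairs are available.
capability_matrix = {(c["provider"], c["check"]): True for c in claim_inventory}

# claim_condition_map: which checks each claim depends on.
claim_condition_map = {
    c["claim_id"]: {"requires": [c["check"]]} for c in claim_inventory
}
```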

Canonical contract: llm_native_playbook.md

Path C: Recursive Composition (Authoring Loop + Execution Boundary)

This is the common “DG inside DG workflow” concern.

Use a ring model:

  1. Inner ring: DG-as-skill for claim decomposition/verification assistance.
  2. Outer ring: DG guard for consequential action.

Hard rules:

  1. Inner ring may propose mappings; it may not relax outer ring policy.
  2. Outer ring gate definitions remain system-authored and versioned.
  3. Outer ring blocks on any unresolved required claim regardless of inner-ring confidence text.
  4. Runpack verification at outer ring is the audit source of truth.
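
Hard rule 3 can be sketched as follows. The claim shape and the `resolved` flag are assumptions for illustration; the point is that inner-ring prose never substitutes for an actual resolution:

```python
# Outer ring blocks on any unresolved required claim, regardless of
# inner-ring confidence text attached to it.
def outer_ring_allows(required_claims: list) -> bool:
    return all(claim.get("resolved") is True for claim in required_claims)

claims = [
    {"claim_id": "rollback-plan", "resolved": True},
    # Confidence text from the inner ring does not resolve the claim.
    {"claim_id": "security-scan", "resolved": False,
     "inner_ring_note": "high confidence this is fine"},
]
```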

Anti-pattern to avoid:

"The agent self-evaluated with DG and therefore can deploy."

Correct pattern:

"The agent used DG for analysis, then the system-enforced deployment gate passed live, then deploy executed."

Fast Onboarding Route (Repository)

Use this sequence when onboarding humans or LLM agents to both pathways.

  1. Run the tested one-command quickstart (both pathways + forced deny case):
scripts/bootstrap/skill_pathways_quickstart.sh configs/presets/quickstart-dev.toml
  2. Read llm_native_playbook.md and this guide.
  3. Install skills:
scripts/skills/install_local.sh
  4. Run the end-to-end onboarding loop:
python3 examples/frameworks/openai_agents_live_loop.py \
  --fixture-dir examples/agentic/onboarding/basic
  5. Run the deterministic correctness matrix:
python3 scripts/skills/eval_runner.py \
  --mode deterministic \
  --trials 1 \
  --cases all \
  --out-dir .tmp/skills/eval-deterministic-local
  6. Run the onboarding correctness checker:
python3 scripts/onboarding/provider_onboarding_check.py \
  --cases all \
  --out-dir .tmp/onboarding/local

Definition Of Done (No-Mistake Minimum)

Before declaring adoption complete:

  1. Mutating skills are wrapped by Path A boundary logic.
  2. Gate specs are versioned and policy-owned (not ad hoc prompt text).
  3. LLM-facing instructions include explicit hard-stop rules.
  4. Deterministic skill eval reports pass for required cases.
  5. Onboarding correctness report marks all cases complete.
  6. Runpack verification passes in live boundary workflows.
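
The checklist above could be tracked as a simple completion map. The flag names are illustrative; in practice each flag would be derived from CI output, the eval reports, and runpack verification results:

```python
# Hypothetical adoption-status map mirroring the definition of done.
definition_of_done = {
    "mutating_skills_wrapped": False,       # Path A boundary in place
    "gate_specs_versioned": False,          # policy-owned, not prompt text
    "hard_stop_rules_documented": False,
    "deterministic_eval_passed": False,
    "onboarding_cases_complete": False,
    "runpack_verification_passed": False,
}

def adoption_complete(flags: dict) -> bool:
    # Adoption is complete only when every item is satisfied.
    return all(flags.values())
```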

Cross-References