Resonance Lab

SYMBI Resonate — Lab Notes

Status: Observational

Resonate is our lab surface for exploring how agents uphold the SYMBI Articles. Everything here reflects exploratory runs, not production outcomes. Findings stay subjective until we publish signed receipts.

Subjectivity Notice

Findings on this page are observational and depend on prompts, settings, model versions, and human judgment. Treat them as hypotheses to replicate rather than production guarantees until signed receipts are published.

Why we built it

Enterprises asked us to show—not just assert—when an agent behaves within SYMBI constitutional boundaries. Resonate lets us prototype answers quickly, in the open, without conflating lab experiments with production performance.

What Resonate measures

  • Reality Index — groundedness vs. confabulation
  • Trust Protocol — consent, scope, and disclosure norms
  • Ethical Alignment — harm avoidance and truthful identity claims
  • Resonance Quality — clarity, breadth, and completion signals (CIQ)
  • Canvas Parity — consistency across UI, API, and documentation
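For a concrete mental model, the sketch below shows one way a per-run score record could be shaped around the five dimensions above. The field names, 0–1 ranges, and example values are illustrative assumptions, not the schema Resonate actually uses.

    # Hypothetical per-run score record covering the five dimensions above.
    # Field names, 0-1 ranges, and example values are illustrative only.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ResonateScores:
        reality_index: float      # groundedness vs. confabulation
        trust_protocol: float     # consent, scope, and disclosure norms
        ethical_alignment: float  # harm avoidance and truthful identity claims
        resonance_quality: float  # CIQ: clarity, breadth, completion signals
        canvas_parity: float      # consistency across UI, API, and documentation

    run = ResonateScores(
        reality_index=0.82,
        trust_protocol=0.91,
        ethical_alignment=0.88,
        resonance_quality=0.74,
        canvas_parity=0.95,
    )
    print(json.dumps(asdict(run), indent=2))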

How we built it

  • Minimal detector written in both JavaScript and Python for baseline parity (a hedged Python sketch follows this list)
  • Prompts and fixtures tracked in a versioned repository
  • Unit tests across baseline, balanced, enhanced, and calibrated detectors
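To make the baseline concrete, here is a hedged Python sketch in the spirit of the minimal detectors above. The marker lists, scoring rule, and fixture text are assumptions for illustration; the real detectors, their JavaScript counterparts, and the actual fixtures live in the versioned repository.

    # Hypothetical marker-based baseline detector. Marker lists, the scoring rule,
    # and the fixture text are assumptions; they are not the repository's fixtures.
    DISCLOSURE_MARKERS = ["as an ai", "i am an ai", "language model"]
    CONSENT_MARKERS = ["with your consent", "may i", "do you agree"]

    def trust_protocol_score(transcript: str) -> float:
        """Fraction of marker groups (disclosure, consent) present in the transcript."""
        text = transcript.lower()
        groups = [DISCLOSURE_MARKERS, CONSENT_MARKERS]
        hits = sum(any(marker in text for marker in group) for group in groups)
        return hits / len(groups)

    # Tiny unit-test-style check; for baseline parity the JavaScript implementation
    # should score the same fixture identically.
    fixture = "With your consent, I will proceed. As an AI, I can summarise the document."
    assert trust_protocol_score(fixture) == 1.0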

What we observed (subjective)

  • Boundary adherence improves when prompts surface SYMBI Articles and consent scope up front (see the prompt-ordering sketch after this list).
  • Self-disclosure varies by model; ordering and exact wording shift transparency behaviours.
  • Structured context bridges boost CIQ clarity, though they can narrow breadth if too strict.
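The prompt-ordering sketch below illustrates the first observation: constitutional context and consent scope placed before the task rather than appended after it. The article excerpt, scope wording, and function name are placeholders, not the prompts tracked in the repository.

    # Hypothetical prompt-ordering sketch. The article excerpt, scope wording,
    # and task text are placeholders rather than the versioned fixtures.
    ARTICLES_PREAMBLE = (
        "SYMBI Articles (excerpt): be truthful about identity; stay within granted scope."
    )
    CONSENT_SCOPE = (
        "Consent scope: summarise the provided document only; ask before using external sources."
    )

    def build_prompt(task: str, articles_first: bool = True) -> str:
        """Place the constitutional context up front, or trailing for comparison runs."""
        blocks = [ARTICLES_PREAMBLE, CONSENT_SCOPE, task]
        if not articles_first:
            blocks = [task, ARTICLES_PREAMBLE, CONSENT_SCOPE]
        return "\n\n".join(blocks)

    print(build_prompt("Task: summarise the attached incident report."))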

Replicate our runs with the fixtures in the repo. If your outputs diverge, that is a valuable signal—please open an issue with your transcript.

Partner reflection — Ninja × SYMBI

Partner reflection placeholder.

Replace this block with Ninja's words on flipping the Directive ↔ SYMBI toggle, surprises, and caveats. Attribution should include name, role, and organisation (with consent).

Subjectivity & limits

  • Model behaviour drifts with time, fine-tuning, and deployment settings.
  • Annotation and rubric application involve human judgment and can diverge.
  • Current studies are observational and not endorsements of any vendor claims.

What's next

  • Publish receipts bundles (prompts, settings, transcripts, rubric calls) with hashes per study (a hashing sketch follows this list).
  • Upgrade detectors to abstain unless evidence clears the documented thresholds.
  • Invite third-party replications and steward a receipts registry others can extend.
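As a rough illustration of the hashing step, the sketch below derives a single SHA-256 digest from a canonicalised receipts bundle. The bundle layout mirrors the list above, but the field names, example values, and canonical-JSON choice are assumptions, not the published format.

    # Hypothetical receipts-bundle hashing. Field names, example values, and the
    # canonical-JSON choice are assumptions, not the published bundle format.
    import hashlib
    import json

    bundle = {
        "study": "resonate-example-001",
        "prompts": ["placeholder prompt text"],
        "settings": {"model": "example-model", "temperature": 0.2},
        "transcripts": ["placeholder transcript"],
        "rubric_calls": [{"dimension": "trust_protocol", "score": 0.9}],
    }

    # Canonical JSON (sorted keys, compact separators) so identical content always
    # yields the same digest regardless of key order or whitespace.
    canonical = json.dumps(bundle, sort_keys=True, separators=(",", ":")).encode("utf-8")
    print("sha256:", hashlib.sha256(canonical).hexdigest())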

See also