AI Advisory Plane
Three role-specific agents on one governed AI Advisory Plane
Vannarho RaaS runs the Config Doctor, Control Plane, and Explainable Results agents on a single governed architecture aligned to the run lifecycle, rather than three separate AI stacks.
1. Config Doctor Agent
Onboarding and pre-execution readiness
Customer experience: during onboarding and before launching an analytic run, the user gets a clear go/no-go view with specific issues, required client inputs, and recommended repair actions.
What it does before launch
- Determines if a submission is runnable
- Evaluates bind readiness, scope/config compatibility, missing inputs, waivers/approvals, unsupported seams
- Produces a case, findings/reports, client questions, repair plans, predictive hints, and an evidence bundle
- Finalises the gate outcome as RUNNABLE, RUNNABLE_WITH_APPROVED_WARNINGS, or a blocked outcome
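The gate decision described above can be sketched as a pure function over the readiness findings. This is a minimal illustration, not the product's actual logic; the blocked-outcome names, field names, and the waiver-matching rule are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class GateOutcome(Enum):
    # RUNNABLE and RUNNABLE_WITH_APPROVED_WARNINGS are named in the text;
    # the specific blocked outcomes below are illustrative assumptions.
    RUNNABLE = "RUNNABLE"
    RUNNABLE_WITH_APPROVED_WARNINGS = "RUNNABLE_WITH_APPROVED_WARNINGS"
    BLOCKED_MISSING_INPUTS = "BLOCKED_MISSING_INPUTS"
    BLOCKED_UNAPPROVED_WARNINGS = "BLOCKED_UNAPPROVED_WARNINGS"

@dataclass
class ReadinessCase:
    missing_inputs: list = field(default_factory=list)   # required client inputs not yet provided
    warnings: list = field(default_factory=list)          # findings that need a waiver/approval
    approved_waivers: set = field(default_factory=set)    # warnings already approved

def finalise_gate(case: ReadinessCase) -> GateOutcome:
    """Collapse readiness findings into a single gate outcome."""
    if case.missing_inputs:
        return GateOutcome.BLOCKED_MISSING_INPUTS
    if any(w not in case.approved_waivers for w in case.warnings):
        return GateOutcome.BLOCKED_UNAPPROVED_WARNINGS
    if case.warnings:
        return GateOutcome.RUNNABLE_WITH_APPROVED_WARNINGS
    return GateOutcome.RUNNABLE
```

The key design point is that the outcome is derived deterministically from typed findings, so the agent can explain a block by pointing at the exact finding that caused it.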
Governance boundary
- Advisory-only for governance decisions; can draft fixes but cannot self-approve
- No run request is allowed unless Config Doctor is FINALISED with launch_allowed=true
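The launch precondition above amounts to a simple conjunctive guard. A minimal sketch, assuming a verdict record with a status string and the launch_allowed flag (the record type and field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigDoctorVerdict:
    status: str           # e.g. "DRAFT", "FINALISED"
    launch_allowed: bool  # the flag named in the governance boundary

def may_submit_run(verdict: ConfigDoctorVerdict) -> bool:
    # Both conditions must hold: the verdict is FINALISED and launch is allowed.
    # Drafted fixes never flip this flag; only the governed approval path does.
    return verdict.status == "FINALISED" and verdict.launch_allowed
```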
Retrieval defaults: elaborated embeddings, known good/bad configs, and hashed TF-IDF retrieval in core Config Doctor service logic.
Advisor CLI: sentence-transformers with --embedding-model all-MiniLM-L6-v2.
LLM task default: gemini.
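Hashed TF-IDF retrieval (the hashing trick) maps tokens into a fixed bucket space so the index stays bounded regardless of vocabulary. A self-contained sketch of the idea, not the product's implementation; the bucket count, tokenisation, and corpus are illustrative assumptions:

```python
import math
import zlib
from collections import Counter

N_BUCKETS = 2 ** 16  # assumed fixed bucket space for the hashing trick

def hashed_counts(text: str) -> Counter:
    # Stable hash of each token into a bucket; collisions are accepted by design.
    return Counter(zlib.crc32(tok.encode()) % N_BUCKETS for tok in text.lower().split())

def build_index(docs):
    counts = [hashed_counts(d) for d in docs]
    df = Counter(b for c in counts for b in c)  # document frequency per bucket
    n = len(docs)
    idf = {b: math.log((1 + n) / (1 + df[b])) + 1 for b in df}  # smoothed idf
    vecs = [{b: tf * idf[b] for b, tf in c.items()} for c in counts]
    return vecs, idf

def cosine(a, b):
    num = sum(w * b.get(k, 0.0) for k, w in a.items())
    den = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return num / den if den else 0.0

def retrieve(query, docs, vecs, idf, k=1):
    # Weight query buckets by the index idf; unseen buckets score zero.
    q = {b: tf * idf.get(b, 0.0) for b, tf in hashed_counts(query).items()}
    ranked = sorted(zip(docs, vecs), key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]
```

This is why known good/bad configs can be matched cheaply at gate time: similarity is a sparse dot product over hashed buckets, with no learned model in the loop.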
2. Control Plane Agent
Run health and operational interpretation
Customer experience: during incidents or delays, users get a plain-language explanation of what happened, where it failed, and the safest next action.
Scope and interpretation role
- Covers execution and blocked-serving phases (Run Health Analyst)
- Interprets dispatch, job, load, alert, and serving lifecycle state
- Drafts safe operational next steps and support actions
Authority and limits
- Uses typed control-plane and evidence records as authority
- Read-plus-draft only
- No bypass of approvals or typed workflows
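The read-plus-draft boundary can be illustrated as a function that turns a typed control-plane record into an explanation and a drafted next step, without ever mutating state. The record shape and the playbook entries below are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlPlaneRecord:
    phase: str    # e.g. "dispatch", "job", "load", "alert", "serving"
    status: str   # e.g. "FAILED", "BLOCKED", "DELAYED"
    detail: str   # evidence excerpt attached to the record

# Illustrative playbook; real mappings would be governed content.
PLAYBOOK = {
    ("job", "FAILED"): "Draft: request re-dispatch after reviewing the job evidence record.",
    ("serving", "BLOCKED"): "Draft: raise a support action; do not bypass the serving approval.",
}

def interpret(record: ControlPlaneRecord) -> dict:
    """Read-plus-draft: explain the state and propose a safe next step.
    Returns text only; executing the step stays in the typed workflow."""
    explanation = (
        f"Run {record.status.lower()} during the {record.phase} phase: {record.detail}"
    )
    draft = PLAYBOOK.get(
        (record.phase, record.status),
        "Draft: escalate to support with the typed record attached.",
    )
    return {"explanation": explanation, "drafted_action": draft}
```

Because the agent's authority is the typed record itself, every explanation is traceable to evidence and no drafted action can skip an approval.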
3. Explainable Results Agent
Post-execution result interpretation
Customer experience: after results are loaded, users can ask what changed and why, and receive cited, regulator-safe explanations linked to evidence.
Signals and narrative sources
- Uses governed BigQuery facts, serving/comparison views, and evidence references
- Uses BigQuery ML for deterministic signals
- Uses Vertex AI for bounded narrative and Q&A
Deterministic truth remains fixed
- Explains, compares, and drafts commentary
- Does not alter deterministic result truth
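The split between deterministic signals and bounded narrative can be sketched as follows: a comparison function computes what changed from the governed facts, and only those signals are handed to the narrative step. The metric names and threshold are illustrative assumptions, not product values:

```python
def compare_results(prior: dict, current: dict, threshold: float = 0.05) -> list:
    """Deterministic what-changed signals over governed result facts.
    Narrative generation is a separate, bounded step that only sees these signals."""
    signals = []
    for metric in sorted(set(prior) | set(current)):
        before, after = prior.get(metric), current.get(metric)
        if before is None or after is None:
            signals.append({"metric": metric,
                            "change": "added" if before is None else "removed"})
            continue
        rel = (after - before) / before if before else float("inf")
        if abs(rel) >= threshold:
            signals.append({"metric": metric,
                            "change": f"{rel:+.1%}",
                            "evidence": f"prior={before}, current={after}"})
    return signals
```

Because the signals are computed before any model sees them, the narrative can cite them but never change them: result truth stays deterministic.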
Governed AI agents, deterministic risk truth
The advisory plane can diagnose, interpret, and explain. Deterministic pricing, approvals, and publication remain in governed workflows.