Welcome back, vvadmin
Private RAG workspace for grounded Q&A — explore every Vigilant Voices service and product below, or open the Capability Lab from the sidebar.
0
Documents Indexed
Ready for upload
0
Data Records Parsed
Awaiting data
0
Queries Answered
Start chatting
15+
Models in Mesh
All online
Explore services & products

Open any tile for a guided scenario in the Capability Lab — run the demo, then use Ask Your Data with your indexed corpus.

Quick Start

Ingest files, then query through the mesh — pair with Capability Lab scenarios for policy, retrieval, and compliance storylines.

1
Upload your documents
Drag & drop PDF, Excel, Word, CSV, or image files into the Upload & Parse panel. Vigilant Voices will extract and chunk the content.
2
Data gets parsed & indexed
The platform extracts structured data, generates semantic embeddings, and stores everything in your private vector database — no data leaves your environment.
3
Ask questions against your data
Use the "Ask Your Data" panel to query the AI using natural language. The mesh retrieves the most relevant chunks from your indexed data and grounds the response in your facts.
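The three steps above can be sketched end to end in a few lines. This is a toy illustration, not the platform's actual pipeline: the hashed bag-of-words "embedding" stands in for a real semantic model, and every name in it is hypothetical.

```python
import hashlib
import math

def embed(text, dim=256):
    """Toy embedding: hashed bag-of-words vector (stand-in for a real model)."""
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip(".,:?!")
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# 1. "Upload": raw document text.
doc = "Retention policy: security logs are kept for 90 days. Backups run nightly."
# 2. Parse & index: chunk the text and embed each chunk.
chunks = [s.strip() for s in doc.split(".") if s.strip()]
index = [(c, embed(c)) for c in chunks]
# 3. Ask: embed the question and retrieve the closest chunk.
question = "How long are security logs kept?"
q_vec = embed(question)
best = max(index, key=lambda item: cosine(q_vec, item[1]))
print(best[0])  # the most relevant chunk grounds the answer
```

A real deployment would swap `embed` for a semantic model and the list scan for a vector store, but the retrieve-by-similarity shape is the same.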
Activity Feed

Recent platform events

Demo environment initialized and ready
Just now
Model mesh connected — 15 models available
Just now
Private vector database online — awaiting first upload
Just now
Upload & Parse
Drop your files below. This demo simulates extract/chunk/index in the browser (no real PDF/DOC parsing). Search vectors are stored per session in a small JSON index on the server — you do not need MySQL/Postgres for the demo to work.
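The per-session JSON index described above could be as simple as one file per session. The file layout, field names, and `session_id` below are assumptions for illustration, not the demo's actual schema:

```python
import json
import os
import tempfile

def save_index(session_id, records, root=None):
    """Persist one session's parsed chunks as a small JSON file —
    no MySQL/Postgres needed for a demo of this size."""
    root = root or tempfile.gettempdir()
    path = os.path.join(root, f"vv_index_{session_id}.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"session": session_id, "records": records}, f)
    return path

def load_index(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

path = save_index("demo", [{"file": "policy.pdf", "chunk": "Logs kept 90 days."}])
data = load_index(path)
print(len(data["records"]))  # → 1
```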
Drop Files to Index

Supported: documents, images, code, plus Terraform (.tf), Dockerfile, Compose, env/config, and repo-style files.

Drag & drop files here

or click anywhere in this area to browse

PDF TF Dockerfile Compose XLSX DOCX CSV PNG JPG

Any file type allowed (repos, infra, configs). Extensionless Dockerfile is supported.

Upload diagnostics Idle
Accepted: 0 · Rejected: 0
Guidance: uploads are capped by the smaller of the demo limit and the server PHP limits. This host is currently enforcing about 16 MiB per file and 24 MiB per request.
Storage: …
Last API response: none yet.
Infra Visualizer Docker/Terraform
Upload Terraform or Docker files to generate an infra flow preview.
GitHub Repo Ingest (Public)
Paste a public GitHub repo URL to import repository files and index them into this demo workspace.
All repository file extensions are accepted for demo indexing.
Parse Settings

Configure extraction behaviour

Chunk size
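The chunk-size setting above controls how documents are split before embedding. A minimal fixed-size chunker with overlap, approximating tokens as whitespace-separated words (the real platform counts tokens, and these parameter names are illustrative), might look like:

```python
def chunk_words(text, chunk_size=512, overlap=64):
    """Split text into fixed-size word windows with overlap, so context
    spanning a boundary still appears whole in at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(words), 1), step):
        window = words[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break
    return chunks

sample = " ".join(f"w{i}" for i in range(1000))
parts = chunk_words(sample, chunk_size=512, overlap=64)
print(len(parts))  # → 3
```

Larger chunks keep more context per retrieval hit; smaller chunks retrieve more precisely. The overlap is what keeps a sentence that straddles a boundary answerable.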
Index Status

Private vector database

Documents 0
Chunks indexed 0
Vector docs (backend) 0
Vector chunks (backend) 0
Last backend ingest
Storage: Checking…
Status Online
All indexed data is stored exclusively within your private environment. No data is shared with external model providers.
Parsed Database
All extracted records from your uploaded files — searchable, sortable, and queryable by the AI mesh.
Indexed Records

Records extracted and stored from your documents

# · File Name · Type · Extracted Field · Value / Content Preview · Chunks Indexed · Actions
No records yet — upload files in the Upload & Parse panel to get started.
Data Points
Live metrics extracted and tracked from your indexed documents — automatically updated as new data is parsed.
Total Documents
0
Files indexed in mesh
Chunks Indexed
0
512-token semantic chunks
AI Queries
0
Questions answered this session
Avg. Retrieval
~18ms
Vector search latency
Document Type Breakdown

Distribution of indexed file types

Upload files to see breakdown
Model Routing Activity

Which models handled your queries

Ask questions to see routing data
RAG Pipeline Health

Status of each stage in the retrieval-augmented generation pipeline

Document Parser
Operational
Embedding Engine
Operational
Vector Store
Online
Model Mesh Router
15 Models Active
Ask Your Data
Query your indexed documents using natural language. The AI mesh retrieves the most relevant content from your private data and grounds its response in your facts — not generic training data.
Data Q&A

Grounded responses from your indexed documents

Route via mesh to:
Vector DB: 0 docs
0 chunks indexed
RAG: Retrieval-Augmented
VV
Vigilant Voices AI Mesh is ready.

I'm connected to your private vector database. Upload documents using Upload & Parse, then stress-test retrieval and governance storylines from the Capability Lab.

Until you index files, ask about the mesh, RAG, or how each product scenario maps to your SOC and compliance workflows.
Just now · Auto (Mesh)
Try asking:
How This Works

RAG pipeline — your data, your AI

1
Your question is converted into a semantic vector embedding.
2
The vector store finds the most semantically similar chunks from your indexed documents.
3
Those chunks are injected as context into the prompt sent to the selected model.
4
The AI generates a response grounded in your data — not hallucinated from generic training data.
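Step 3 above — injecting retrieved chunks as context — can be sketched as a simple prompt builder. The template wording and names here are illustrative, not the mesh's actual prompt format:

```python
def build_grounded_prompt(question, retrieved_chunks):
    """Assemble a RAG prompt: retrieved chunks become numbered context,
    and the model is instructed to answer only from that context."""
    context = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "How long are security logs kept?",
    ["Retention policy: security logs are kept for 90 days."],
)
print(prompt)
```

Numbering the chunks is what lets the response cite its sources, which is how the Sources Used panel can show which chunks were retrieved for the last query.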
Sources Used

Chunks retrieved for last query

Send a message to see which document chunks were retrieved.
Capability Lab
Pick a service or product — each scenario ties to how Vigilant Voices runs in production.
Demo Scenario

Choose a capability to load scenario details.

What this delivers
Demo Checks

Suggested validations for this capability