Open-source AI explanations without adding weight to the Vestige Index home.
UTXO AI is the explanation bridge for UTXO Guard. The extension tries local open-source models first and calls a separate Cloudflare Worker only when a user asks for an explanation. The Vestige Index home stays static, lightweight, and free of model weights.
The hosted bridge can run Cloudflare Workers AI with Llama when the binding is available. If quota limits, regional restrictions, or missing model access make the bridge unavailable, UTXO Guard still returns a safe read-only explanation.
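The local-first, bridge-second, fallback-last behaviour described above can be sketched as a simple tier chain. This is an illustrative sketch under stated assumptions, not the extension's actual code: the function names, the `Explainer` type, and the fallback text are all invented here.

```typescript
// Illustrative sketch of the three-tier explanation chain.
// Names and the fallback text are assumptions, not UTXO Guard's real API.

type Explainer = (prompt: string) => Promise<string>;

const SAFE_FALLBACK =
  "Read-only fallback: no local model or hosted bridge was reachable.";

// Try each tier in order; return the first success, else the static fallback.
async function explain(prompt: string, tiers: Explainer[]): Promise<string> {
  for (const tier of tiers) {
    try {
      return await tier(prompt);
    } catch {
      // Tier unreachable, over quota, or model access denied: try the next one.
    }
  }
  return SAFE_FALLBACK;
}

// Demo with stubbed tiers: local model down, Worker bridge up.
const localTier: Explainer = async () => {
  throw new Error("ECONNREFUSED");
};
const bridgeTier: Explainer = async () => "Explained by the hosted bridge.";

explain("What does this script do?", [localTier, bridgeTier]).then((answer) =>
  console.log(answer)
);
```

Because each tier only has to throw to hand control to the next one, adding or removing a runtime is a one-line change to the tier list.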
Cloudflare Workers AI
Hosted Llama bridge for users who do not want to install a local model.
Ollama
Recommended local runtime. Install, pull llama3.1, and UTXO Guard will detect it automatically.
LM Studio
Desktop runtime for local models with an OpenAI-compatible local server.
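Any OpenAI-compatible local server, such as the one LM Studio exposes, accepts the same chat-completion request shape. The sketch below builds that body; the system prompt, port, and model name are illustrative assumptions, not values taken from UTXO Guard.

```typescript
// Sketch of a request body for an OpenAI-compatible local server.
// The system prompt, model name, and port below are assumptions.

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
  stream: boolean;
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [
      { role: "system", content: "Explain Bitcoin scripts in plain language." },
      { role: "user", content: prompt },
    ],
    stream: false,
  };
}

// The body would be POSTed to the server's /v1/chat/completions endpoint, e.g.:
// fetch("http://127.0.0.1:1234/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest("llama-3.1-8b", prompt)),
// });
```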
llama.cpp
Lightweight open-source inference runtime for advanced local deployments.
Recommended command path
Local Llama in three steps.
# 1. Install Ollama from https://ollama.com/download
# 2. Pull a local model
ollama pull llama3.1
# 3. Keep the local endpoint running
ollama serve

UTXO Guard checks 127.0.0.1:11434 first. If it cannot reach a local model, it calls the Worker bridge. If the bridge is unavailable, it returns a safe fallback explanation instead of failing silently.
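The local-first check can be sketched as a short probe. Only the address 127.0.0.1:11434 comes from the text; the helper name, the timeout value, and the injected fetcher are assumptions made so the logic can run without a live Ollama server.

```typescript
// Sketch of the local-first probe. Only 127.0.0.1:11434 comes from the text;
// the helper name, timeout, and injectable fetcher are assumptions.

type Fetcher = (
  url: string,
  init?: { signal?: AbortSignal }
) => Promise<{ ok: boolean }>;

// Returns true when something answers on Ollama's default API port.
// The fetcher is injected so the check can be exercised without a server.
async function localModelReachable(
  doFetch: Fetcher,
  timeoutMs = 500
): Promise<boolean> {
  try {
    const res = await doFetch("http://127.0.0.1:11434/", {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false; // connection refused or timed out: no local model running
  }
}

// Example: a stub fetcher standing in for a running Ollama instance.
const upStub: Fetcher = async () => ({ ok: true });
localModelReachable(upStub).then((up) => console.log(up ? "local" : "bridge"));
```

A short timeout matters here: the probe runs before every explanation request, so a hung connection must not stall the UI while the extension decides between the local model and the bridge.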
Guardrails
