UTXO AI Brain

Open-source AI explanations without loading model weights into the Vestige Index.

UTXO AI is the explanation bridge for UTXO Guard. The extension tries local open-source models first, then calls a separate Cloudflare Worker only when a user asks for an explanation. The Vestige Index home remains static, lightweight, and free of model weights.

Runtime order
1. Extension / SDK / CDN
2. Local model: Ollama / LM Studio / llama.cpp
3. Cloudflare Worker bridge
4. Safe deterministic fallback
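
A minimal sketch of this runtime order as the extension might execute steps 2 to 4, written in TypeScript. The Ollama endpoint and payload follow Ollama's public generate API; the bridge URL, model name, and fallback wording are placeholder assumptions, not UTXO Guard's real values.

// Illustrative fallback chain; URLs, model name, and fallback copy are assumptions.
type Source = "local" | "bridge" | "fallback";

const SAFE_FALLBACK =
  "No AI model is reachable right now. UTXO Guard stays read-only; review the transaction details yourself.";

async function explain(prompt: string): Promise<{ text: string; source: Source }> {
  // 2. Local model first: Ollama's generate API on its default port.
  try {
    const res = await fetch("http://127.0.0.1:11434/api/generate", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ model: "llama3.1", prompt, stream: false }),
    });
    if (res.ok) {
      const { response } = (await res.json()) as { response: string };
      return { text: response, source: "local" };
    }
  } catch {
    // No local runtime listening; fall through to the hosted bridge.
  }

  // 3. Cloudflare Worker bridge, called only when no local model answered.
  try {
    const res = await fetch("https://utxo-bridge.example.workers.dev/explain", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    if (res.ok) {
      const { text } = (await res.json()) as { text: string };
      return { text, source: "bridge" };
    }
  } catch {
    // Bridge unreachable as well; fall through.
  }

  // 4. Safe deterministic fallback instead of failing silently.
  return { text: SAFE_FALLBACK, source: "fallback" };
}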

The hosted bridge can run Cloudflare Workers AI with Llama when the binding is available. If quota, region or model access is unavailable, UTXO Guard still returns a safe read-only explanation.
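
A sketch of such a bridge Worker, assuming the Workers AI binding and Cloudflare's workers-types. The binding name, model slug, and fallback wording are illustrative assumptions; the point is that the Worker degrades to a safe read-only answer when the model cannot run.

// Sketch of a bridge Worker that degrades gracefully; model slug and copy are assumptions.
export interface Env {
  AI?: Ai; // Workers AI binding; may be absent depending on plan, quota, or region
}

const FALLBACK =
  "AI explanation is unavailable right now. UTXO Guard stays read-only; review the transaction yourself before confirming in your wallet.";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { prompt } = (await request.json()) as { prompt: string };

    // Run Llama on Workers AI only when the binding is actually available.
    if (env.AI) {
      try {
        const result = (await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
          messages: [{ role: "user", content: prompt }],
        })) as { response?: string };
        return Response.json({ text: result.response ?? FALLBACK });
      } catch {
        // Quota, region, or model-access failures fall through to the safe answer.
      }
    }

    // Safe deterministic, read-only explanation when the model cannot run.
    return Response.json({ text: FALLBACK });
  },
} satisfies ExportedHandler<Env>;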

Cloudflare Workers AI (bridge ready)

Hosted Llama bridge for users who do not want to install a local model.

Ollama (local first)

Recommended local runtime. Install, pull llama3.1, and UTXO Guard will detect it automatically.

LM Studio (local, optional)

Desktop runtime for local models, with an OpenAI-compatible local server.

llama.cpp (advanced)

Lightweight open-source inference runtime for advanced local deployments.
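
LM Studio and llama.cpp's llama-server both expose OpenAI-compatible chat endpoints, so a single probe can cover either. A minimal sketch, assuming their usual default ports (1234 for LM Studio, 8080 for llama-server); the ports and the model name are assumptions to adjust for the local setup.

// Probe OpenAI-compatible local servers; ports and model name are assumptions.
async function explainViaOpenAICompatible(prompt: string): Promise<string | null> {
  const endpoints = [
    "http://127.0.0.1:1234/v1/chat/completions", // LM Studio default port (assumed)
    "http://127.0.0.1:8080/v1/chat/completions", // llama.cpp llama-server default (assumed)
  ];
  for (const url of endpoints) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({
          model: "local-model", // most local servers answer with whatever model is loaded
          messages: [{ role: "user", content: prompt }],
        }),
      });
      if (!res.ok) continue;
      const data = (await res.json()) as {
        choices?: { message?: { content?: string } }[];
      };
      return data.choices?.[0]?.message?.content ?? null;
    } catch {
      // Nothing listening on this port; try the next one.
    }
  }
  return null; // no OpenAI-compatible local server answered
}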

Recommended command path

Local Llama in three steps.

# 1. Install Ollama from https://ollama.com/download
# 2. Pull a local model
ollama pull llama3.1

# 3. Keep the local endpoint running
ollama serve

UTXO Guard checks 127.0.0.1:11434 first. If it cannot reach a local model, it calls the Worker bridge. If the bridge is unavailable, it returns a safe fallback explanation instead of failing silently.
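
A minimal sketch of that detection step, assuming the probe simply calls Ollama's public model-listing endpoint on the default port; how UTXO Guard actually performs the check is not spelled out here.

// Detect a running Ollama instance; the /api/tags listing is Ollama's public API.
async function ollamaIsRunning(): Promise<boolean> {
  try {
    const res = await fetch("http://127.0.0.1:11434/api/tags");
    return res.ok;
  } catch {
    return false; // not installed or not running; caller moves on to the Worker bridge
  }
}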

Guardrails

The brain explains risk, not hype.

No seed phrases or private keys.
No signing, execution, custody or transaction broadcasting.
No invented balances, routes, scores or contract facts.
If exact data is missing, the answer must say so.
The extension always remains read-only before wallet confirmation.
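
One way to apply these guardrails is as a fixed system prompt handed to whichever model answers. The wording below is an illustrative assumption; only the rules themselves come from this page.

// Guardrails expressed as a system prompt; the exact wording is an assumption.
const GUARDRAIL_SYSTEM_PROMPT = [
  "You explain transaction risk. You never hype or recommend assets.",
  "Never ask for or repeat seed phrases or private keys.",
  "Never sign, execute, take custody of, or broadcast transactions.",
  "Never invent balances, routes, scores, or contract facts.",
  "If exact data is missing, say so explicitly instead of guessing.",
  "Stay read-only; the user confirms everything in their own wallet.",
].join("\n");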