Shield Assistant
Technical White Paper
Shield NOYB is a browser-first AI chat application engineered around a single
architectural commitment: the server must never see the content of user queries or model
responses. The system achieves this through the mutual reinforcement of three design pillars.
First, client-side data sovereignty: all conversation history, memory embeddings, retrieval
logic, document parsing, persona management, and profile learning reside exclusively in the
user’s browser, backed by IndexedDB and an optional sqlite-vec WASM vector index. The
server-side database stores only user accounts and authentication tokens. Second, end-to-end
encrypted transport: when the user selects the secure_ehbp transport mode, all HTTP request
and response bodies between the browser and the inference enclave are encrypted using the
Encrypted HTTP Body Protocol (EHBP), which implements HPKE (RFC 9180) hybrid public key
encryption with X25519, HKDF-SHA256, and AES-256-GCM. The Phoenix backend proxy sees only
encrypted ciphertext, routing metadata, and HTTP headers. Third, verifiable trust: a
three-axis verification system lets the user confirm client bundle integrity (via IPFS content
addressing), proxy policy integrity (via Ed25519 signed or RISC Zero zero-knowledge receipts),
and enclave identity (via remote attestation). This document provides a comprehensive
technical analysis of each layer, the trust assumptions underlying each guarantee, the residual
risks that remain, and the controls implemented to mitigate them. It is intended to serve as a
standalone reference accessible from first principles, without requiring prior familiarity
with the codebase or its dependency ecosystem.
Large language model inference is fundamentally incompatible with encrypted input. A transformer model operates on tokenized plaintext: the attention mechanism computes dot-product similarities across token embedding vectors, and the feedforward layers apply learned weight matrices to those vectors. Every operation – tokenization, embedding lookup, self-attention, layer normalization, softmax sampling – requires access to the unencrypted semantic content of the input. The model cannot attend to ciphertext, cannot compute meaningful attention scores over encrypted token embeddings, and cannot produce coherent output from data it cannot read.
This is not a limitation of current implementations – it is a structural property of how neural language models work. Homomorphic encryption (HE) could theoretically permit computation on encrypted data, but the overhead for operations at the scale of LLM inference (billions of parameters, thousands of tokens, multiple attention heads per layer) remains orders of magnitude too costly for production use. Fully homomorphic LLM inference is an active research area, but no practical system exists today.
The consequence is stark: wherever LLM inference occurs, the user’s complete query and the model’s complete response exist in plaintext. In conventional architectures, inference runs on servers controlled by the provider. This gives the operator – along with any infrastructure compromise, insider threat, lawful intercept mechanism, or data pipeline (logging, telemetry, fine-tuning, debugging) – full access to every conversation. The attack surface is not a single access point but an accumulation of access vectors that are difficult to enumerate and impossible for the user to audit externally.
The architectural response is not to eliminate plaintext (which is impossible) but to isolate the environment where plaintext exists. Shield NOYB runs inference inside a hardware-protected enclave – a Trusted Execution Environment (TEE) whose memory is encrypted by the CPU and inaccessible to the host operating system, hypervisor, or server operator. Plaintext exists only inside the enclave boundary, and the enclave’s identity is cryptographically verified by the user’s browser before any data is transmitted.
Users have no mechanism to verify that the application code running in their browser is the audited version, that the server is following its stated privacy policy, or that the inference environment is what it claims to be. Privacy is asserted by the provider and accepted on faith. There is no cryptographic proof, no independent check, and no way for the user to detect a silent policy change, a compromised server, or a substituted inference endpoint.
Shield NOYB was designed to address both of these failure points:
Failure point 1 (plaintext at inference) is addressed by running the LLM inside a hardware-protected enclave. Plaintext necessarily exists during inference, but the enclave isolates it: even the server operator cannot observe the computation. The enclave’s identity is verified by the user’s browser via remote attestation before any data is sent.
Failure point 2 (unverifiable privacy) is addressed by a three-axis verification system. The user can independently verify: (a) that the client code matches a pinned IPFS reference, (b) that the proxy server is operating under the audited privacy policy via cryptographic receipts, and (c) that the inference enclave is running the attested code via remote attestation.
By moving all user data and intelligence to the browser, encrypting all communication end-to-end between the browser and the remotely attested inference enclave, and providing cryptographic verification of each trust boundary, the system reduces the server’s role to authentication and opaque request forwarding.
Zero server-side context storage. No conversation, message, embedding, or prompt data is ever written to server-side storage, logs, or caches.
End-to-end encryption with server bypass. The Phoenix backend must be unable to read the plaintext of any user query or model response, even if the operator is curious or the server is compromised.
Client-side intelligence. All RAG pipeline operations – chunking, embedding, retrieval, prompt assembly, document parsing, persona management, and profile learning – must execute entirely in the browser.
Verifiable trust. The user must be able to verify, on demand, the integrity of the client code, the proxy policy, and the inference enclave identity, without trusting any single party.
Graceful degradation. When secure transport, WASM vector indexes, or verification backends are unavailable, the system must degrade to a functional (if less private) mode with clear user notification.
Minimal attack surface. The server should contain the minimum code, state, and dependencies needed to fulfill its role as an authentication gateway and opaque proxy.
This document covers the complete architecture of Shield NOYB as implemented in the codebase at the time of writing (March 2026). It addresses system topology, data flows, encryption protocols, storage architecture, client-side intelligence pipelines, verification systems, authentication, defensive controls, internationalization, mobile support, and residual risks. It references specific source code paths throughout for verifiability.
The system consists of three principals:
Browser client – A single-page JavaScript application served by Phoenix, running entirely in the user’s browser after initial page load. This is where all user data resides, all intelligence operations execute, and all encryption/decryption occurs.
Phoenix backend – An Elixir/Phoenix 1.7.15 web server responsible for user authentication, proxy token issuance, verification receipt generation, and opaque request forwarding to the upstream inference provider. It uses SQLite via Ecto and runs behind the Bandit HTTP adapter.
Tinfoil inference enclave – A remotely attested execution environment operated by Tinfoil that runs the AI model and processes encrypted requests. The enclave’s identity is verifiable via remote attestation, and its HPKE public key is bound to the attested code measurements.
┌──────────────────────────────────────────────────────────────────────────┐
│ BROWSER CLIENT │
│ │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────────┐ ┌───────────┐ │
│ │ IndexedDB │ │ sqlite-vec │ │ Local Vault │ │ Document │ │
│ │ (persisted) │ │ WASM (mem) │ │ AES-256-GCM │ │ Parser │ │
│ └──────┬───────┘ └──────┬───────┘ └───────┬────────┘ └─────┬─────┘ │
│ │ │ │ │ │
│ ┌──────▼─────────────────▼───────────────────▼─────────────────▼─────┐ │
│ │ app.js (Orchestrator) │ │
│ │ - Prompt assembly - Persona system - Profile learning │ │
│ │ - RAG pipeline - Conversation UI - Data export │ │
│ └───────────────────────────┬────────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────────▼────────────────────────────────────────┐ │
│ │ tinfoil_secure_transport.js │ │
│ │ - EHBP encryption/decryption via SecureClient │ │
│ │ - Attestation verification - HPKE key exchange │ │
│ └───────────────────────────┬────────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────────▼────────────────────────────────────────┐ │
│ │ proxy_session.js + proxy_verification.js │ │
│ │ - Token lifecycle - Receipt verification │ │
│ │ - Authenticated fetch - Bundle CID checks │ │
│ └───────────────────────────┬────────────────────────────────────────┘ │
│ │ │
└──────────────────────────────┼──────────────────────────────────────────┘
│
│ HTTPS (TLS)
│ Body: EHBP ciphertext
│ Headers: proxy token, enclave URL, EHBP key
│
┌──────────▼──────────┐
│ PHOENIX BACKEND │
│ │
│ - Validate token │
│ - Check EHBP hdr │
│ - Inject API key │
│ - Forward as-is │
│ - Stream response │
│ - Issue receipts │
│ │
│ Body: opaque bytes │
│ Cannot decrypt │
└──────────┬──────────┘
│
│ HTTPS (TLS)
│ Body: EHBP ciphertext
│ Headers: API key, EHBP key
│
┌──────────▼──────────┐
│ TINFOIL ENCLAVE │
│ │
│ - EHBP decrypts │
│ - Processes query │
│ - Generates reply │
│ - EHBP encrypts │
│ - Streams back │
│ │
└──────────┬──────────┘
│
│ Encrypted response streams back
│ through Phoenix to browser
│
┌──────────▼──────────┐
│ BROWSER │
│ │
│ SecureClient │
│ decrypts response │
│ Tokens rendered │
│ Response saved to │
│ IndexedDB locally │
└─────────────────────┘
The complete data flow for a single chat interaction in secure_ehbp mode:

1. The user’s message and conversation state are persisted via memory_store.js (client-side only).
2. Text is chunked by chunkText() in llm_client.js: 800-character chunks with 120-character overlap.
3. Chunks are embedded by the enclave’s embedding model (nomic-embed-text) via the encrypted transport. Fallback: local deterministic hash embedding if remote embedding fails.
4. Relevant memories are retrieved. In sqlite-vec mode, this uses the WASM vec0 virtual table. In indexeddb mode, it performs a brute-force cosine similarity scan.
5. The assembled prompt is EHBP-encrypted and forwarded through the proxy, which sees only routing headers (Authorization, X-Tinfoil-Enclave-Url, Ehbp-Encapsulated-Key).

At no point in this pipeline does any plaintext user content leave the browser unencrypted
(in secure_ehbp mode) or exist in server-side storage.
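The chunking step (800-character chunks with 120-character overlap) can be sketched in a few lines. This is an illustrative reimplementation, not the actual chunkText() from llm_client.js; parameter names are assumptions.

```javascript
// Sketch of the overlapping-chunk logic: chunk i starts at
// i * (chunkSize - overlap) and extends chunkSize characters forward.
// Illustrative only -- the real chunkText() lives in llm_client.js.
function chunkText(text, chunkSize = 800, overlap = 120) {
  const chunks = [];
  const step = chunkSize - overlap; // 680 characters between chunk starts
  for (let i = 0; ; i++) {
    const start = i * step;
    if (start >= text.length) break;
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const chunks = chunkText('x'.repeat(2000));
console.log(chunks.length);    // 3 chunks: [0,800), [680,1480), [1360,2000)
console.log(chunks[1].length); // 800
```

The 120-character overlap ensures that a sentence falling on a chunk boundary still appears whole in at least one chunk, at the cost of embedding some text twice.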
| Component | Location | Language | Role |
|---|---|---|---|
| app.js | assets/js/private_assistant/app.js | JavaScript | Main UI controller, prompt assembly, persona system, profile learning, state management |
| tinfoil_secure_transport.js | assets/js/private_assistant/tinfoil_secure_transport.js | JavaScript | EHBP encrypted transport via Tinfoil SecureClient |
| llm_client.js | assets/js/private_assistant/llm_client.js | JavaScript | LLM API abstraction, legacy transport, text chunking, local embedding fallback |
| memory_store.js | assets/js/private_assistant/memory_store.js | JavaScript | IndexedDB and sqlite-vec storage for conversations, messages, and memory |
| local_vault.js | assets/js/private_assistant/local_vault.js | JavaScript | Optional AES-GCM at-rest encryption for stored records |
| proxy_session.js | assets/js/private_assistant/proxy_session.js | JavaScript | Proxy token lifecycle and authenticated fetch |
| document_parser.js | assets/js/private_assistant/document_parser.js | JavaScript | Browser-side document parsing (PDF/DOCX/XLSX/PPTX), no server upload |
| proxy_verification.js | assets/js/private_assistant/proxy_verification.js | JavaScript | Receipt verification, bundle verification, IPFS CID checks |
| risc0_zk_receipt_verifier.js | assets/js/private_assistant/risc0_zk_receipt_verifier.js | JavaScript | Bundled RISC Zero WASM runtime for local ZK receipt verification |
| i18n.js | assets/js/private_assistant/i18n.js | JavaScript | Client-side i18n with 4 locales (en, ja, id, zh) using data-i18n binding |
| TinfoilProxyController | lib/private_assistant_web/controllers/tinfoil_proxy_controller.ex | Elixir | Proxy route handlers for secure, debug, and attestation paths |
| ProxyVerificationController | lib/private_assistant_web/controllers/proxy_verification_controller.ex | Elixir | Receipt creation endpoint (POST /api/proxy/verify/session) |
| Proxy.Client | lib/private_assistant/proxy/client.ex | Elixir | HTTP forwarding, header management, EHBP enforcement, streaming |
| Proxy.Token | lib/private_assistant/proxy/token.ex | Elixir | Proxy token issuance and verification via Phoenix.Token |
| Proxy.Config | lib/private_assistant/proxy/config.ex | Elixir | Runtime configuration for all proxy-related settings |
| Proxy.Verification | lib/private_assistant/proxy/verification.ex | Elixir | Receipt orchestrator, dispatches to signed or ZK backend |
| Proxy.VerificationPayload | lib/private_assistant/proxy/verification_payload.ex | Elixir | Canonical receipt payload builder |
| Proxy.VerificationSigned | lib/private_assistant/proxy/verification_signed.ex | Elixir | Ed25519 signed receipt backend |
| Proxy.VerificationZk | lib/private_assistant/proxy/verification_zk.ex | Elixir | ZK receipt backend adapter (calls Rust sidecar) |
| Proxy.VerificationManifest | lib/private_assistant/proxy/verification_manifest.ex | Elixir | Manifest generation for client-side verification |
| RequireProxyToken | lib/private_assistant_web/plugs/require_proxy_token.ex | Elixir | Plug that gates proxy endpoints behind a valid bearer token |
| CacheBodyReader | lib/private_assistant_web/cache_body_reader.ex | Elixir | Caches raw request body for proxy forwarding |
| Proxy.ForwardingChallenge | lib/private_assistant/proxy/forwarding_challenge.ex | Elixir | Forwarding challenge orchestrator: constructs test request, runs it through the forwarding path, attaches proof to the session receipt |
| Proxy.ForwardingChallengeZk | lib/private_assistant/proxy/forwarding_challenge_zk.ex | Elixir | Forwarding challenge ZK adapter (calls Rust sidecar for challenge proof generation and verification) |
| Native.RuntimeNif | lib/private_assistant/native/runtime_nif.ex | Elixir | Fault-tolerant Elixir wrapper for the dormant Rust NIF scaffold used for verification-time challenge measurement |
| Proxy.NifChallengeSigned | lib/private_assistant/proxy/nif_challenge_signed.ex | Elixir | Signed fallback artifact for the NIF-measured verification challenge |
| Proxy.NifChallengeZk | lib/private_assistant/proxy/nif_challenge_zk.ex | Elixir | ZK adapter for the NIF-measured verification challenge |
| pa_zk_receipt_prover | native/pa_zk_receipt_prover/ | Rust | Sidecar for ZK proof generation (RISC Zero zkVM): session receipts and forwarding challenges |
| forwarding_challenge_guest | native/pa_zk_receipt_prover/methods/forwarding_challenge_guest/ | Rust | RISC Zero guest program for forwarding challenge proofs |
| pa_runtime_nif | native/pa_runtime_nif/ | Rust | Rustler NIF scaffold for in-process verification challenge measurement |
| nif_challenge_guest | native/pa_zk_receipt_prover/methods/nif_challenge_guest/ | Rust | RISC Zero guest program for the NIF-measured no-decrypt challenge proof |
| browser_verifier | native/pa_zk_receipt_prover/browser_verifier/ | Rust | WASM verifier source for client-side ZK receipt verification |
Local-first memory. No server-side context database. All user-generated content and derived data (embeddings, chunks, profiles, personas) reside in the browser.
Controlled capabilities. The server can reject or limit proxy use via token validation, transport mode restrictions, and API key gating, but it cannot inspect the data it proxies.
Explicit safety gates. Debug mode is hard-disabled in production at boot time. Transport mode degradation is visible to the user.
Fault tolerance. The client falls back gracefully when secure transport, sqlite-vec WASM, ZK verification backends, or remote embedding services are unavailable.
Minimal server footprint. The server contains only authentication tables, no content storage, and no state beyond the request lifecycle.
Verifiable boundaries. Each trust boundary (client, proxy, enclave) can be independently verified by the user through distinct mechanisms (IPFS CID, receipts, attestation).
| Principal | Trust level | Justification |
|---|---|---|
| User’s browser | Trusted by the user | The user controls their own device and browser. The browser is the only location where plaintext user data resides. |
| Phoenix backend operator | Trusted for auth policy; untrusted for content | The operator runs authentication and proxy routing. The system is designed so that even a curious or compromised operator cannot read query/response content when secure_ehbp is active. |
| Tinfoil enclave | Trusted for computation | The enclave processes plaintext queries after EHBP decryption. Trust is established via remote attestation. |
| Network observers | Untrusted | All traffic is protected by TLS at the transport layer and EHBP at the application layer. |
| ZK/Signed receipt backend | Trusted for policy attestation | The receipt backend attests to the proxy’s operating policy. Trust is established via cryptographic signatures (Ed25519) or zero-knowledge proofs (RISC Zero). |
Even with secure_ehbp active and functioning correctly, the Phoenix backend has visibility
into the following metadata:
- The request path (e.g., /v1/chat/completions vs. /v1/embeddings), revealing operation type.
- The X-Tinfoil-Enclave-Url header, which reveals the enclave identity but not content.
- The Ehbp-Encapsulated-Key header: the HPKE encapsulated key (64 hex characters for X25519). Required for decryption at the enclave but not useful to the proxy.

The system identifies three distinct trust boundaries that must each be independently verifiable for the user to have confidence in the privacy guarantee:
┌───────────────────────────────────────────┐
│ USER'S TRUST DECISION │
└───────────────────┬───────────────────────┘
│
┌───────────────────▼───────────────────────┐
│ THREE VERIFICATION AXES │
│ │
┌───────┴───────┐ ┌─────────┴─────────┐ ┌─────────┴──────────┐
│ AXIS 1: │ │ AXIS 2: │ │ AXIS 3: │
│ CLIENT │ │ PROXY │ │ ENCLAVE │
│ │ │ │ │ │
│ Question: │ │ Question: │ │ Question: │
│ Is the code │ │ Is the proxy │ │ Is the enclave │
│ I'm running │ │ operating under │ │ running the │
│ the audited │ │ the audited │ │ attested code? │
│ version? │ │ policy? │ │ │
│ │ │ │ │ │
│ Mechanism: │ │ Mechanism: │ │ Mechanism: │
│ IPFS CID │ │ Ed25519 signed │ │ Remote │
│ comparison │ │ receipt or │ │ attestation via │
│ of app.js + │ │ RISC Zero ZK │ │ Tinfoil │
│ app.css vs │ │ proof receipt │ │ SecureClient │
│ pinned │ │ │ │ │
│ bundle │ │ Trust anchor: │ │ Trust anchor: │
│ │ │ verification- │ │ GitHub code │
│ Trust anchor:│ │ manifest.json │ │ digest + runtime │
│ IPFS content │ │ (bundled with │ │ attestation │
│ address │ │ client) │ │ document │
└───────────────┘ └───────────────────┘ └────────────────────┘
Axis 1: Client integrity. The browser compares a SHA-256 hash of the locally served
app.js and app.css against the hash of the same files retrieved from the IPFS gateway
using the pinned CID. A match confirms that the served code is identical to the pinned
version. A mismatch may indicate server-side tampering.
Axis 2: Proxy policy integrity. The browser requests a verification receipt from the Phoenix backend, which attests that the proxy is operating under a specific audited policy (EHBP required, client authorization not forwarded, strict header allowlist, enclave URL resolved). The receipt is bound to the current session via a client-generated nonce and verified locally against a pinned manifest. Two receipt backends are supported: Ed25519 signed receipts and RISC Zero ZK proof receipts.
Axis 3: Enclave integrity. The Tinfoil SecureClient verifies the enclave’s remote attestation document before establishing the EHBP encryption channel. This includes verifying the code measurement against the expected digest from GitHub and verifying the runtime attestation from the enclave itself.
Composed trust state. The verification panel in the UI displays all three axes. The overall badge shows “Verified Private” only when all three axes are green. Partial states are displayed with specific messages:
| State | Meaning |
|---|---|
| Verified Private | All three axes verified |
| Check Shield | Enclave OK + trust enabled, but client/proxy not yet checked |
| Client Check Needed | Enclave and proxy OK, client CID mismatch or not checked |
| Proxy Check Needed | Enclave and client OK, proxy receipt missing or expired |
| Full Check Needed | Client and proxy both need verification |
| Unverified | No verification performed or enclave attestation failed |
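The badge logic in the table above can be expressed as a small pure function. The tri-state axis representation ('ok' / 'failed' / 'unchecked') and the function name are assumptions for illustration, not the app.js implementation.

```javascript
// Sketch mapping the three axis results to the badge states in the table
// above. Axis values 'ok' | 'failed' | 'unchecked' are an assumed encoding.
function composedBadge({ enclave, client, proxy, trustEnabled }) {
  if (enclave !== 'ok') return 'Unverified';               // attestation failed or absent
  if (client === 'ok' && proxy === 'ok') return 'Verified Private';
  if (client === 'unchecked' && proxy === 'unchecked') {
    return trustEnabled ? 'Check Shield' : 'Full Check Needed';
  }
  if (client !== 'ok' && proxy !== 'ok') return 'Full Check Needed';
  if (client !== 'ok') return 'Client Check Needed';
  return 'Proxy Check Needed';                             // receipt missing or expired
}

console.log(composedBadge({ enclave: 'ok', client: 'ok', proxy: 'ok' }));     // Verified Private
console.log(composedBadge({ enclave: 'failed', client: 'ok', proxy: 'ok' })); // Unverified
console.log(composedBadge({ enclave: 'ok', client: 'unchecked', proxy: 'ok' })); // Client Check Needed
```

Encoding the composition as a pure function makes the badge trivially testable: each row of the state table becomes one assertion.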
EHBP (Encrypted HTTP Body Protocol) encrypts HTTP message bodies end-to-end while leaving HTTP headers in the clear for routing. This allows encrypted payloads to transit proxies unchanged while maintaining all standard HTTP semantics (status codes, content negotiation, streaming).
The protocol uses HPKE (Hybrid Public Key Encryption, RFC 9180) with the following parameters:
| Parameter | Value | Specification |
|---|---|---|
| KEM | X25519_HKDF_SHA256 | DHKEM(X25519, HKDF-SHA256) |
| KDF | HKDF_SHA256 | HKDF-SHA256 |
| AEAD | AES_256_GCM | AES-256-GCM |
These parameters provide 128-bit classical security for key exchange and 256-bit key strength for symmetric encryption.
Before any data is encrypted, the browser’s SecureClient performs a four-step verification sequence:
Step 1: Fetch attestation bundle. The client requests the enclave’s attestation document
via GET /api/proxy/attestation. This request is proxied by Phoenix to the Tinfoil
attestation service at the URL configured in TINFOIL_ATTESTATION_URL (default:
https://atc.tinfoil.sh/attestation).
Step 2: Verify enclave identity. The SecureClient verifies the attestation document against the expected enclave measurements:
Step 3: Verify HPKE public key. The SecureClient fetches the enclave’s HPKE public key
from /.well-known/hpke-keys (via the proxy) and verifies it is bound to the attested
enclave. This binding ensures that the public key used for encryption belongs to the verified
enclave, not to an impostor.
Step 4: Client-side verification check. After client.ready() completes, the
application calls client.getVerificationDocument() and checks that securityVerified is
true. If any verification step failed, ensureClient() throws an error with a diagnostic
message identifying the specific failed step. No data is sent until verification passes.
For each request with a body (POST, PUT, PATCH):

- The browser encrypts the body under the enclave’s HPKE public key and sends the HPKE encapsulated key in the Ehbp-Encapsulated-Key request header (lowercase hex, 64 characters for X25519).
- The enclave returns its response nonce in the Ehbp-Response-Nonce response header (lowercase hex, 64 characters).
- The response decryption context is derived from concat(encapsulated_key, response_nonce) with the label "ehbp response", binding the response encryption to the originating request.
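The header-encoding side of this flow is simple hex serialization. A hedged sketch follows; the function names are hypothetical, and in the real system this is handled inside the Tinfoil SecureClient (tinfoil_secure_transport.js).

```javascript
// Hypothetical sketch of attaching the HPKE encapsulated key as the
// Ehbp-Encapsulated-Key header. The real logic lives in the Tinfoil
// SecureClient; only the header name and hex format come from the text.
function toHexLower(bytes) {
  return Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('');
}

function buildSecureHeaders(encapsulatedKey /* Uint8Array, 32 bytes for X25519 */) {
  const hex = toHexLower(encapsulatedKey);
  if (hex.length !== 64) throw new Error('unexpected encapsulated key length');
  return { 'Ehbp-Encapsulated-Key': hex };
}

const headers = buildSecureHeaders(new Uint8Array(32).fill(0xab));
console.log(headers['Ehbp-Encapsulated-Key'].length); // 64
```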
The Phoenix proxy receives the encrypted request, reads the full body into memory via the
CacheBodyReader, and forwards it to the enclave. During this transit:
- The request body (conn.assigns[:raw_body]) is EHBP ciphertext. It consists of length-prefixed AES-256-GCM encrypted chunks; it is not valid JSON and cannot be parsed or read by the server.
- The Ehbp-Encapsulated-Key header is forwarded to the enclave (it is not in the @hop_by_hop_headers reject list).
- The Ehbp-Response-Nonce header in the upstream response is forwarded back to the browser via put_upstream_headers/2.
- The proxy sets content-type: application/json on the upstream request. This describes the content type after decryption at the enclave (the enclave’s EHBP middleware handles decryption before the application layer sees the content-type).
- The response is streamed via Finch.stream/4 and forwarded via Plug.Conn.chunk/2. Each chunk is EHBP ciphertext.

To prevent unencrypted plaintext from being accidentally or maliciously routed through the secure path, the proxy enforces the presence of the EHBP encryption header:
# lib/private_assistant/proxy/client.ex
defp require_ehbp_header(_conn, mode, _body) when mode != :secure, do: :ok
defp require_ehbp_header(_conn, :secure, body) when byte_size(body) == 0, do: :ok

defp require_ehbp_header(conn, :secure, _body) do
  case get_header(conn, "ehbp-encapsulated-key") do
    nil ->
      {:error, :missing_ehbp_header}

    key ->
      if valid_ehbp_header?(key) do
        :ok
      else
        {:error, :invalid_ehbp_header}
      end
  end
end

defp valid_ehbp_header?(key) when is_binary(key) do
  String.match?(key, ~r/\A[0-9a-fA-F]{64,}\z/)
end
This check runs after reading the body but before forwarding. If a request to the
/api/proxy/secure/* path carries a body but lacks a valid Ehbp-Encapsulated-Key header,
the proxy returns 400 missing_ehbp_encapsulated_key or 400 invalid_ehbp_encapsulated_key
and does not forward the request.
This check blocks unencrypted request bodies from transiting the secure path: a misbehaving client or injected middleware cannot route plaintext through /api/proxy/secure/* without supplying a plausible HPKE encapsulated key.

Bodyless requests (GET, HEAD, DELETE, OPTIONS) and non-secure modes (:debug) are exempt from this check.
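The same shape check can be mirrored client-side before dispatch. The regex below is the one the proxy uses (~r/\A[0-9a-fA-F]{64,}\z/); the JavaScript helper itself is hypothetical, not from the codebase.

```javascript
// Client-side mirror of the proxy's valid_ehbp_header?/1 check. The regex
// matches the proxy's; the helper name is an assumption for illustration.
function validEhbpHeader(key) {
  return typeof key === 'string' && /^[0-9a-fA-F]{64,}$/.test(key);
}

console.log(validEhbpHeader('ab'.repeat(32))); // true  (64 lowercase hex chars)
console.log(validEhbpHeader('ab'.repeat(31))); // false (too short)
console.log(validEhbpHeader('zz'.repeat(32))); // false (not hex)
```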
All user-generated content and derived data is stored exclusively in the browser using two complementary backends:
IndexedDB (IndexedDbMemoryStore in memory_store.js):

- Database private_assistant_local_v1 with object stores conversations, messages, memories, and kv.
- conversations: titles, timestamps, metadata, optional vault encryption.
- messages: role, content, conversation association, timestamps, optional vault encryption.
- memories: text chunks, embedding vectors, conversation association, timestamps, optional vault encryption.
- kv: user settings (proxy mode, model selections, active conversation ID, personas, user profile).

sqlite-vec WASM (SqliteVecMemoryStore in memory_store.js):

- Uses the vec0 virtual table extension for vector similarity search.
- SqliteVecMemoryStore delegates all persistence operations to an underlying IndexedDbMemoryStore instance and adds a WASM-based vector index on top.
- The WASM module is loaded from cdn.jsdelivr.net/npm/sqlite-vec-wasm-demo@latest/sqlite3.mjs.

The backend is selected at runtime by the user via the Memory Backend dropdown in the UI.
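The delegation-plus-fallback relationship between the two backends can be sketched as follows. The two classes here are stand-ins (the real ones live in memory_store.js), and the WASM-availability flag is invented for the sketch.

```javascript
// Sketch of runtime backend selection with graceful degradation. The class
// bodies are stand-ins; only the class names come from memory_store.js.
class IndexedDbMemoryStore {
  constructor() { this.kind = 'indexeddb'; } // always-available persistence
}

class SqliteVecMemoryStore {
  constructor(base) {
    // Stand-in for "WASM module loaded from the CDN" (flag is invented).
    if (!globalThis.sqliteVecWasmLoaded) throw new Error('sqlite-vec WASM not loaded');
    this.kind = 'sqlite-vec';
    this.base = base; // persistence is delegated to the IndexedDB store
  }
}

function createMemoryStore(backend) {
  const base = new IndexedDbMemoryStore();
  if (backend === 'sqlite-vec') {
    try {
      return new SqliteVecMemoryStore(base); // WASM vec0 index on top
    } catch (_e) {
      // Graceful degradation: the brute-force IndexedDB scan still works.
    }
  }
  return base;
}

console.log(createMemoryStore('sqlite-vec').kind); // indexeddb (WASM unavailable here)
```

The key design point this illustrates: the vector index is an accelerator, never the source of truth, so losing it degrades query speed but never data.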
| Data type | Storage location | Server sees it? | Encrypted at rest? |
|---|---|---|---|
| Conversation metadata (title, timestamps) | IndexedDB conversations | No | Optional (vault) |
| Message content (role, text) | IndexedDB messages | No | Optional (vault) |
| Memory chunks (text fragments) | IndexedDB memories | No | Optional (vault) |
| Embedding vectors | IndexedDB memories | No | Optional (vault) |
| Vector search index | sqlite-vec WASM (in-memory) | No | No (volatile) |
| User personas (custom instructions) | IndexedDB kv (personas) | No | No |
| User profile (learned traits) | IndexedDB kv (user_profile) | No | No |
| Active persona selection | IndexedDB kv (active_persona_id) | No | No |
| Transport mode preference | localStorage | No | No |
| Memory backend preference | localStorage | No | No |
| Vault salt | localStorage | No | No (salt is not secret) |
| Profile learning toggle | localStorage (pa_profile_learning) | No | No |
| Exported conversations | Downloaded file (user’s device) | No | No |
The LocalVault class (local_vault.js) provides optional passphrase-based encryption for
stored records using the Web Crypto API:
Key derivation: the encryption key is derived from a user passphrase; the derivation salt is stored in localStorage (the salt is not secret).

Encryption: AES-256-GCM via the Web Crypto API.

Storage format: encrypted records are persisted as {enc: 1, iv: <base64>, blob: <base64>}; records written without the vault enabled use {enc: 0, data: <plaintext object>}.

Behavior:
Protection scope:
Users can export their conversations as a JSON file named shield_conversations_YYYY-MM-DD.json:

{
"exportedAt": "ISO-8601 timestamp",
"conversationCount": 42,
"conversations": [
{
"id": "conversation-uuid",
"title": "Conversation title",
"createdAt": "ISO-8601",
"updatedAt": "ISO-8601",
"messages": [
{
"id": "message-uuid",
"role": "user|assistant|system",
"content": "Message text",
"createdAt": "ISO-8601"
}
]
}
]
}
All intelligence operations – RAG, document parsing, persona management, and profile learning – execute entirely in the browser. The server is never involved in any of these operations beyond providing the encrypted transport channel for embedding requests.
The Retrieval-Augmented Generation pipeline runs entirely client-side, implemented across
app.js, llm_client.js, and memory_store.js:
┌───────────────────────────────────────────────────────────────────┐
│ CLIENT-SIDE RAG PIPELINE │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│ │ 1. CHUNK │───▶│ 2. EMBED │───▶│ 3. STORE │───▶│ 4. RETRIEVE│ │
│ │ │ │ │ │ │ │ │ │
│ │ chunkText│ │ Enclave │ │ IndexedDB│ │ Similarity │ │
│ │ 800 char │ │ endpoint │ │ + vec0 │ │ search │ │
│ │ 120 over │ │ (EHBP) │ │ (WASM) │ │ (top-K) │ │
│ │ │ │ fallback:│ │ │ │ │ │
│ │ │ │ local │ │ │ │ │ │
│ │ │ │ hash emb │ │ │ │ │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────┬─────┘ │
│ │ │
│ ┌──────────────────────────────────────────────────────▼──────┐ │
│ │ 5. PROMPT ASSEMBLY │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────────────────┐ │ │
│ │ │ System message: │ │ │
│ │ │ - Persona instruction (from active persona) │ │ │
│ │ │ - User profile (if profile learning enabled) │ │ │
│ │ │ - "Relevant local context:" + top-K chunks │ │ │
│ │ │ - Document text (if document attached) │ │ │
│ │ ├─────────────────────────────────────────────────────┤ │ │
│ │ │ Message history: │ │ │
│ │ │ - Last 20 messages of conversation │ │ │
│ │ ├─────────────────────────────────────────────────────┤ │ │
│ │ │ Current user message │ │ │
│ │ └─────────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────┬───────────────────────────┘ │
│ │ │
│ ▼ │
│ EHBP encrypt and send │
│ to enclave via proxy │
└──────────────────────────────────────────────────────────────────┘
chunkText() in llm_client.js splits text into overlapping chunks: chunk i starts at position = i * (chunkSize - overlap) and extends chunkSize characters forward.

Each chunk is sent to the enclave’s embedding endpoint for vector representation: the client issues POST /api/proxy/secure/v1/embeddings with the configured embedding model (default: nomic-embed-text). The request body is EHBP-encrypted, and the enclave returns an embedding vector.

When the user sends a new message, the client embeds the query and searches stored memories. Two retrieval backends are available:

- sqlite-vec: uses the vec0 virtual table for vector similarity search. Queries are expressed as SQL against the in-memory SQLite database. This provides indexed search with sub-linear query time for large memory stores.
- indexeddb: a brute-force cosine similarity scan over the memories object store, linear in the number of stored memories. This is the always-available fallback.
The top-K results (default K=6, configurable via DEFAULT_RETRIEVAL_TOP_K) are returned as
text chunks for prompt injection.
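The indexeddb brute-force fallback reduces to scoring every stored embedding against the query vector and keeping the top-K. A minimal sketch (helper names are mine, not from memory_store.js):

```javascript
// Minimal sketch of the brute-force cosine-similarity fallback used by the
// IndexedDB backend. Illustrative; the real scan lives in memory_store.js.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Score every stored memory and keep the top-K (default K = 6).
function topK(queryVec, memories, k = 6) {
  return memories
    .map((m) => ({ ...m, score: cosine(queryVec, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const memories = [
  { text: 'likes rust', embedding: [1, 0] },
  { text: 'likes cats', embedding: [0, 1] },
  { text: 'likes systems', embedding: [0.9, 0.1] },
];
console.log(topK([1, 0], memories, 2).map((m) => m.text));
// [ 'likes rust', 'likes systems' ]
```

The O(n) scan is acceptable for a personal memory store of a few thousand chunks; the vec0 index exists for when it is not.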
The final prompt sent to the enclave is assembled in app.js; the retrieved top-K chunks are injected into the system message under the heading "Relevant local context:\n<chunk1>\n<chunk2>\n...".
The document_parser.js module provides client-side document parsing for multiple file
formats. The server never receives file contents.
| Format | Parser | Loading strategy |
|---|---|---|
| .txt, .md, .csv, .html, .htm, .asciidoc | Native browser APIs | No external dependency |
| .pdf | pdfjs-dist v4.9.155 | Lazy-loaded from cdn.jsdelivr.net on first use |
| .docx | mammoth v1.8.0 | Lazy-loaded from cdn.jsdelivr.net on first use |
| .xlsx, .xls | xlsx v0.18.5 | Lazy-loaded from cdn.jsdelivr.net on first use |
| .pptx | Native ZIP reader + DOMParser | Browser built-in DecompressionStream + DOMParser for OOXML slide extraction |
Key properties:

- PPTX parsing uses the browser’s built-in DecompressionStream API to extract the ZIP archive and DOMParser to parse the OOXML slide XML, with no external library.
- All parsing happens in the browser; the server never receives file contents.

Shield NOYB supports configurable personas that control the AI’s behavior and communication style:
Built-in personas (5):
| Persona | Purpose |
|---|---|
| Default | Standard balanced assistant behavior |
| Verbose | Detailed, comprehensive responses |
| Concise | Brief, direct responses |
| Friendly | Warm, conversational tone |
| ICANNOTTELLALIE | Maximum truth-seeking mode: the model is instructed to prioritize accuracy over agreeability |
Custom personas:
Storage and selection:

- Custom personas are persisted via store.setSetting("personas", ...).
- The active selection is tracked by the active_persona_id setting in IndexedDB.

Implementation: Persona instructions are injected as the first system message in prompt assembly. The active persona’s instruction text becomes the foundational behavioral directive for the model.
Shield NOYB can learn about the user’s communication patterns and preferences over time, entirely client-side:
Extraction process: extracted trait categories are communication_style, recurring_topics, preferences, emotional_patterns, and context_notes.

Storage: the profile is persisted as the user_profile setting in the kv store. The record contains version, updatedAt, extractionCount, processedConversationIds (capped at 100 to bound storage), and traits (the extracted profile data). processedConversationIds tracks which conversations have already been analyzed to avoid re-processing.

User control: profile learning can be toggled via the pa_profile_learning flag in localStorage.

Privacy properties: extraction and storage happen entirely client-side; the profile never leaves the browser.
Between the three principals in the system, there exists a trust gap regarding the proxy's behavior. The user can verify the client bundle (via IPFS content addressing) and the enclave identity (via remote attestation), but neither axis confirms that the proxy itself is operating under the expected privacy policy. A compromised or misconfigured proxy could, in theory, log headers, inject additional data into forwarded requests, or silently disable EHBP enforcement. The proxy verification receipt system closes this gap.
Each receipt attests the following statement:
For this secure session, the proxy verification backend identified by
proxy_kernel_id attests that it is operating under the audited secure proxy policy identified by policy_hash, for the enclave endpoint identified by enclave_url_hash, and that this receipt is bound to the requesting browser session via session_nonce and user_binding.
This statement is binding but deliberately scoped. It proves the policy state of the proxy at receipt issuance time for the specific session. It does not prove the absence of additional code paths or side effects outside the verified proxy kernel.
The receipt envelope is stable across both receipt backends (signed_receipt and
zk_receipt):
{
"version": "1",
"proof_type": "signed_receipt | zk_receipt",
"receipt_id": "rcpt_<base64url-random>",
"policy_hash": "sha256:<hex>",
"client_policy_hash": "sha256:<hex>",
"proxy_kernel_id": "proxy-kernel-v1 | risc0:image:<hex>",
"session_nonce": "<base64url-client-generated>",
"user_binding": "sha256:<hex>",
"transport_mode": "secure_ehbp",
"enclave_url_hash": "sha256:<hex>",
"issued_at": "ISO-8601",
"expires_at": "ISO-8601",
"verification": {
"ehbp_required": true,
"client_authorization_forwarded": false,
"header_allowlist_policy": "strict_v1",
"enclave_policy": "resolved_enclave_url_v1"
},
"backend": {
"name": "phoenix_signed_receipt | pa_zk_receipt_prover",
"instance_id": "<hostname or configured ID>"
},
"signature": { "alg": "Ed25519", "key_id": "...", "sig": "..." } | null,
"proof": { "system": "risc0", "receipt": "...", "public_inputs": {...} } | null
}
Field definitions:
| Field | Description |
|---|---|
| version | Receipt schema version (currently "1") |
| proof_type | "signed_receipt" or "zk_receipt" |
| receipt_id | Server-generated opaque identifier |
| policy_hash | SHA-256 hash of the canonical secure proxy policy |
| client_policy_hash | Client-computed sha256(session_nonce + "\|" + canonical_json(expected_policy)). Binds the proof to the policy the client asked for. The zkVM guest recomputes and asserts equality, preventing a malicious server from substituting a different policy |
| proxy_kernel_id | Stable identifier for the verification backend. For signed: "proxy-kernel-v1". For ZK: "risc0:image:<image_id_hex>" |
| session_nonce | Client-generated random nonce (minimum 16 bytes, base64url-encoded) |
| user_binding | One-way binding: sha256(user_id \| proxy_token_iat \| session_nonce) |
| transport_mode | Must be "secure_ehbp" |
| enclave_url_hash | SHA-256 hash of the resolved enclave URL |
| issued_at | Receipt issuance timestamp (ISO-8601) |
| expires_at | Receipt expiry timestamp. Default TTL: 300 seconds (5 minutes) |
| verification | Human-readable attested policy properties |
| backend | Debug and audit metadata |
| signature | Present for signed receipts, null for ZK |
| proof | Present for ZK receipts, null for signed |
Both receipt backends operate on a single canonical payload, built by
Proxy.VerificationPayload (lib/private_assistant/proxy/verification_payload.ex):
%{
"version" => "1",
"policy_hash" => sha256_tag(canonical_json(policy)),
"client_policy_hash" => validated_client_policy_hash,
"proxy_kernel_id" => Config.proxy_verification_kernel_id(),
"session_nonce" => validated_nonce,
"user_binding" => sha256("user:<id>|token:<proxy_token>|nonce:<nonce>"),
"transport_mode" => "secure_ehbp",
"enclave_url_hash" => sha256_tag(enclave_url),
"issued_at" => DateTime.to_iso8601(now),
"expires_at" => DateTime.to_iso8601(now + ttl),
"verification" => %{
"ehbp_required" => true,
"client_authorization_forwarded" => false,
"header_allowlist_policy" => "strict_v1",
"enclave_policy" => "resolved_enclave_url_v1"
}
}
The canonical JSON serialization uses lexicographically sorted keys and UTF-8 encoding. The SHA-256 digest of the canonical JSON is used as the signing or proving message.
The signed receipt backend (Proxy.VerificationSigned) provides the baseline verification mechanism:
- The Ed25519 private seed is supplied via the PROXY_VERIFICATION_PRIVATE_SEED environment variable. The public key is derived from the seed at runtime and pinned in verification-manifest.json.
- Key derivation uses :crypto.generate_key(:eddsa, :ed25519, seed) to derive the key pair from the seed.
- Signing uses :crypto.sign(:eddsa, :none, canonical_json, [private_key, :ed25519]) with base64url encoding of the signature.
- The receipt TTL is configurable via PROXY_VERIFICATION_TTL_SECONDS.

This is a trust-on-key-distribution model. The client trusts the public key because it is shipped as part of the audited client bundle in verification-manifest.json.
The ZK receipt backend (Proxy.VerificationZk) provides a stronger verification mechanism
using zero-knowledge proofs:
Architecture:
- A Rust prover sidecar lives in native/pa_zk_receipt_prover/ and listens on port 8091.
- Phoenix calls POST /prove/session on the sidecar.
- The sidecar generates a RISC Zero proof and returns the proof artifact.
The guest program (native/pa_zk_receipt_prover/methods/guest/src/main.rs) runs inside the
RISC Zero zkVM. Every assertion below must pass; if any fails, proof generation aborts and
no valid receipt is produced.
Identity assertions:
- statement must equal the compile-time constant "session_policy_receipt_v1".
- verifier.system must be "risc0".
- verifier.kernel_id must match claim.public_inputs.proxy_kernel_id, cross-binding the verifier identity to the claim so a receipt cannot be transplanted between binaries.
- kernel_id must carry the risc0:image: prefix followed by exactly 64 hex digits.

Payload integrity:
- version must be "1".
- transport_mode must be "secure_ehbp" — the guest rejects any session that was not conducted over the encrypted EHBP channel.
- policy_hash is recomputed inside the guest from the canonical JSON of the verification policy and compared to the value in public_inputs. This prevents the host from substituting a different policy after hashing.
- client_policy_hash is recomputed inside the guest as sha256(session_nonce + "|" + canonical_json(verification_policy)) and compared to the value in public_inputs. This binds the proof to the exact policy the client requested. Since the hash includes the session nonce, it cannot be replayed from a different session. The browser independently recomputes and verifies this value after receiving the receipt.

Policy property assertions:
- ehbp_required must be true (encrypted transport is mandatory).
- client_authorization_forwarded must be false (the proxy must not forward user credentials to the enclave).
- enclave_policy must be "resolved_enclave_url_v1".
- header_allowlist_policy must be "strict_v1".

Binding and freshness:
- user_binding, enclave_url_hash, and client_policy_hash must be well-formed SHA-256 tagged hashes (sha256: prefix, 64 hex characters, 71 bytes total).
- session_nonce must decode from base64url to at least 16 bytes, preventing brute-force replay.
- issued_at and expires_at are parsed as strict ISO 8601 UTC timestamps (YYYY-MM-DDThh:mm:ssZ) with calendar validation. expires_at must be strictly after issued_at.
Commitment computation:
After all assertions pass, the guest computes a deterministic commitment chain:
policy_digest → public_inputs_digest → verifier_digest → claim_digest → proof_id.
Each digest is the SHA-256 of a canonical transcript string. The full chain, together with
the verified fields, is committed to the zkVM journal — the only output a verifier sees.
The signed and ZK receipt backends make fundamentally different trust assumptions:
Signed receipt (Ed25519): Trust-on-key-distribution. The client trusts that the holder of the Ed25519 private seed faithfully evaluated the policy before signing. If the key is compromised, or if the key holder modifies the signing code, the receipt is indistinguishable from a legitimate one. The trust anchor is a key that someone holds.
ZK receipt (RISC Zero): Kernel-identity proof. A valid proof can only be generated by executing the audited guest binary inside the zkVM. The proof is valid if and only if every assertion in the guest program passed for the committed inputs. No private key is involved in proof generation — the trust anchor is code that anyone can audit combined with mathematics that is deterministic. Forging a proof requires either breaking the cryptographic assumptions of the proof system or modifying the guest binary (which changes the image ID and is therefore detectable).
Practical consequence: The signed backend remains available as a graceful-degradation fallback when the ZK sidecar is unreachable (see fallback behavior below). When both are available, the system dispatches to ZK first because it provides a strictly stronger guarantee.
Artifact format: pa_zk_receipt_v1:<hex-encoded JSON>
The decoded artifact contains:
| Field | Description |
|---|---|
| artifact_version | "pa_zk_receipt_artifact_v1" |
| statement | "session_policy_receipt_v1" |
| proof_system | "risc0" |
| public_inputs | The canonical payload fields (subset) |
| policy | The verification policy object |
| prover_kind | "risc0_zkvm_v1" or "sha256_receipt_v1" |
| commitments | Binding commitments (see below) |
| zkvm_image_id | The RISC Zero image ID (hex) |
| zkvm_receipt | Base64url-encoded bincode receipt bytes |
| zkvm_journal | Base64url-encoded bincode journal bytes |
Commitment structure:
The artifact embeds a chain of deterministic commitments that bind the proof to the policy, public inputs, and verifier identity:
policy_digest = sha256(policy_transcript)
public_inputs_digest = sha256(public_inputs_transcript)
verifier_digest = sha256(verifier_transcript(system, kernel_id))
claim_digest = sha256(claim_transcript(policy_digest, public_inputs_digest))
proof_id = sha256(proof_id_transcript(claim_digest, verifier_digest, prover_kind))
Each transcript is a deterministic key-value string (e.g., "key=value\n") that ensures
canonical representation for hashing.
Fallback behavior: Phoenix dispatches to the ZK backend first when configured. If the sidecar is unavailable (connection refused, timeout, invalid response), it falls back to the signed receipt backend automatically:
# lib/private_assistant/proxy/verification.ex
case Config.proxy_verification_backend() do
:zk ->
case VerificationZk.create(payload) do
{:ok, receipt} -> {:ok, receipt}
{:error, :proxy_verification_unavailable} -> VerificationSigned.create(payload)
{:error, reason} -> {:error, reason}
end
:signed ->
VerificationSigned.create(payload)
end
The browser verifies receipts locally using proxy_verification.js and the bundled WASM
verifier:
┌─────────────────────────────────────────────────────────────────────┐
│ RECEIPT VERIFICATION FLOW │
│ │
│ ┌─────────────────┐ │
│ │ Browser generates│ │
│ │ session_nonce │ │
│ └────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌────────────────────┐ │
│ │ POST /api/proxy/ │ │
│ │ verify/session │ │
│ │ {session_nonce, │ │
│ │ transport_mode, │ │
│ │ client_policy_ │ │
│ │ hash, enclave_url}│ │
│ └────────┬───────────┘ │
│ │ │
│ ▼ │
│ ┌────────────────────┐ ┌──────────────────────┐ │
│ │ Phoenix builds │────▶│ Backend: signed or │ │
│ │ canonical payload │ │ zk (with fallback) │ │
│ └────────────────────┘ └──────────┬───────────┘ │
│ │ │
│ ┌─────────────────────────────▼─────────────────────┐ │
│ │ Receipt returned to browser │ │
│ └─────────────────────────────┬─────────────────────┘ │
│ │ │
│ ┌─────────────────────────────▼─────────────────────┐ │
│ │ COMMON CHECKS (both types) │ │
│ │ │ │
│ │ 1. Load verification-manifest.json │ │
│ │ 2. Validate structural fields │ │
│ │ 3. Check session_nonce matches client value │ │
│ │ 4. Check transport_mode == "secure_ehbp" │ │
│ │ 5. Check policy_hash in manifest allowlist │ │
│ │ 6. Check proxy_kernel_id in manifest allowlist │ │
│ │ 7. Check receipt is not expired │ │
│ │ 8. Check enclave_url_hash matches active enclave │ │
│ └──────────────────────┬────────────────────────────┘ │
│ │ │
│ ┌───────────────────┴───────────────────┐ │
│ │ │ │
│ ┌────────▼────────┐ ┌─────────▼──────────┐ │
│ │ signed_receipt │ │ zk_receipt │ │
│ │ │ │ │ │
│ │ Verify Ed25519 │ │ 1. Validate system │ │
│ │ signature via │ │ 2. Decode artifact │ │
│ │ Web Crypto API │ │ 3. Recompute │ │
│ │ against pinned │ │ commitments │ │
│ │ public key from │ │ 4. Check kernel │ │
│ │ manifest │ │ binding │ │
│ │ │ │ 5. Verify receipt │ │
│ │ │ │ via bundled │ │
│ │ │ │ WASM runtime │ │
│ └────────┬────────┘ └─────────┬──────────┘ │
│ │ │ │
│ └───────────────────┬───────────────────┘ │
│ │ │
│ ┌─────────────▼──────────────┐ │
│ │ Update verification panel │ │
│ │ Proxy axis: verified/failed │ │
│ └────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────────┘
Verification manifest (verification-manifest.json):
The manifest is shipped with the client bundle and defines the trust anchors for receipt verification:
{
"version": "1",
"proxy_verification": {
"accepted_policy_hashes": ["sha256:..."],
"accepted_proxy_kernel_ids": ["proxy-kernel-v1", "risc0:image:<hex>"],
"accepted_signature_keys": [
{
"key_id": "proxy-verify-key-2026-03",
"alg": "Ed25519",
"public_key": "<base64url public key>"
}
],
"accepted_zk_systems": ["risc0"],
"accepted_zk_verifiers": [
{
"system": "risc0",
"proxy_kernel_id": "risc0:image:<hex>",
"verifier_id": "risc0:image:<hex>"
}
]
}
}
Signed receipt verification:
1. Extract signature.key_id from the receipt and look it up in the manifest's accepted_signature_keys.
2. Import the pinned Ed25519 public key via the Web Crypto API (importKey with the Ed25519 algorithm).
3. Rebuild the canonical payload by stripping the server-added envelope fields (receipt_id, proof_type, backend, signature, proof).
4. Verify the Ed25519 signature over the canonical JSON with crypto.subtle.verify.

ZK receipt verification:
1. Validate proof.system against accepted_zk_systems.
2. Decode the pa_zk_receipt_v1: artifact from hex.
3. Check that zkvm_image_id matches the hex portion of proxy_kernel_id (after the risc0:image: prefix).

Client verifier call path (including forwarding and NIF challenge proofs):
- app.js installs the bundled verifier runtime at startup via installBundledRisc0ReceiptVerifier() and installGlobalRisc0ReceiptVerifier() from assets/js/private_assistant/risc0_zk_receipt_verifier.js.
- app.js calls verifyProxyReceipt({ receipt, sessionNonce, enclaveURL }).
- proxy_verification.js dispatches through verifyReceipt(...) to either verifySignedReceipt(...) or verifyZkReceipt(...).
- When present, receipt.forwarding_challenge is verified via verifyForwardingChallenge(...), and receipt.nif_challenge_attestation via verifyNifChallenge(...), which dispatches to verifySignedNifChallenge(...) or verifyZkNifChallenge(...).
- verifyForwardingChallenge(...) and verifyZkNifChallenge(...) call resolveZkReceiptCryptographicVerifier(), which uses the installed global verifier (__PRIVATE_ASSISTANT_ZK_RECEIPT_VERIFIER__) backed by risc0_zk_receipt_verifier.wasm.

This means the NIF challenge proof is already part of the Shield Server verification decision when present, even though the current UI does not yet expose it as a separate line item. A later UI revision can break this out into more granular proof rows without changing the underlying browser or WASM verification call path described above.
WASM verifier contract:
The bundled risc0_zk_receipt_verifier.wasm (built from
native/pa_zk_receipt_prover/browser_verifier/src/lib.rs) exports:
| Export | Description |
|---|---|
| pa_alloc(size) | Allocate size bytes in WASM memory, returns pointer |
| pa_free(ptr, size) | Free previously allocated memory |
| pa_verify_receipt(img_ptr, img_len, rcpt_ptr, rcpt_len, jrnl_ptr, jrnl_len) | Verify receipt; returns 1 for valid, 0 for invalid, negative for error |
| pa_last_error_len() | Length of last error message string |
| pa_last_error_ptr() | Pointer to last error message string |
The verifier:
Auto-check on new secure conversation:
When a new secure conversation is started and the SecureClient has successfully initialized:
- The browser generates a fresh session_nonce (16+ random bytes, base64url-encoded).
- It sends POST /api/proxy/verify/session with the nonce, transport mode, and enclave URL.
- The returned receipt is verified locally and treated as valid until expires_at.
- This runs once per new secure conversation, not per request.
Manual “Verify Now”:
The user can trigger a re-check at any time:
- Each manual check generates a fresh session_nonce, then requests and verifies a new receipt.

Verification panel state:
The UI displays three trust axes:
Client ● [Verified / Check Needed]
Proxy ● [Verified / Check Needed / Expired]
Enclave ● [Verified / Not Verified]
The overall shield icon reflects the composed state. “Verified Private” (green) requires all three axes to be verified and unexpired.
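The composition rule in the last sentence can be sketched as follows (state names are illustrative; the real client's UI state machine may differ):

```javascript
// "Verified Private" requires every axis to be verified and unexpired;
// any other combination drops the shield to a check-needed state.
function composeShieldState({ client, proxy, enclave }) {
  const allVerified = [client, proxy, enclave].every((axis) => axis === 'verified');
  return allVerified ? 'verified_private' : 'check_needed';
}
```

The important property is that the composed state is monotone: a single stale or failed axis is enough to withdraw the green "Verified Private" indicator.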
The session receipt (Section 8.3–8.8) proves that the proxy was configured with the correct policy at receipt issuance time. It is a point-in-time policy attestation. The forwarding challenge extends this by proving how the proxy actually handles a concrete request — header rewriting, body integrity, and routing — using a second, independent ZK proof that is attached to the session receipt.
The session receipt attests policy configuration. It does not cover runtime behavior:
the proxy could, in theory, pass the configuration check but then violate the policy during
actual request forwarding (e.g., forwarding the user’s Authorization header to the enclave,
injecting extra headers, modifying the request body, or routing to a different endpoint). The
forwarding challenge closes this gap by running a controlled test request through the same
forwarding code path that handles real user traffic, then generating a ZK proof that the
request was handled according to policy.
The forwarding challenge executes during receipt creation and is automatically attached to the session receipt when enabled:
┌─────────────────────────────────────────────────────────────────────────┐
│ FORWARDING CHALLENGE FLOW │
│ │
│ ┌─────────────────┐ │
│ │ Session receipt │ │
│ │ creation starts │ │
│ └────────┬─────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 1. CONSTRUCT CHALLENGE PAYLOAD │ │
│ │ - Generate random challenge_id, opaque body, fake EHBP key │ │
│ │ - Build test request with known headers: │ │
│ │ authorization, cookie, content-type, │ │
│ │ ehbp-encapsulated-key, x-tinfoil-enclave-url │ │
│ │ - Bind to session via session_nonce and user_binding │ │
│ └────────┬────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 2. RUN THROUGH REAL FORWARDING PATH (Client.run_forwarding_ │ │
│ │ challenge) │ │
│ │ - Request processed by same prepare_request / header filter │ │
│ │ - Hop-by-hop headers stripped (authorization, cookie, etc.) │ │
│ │ - Content-type forced to application/json │ │
│ │ - Server API key injected │ │
│ │ - Enclave URL header consumed for routing, not forwarded │ │
│ │ - EHBP encapsulated key forwarded │ │
│ │ - Request dispatched to enclave (or mock in test) │ │
│ └────────┬────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 3. COLLECT WITNESS │ │
│ │ - Hash of inbound body (before forwarding) │ │
│ │ - Hash of outbound body (after forwarding) │ │
│ │ - Hash of EHBP encapsulated key │ │
│ │ - Hash of forwarded header names │ │
│ │ - Sorted list of actual forwarded header names │ │
│ │ - HTTP dispatch status code │ │
│ └────────┬────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 4. ZK PROOF GENERATION │ │
│ │ - Payload sent to POST /prove/forwarding-challenge │ │
│ │ - RISC Zero guest verifies all forwarding assertions │ │
│ │ - Proof artifact returned and attached to session receipt │ │
│ └────────┬────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 5. SESSION RECEIPT WITH CHALLENGE │ │
│ │ receipt.forwarding_challenge = { challenge envelope + proof } │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
The challenge payload is constructed by Proxy.ForwardingChallenge
(lib/private_assistant/proxy/forwarding_challenge.ex):
| Field | Value | Purpose |
|---|---|---|
| challenge_id | Random 12-byte base64url | Unique challenge identifier |
| method | "POST" | HTTP method of the test request |
| path_family | Route-family specific, currently "v1/chat/completions" or "v1/embeddings" | Canonical path family for the covered secure request boundary |
| request_body | Route-family specific JSON containing challenge_id, random opaque data, and session bindings | Simulates an EHBP-protected request body without relying on real user content |
| request_headers | 5 headers: authorization, content-type, cookie, ehbp-encapsulated-key, x-tinfoil-enclave-url | Tests all header categories: hop-by-hop (should be stripped), EHBP (should be forwarded), routing (should be consumed) |
| session_nonce | From parent session receipt | Binds challenge to the same session |
| user_binding | From parent session receipt | Binds challenge to the same user |
| enclave_url_hash | From parent session receipt | Binds challenge to the same enclave |
| transport_mode | "secure_ehbp" | Only secure mode is tested |
| version | "1" | Challenge schema version |
The test request deliberately includes headers from every category:
- authorization and cookie — hop-by-hop headers that must be stripped.
- content-type — should be forced to application/json in secure mode.
- ehbp-encapsulated-key — EHBP header that must be forwarded to the enclave.
- x-tinfoil-enclave-url — routing header that must be consumed (not forwarded).
After Client.run_forwarding_challenge processes the test request through the real
forwarding path, it returns a witness map:
| Witness field | Description |
|---|---|
| inbound_body_hash | SHA-256 of the request body before forwarding |
| outbound_body_hash | SHA-256 of the request body after forwarding |
| ehbp_header_hash | SHA-256 of the ehbp-encapsulated-key value |
| forwarded_headers_hash | SHA-256 of the sorted, newline-delimited forwarded header names |
| forwarded_header_names | Sorted list of header names actually forwarded to the enclave |
| dispatch_status | HTTP status code from the enclave (or mock) |
The body integrity invariant is critical: inbound_body_hash must equal
outbound_body_hash. This proves the proxy forwarded the body byte-for-byte without
modification.
The guest program (native/pa_zk_receipt_prover/methods/forwarding_challenge_guest/src/main.rs)
runs inside the RISC Zero zkVM. Every assertion must pass; if any fails, proof generation
aborts.
Identity assertions:
- statement must equal "request_forwarding_challenge_v1".
- verifier.system must be "risc0".
- verifier.kernel_id must match claim.public_inputs.proxy_kernel_id.
- kernel_id must carry the risc0:image: prefix followed by exactly 64 hex digits.

Transport and version:
- version must be "1".
- transport_mode must be "secure_ehbp".

Body integrity:
- inbound_body_hash must equal outbound_body_hash — the proxy did not modify the request body during forwarding.

Hash format validation:
- user_binding, enclave_url_hash, inbound_body_hash, outbound_body_hash, ehbp_header_hash, and forwarded_headers_hash must all be well-formed SHA-256 tagged hashes (sha256: prefix, 64 hex characters).

Binding and freshness:
- session_nonce must decode from base64url to at least 16 bytes.
- challenge_id must be non-empty.
- method must be "POST".
- path_family must be non-empty.

Forwarding policy assertions:
- route_mode must be "secure" — the challenge ran through the secure forwarding path.
- header_rewrite_policy must be "strict_v1".
- client_authorization_forwarded must be false — the user's auth header was stripped.
- server_authorization_injected must be true — the server injected its own API key.
- enclave_header_forwarded must be false — the routing header was consumed.
- content_type_forced must be true — content-type was overwritten to application/json.
- forwarded_header_names must be exactly ["authorization", "content-type", "ehbp-encapsulated-key"] — the three headers that should survive after rewriting.

Header hash binding:
The guest recomputes forwarded_headers_hash from the forwarded_header_names list and verifies it matches the public input. This binds the hash to the actual header list, preventing the host from claiming one set of headers while hashing a different set.
Commitment chain:
After all assertions pass, the guest computes the same style of deterministic commitment
chain as the session receipt:
policy_digest → public_inputs_digest → verifier_digest → claim_digest → proof_id.
The forwarding challenge uses its own verification policy, distinct from the session receipt policy. The policy captures the runtime behavior of the proxy’s header rewriting:
{
"route_mode": "secure",
"header_rewrite_policy": "strict_v1",
"client_authorization_forwarded": false,
"server_authorization_injected": true,
"enclave_header_forwarded": false,
"content_type_forced": true,
"forwarded_header_names": [
"authorization",
"content-type",
"ehbp-encapsulated-key"
]
}
The forwarded_header_names field is notable: it records the exact set of headers that reached the enclave after the proxy's header rewriting. Since the test request included authorization, cookie, content-type, ehbp-encapsulated-key, and x-tinfoil-enclave-url, the surviving set proves:
- authorization was stripped and replaced by the server-injected one.
- The cookie header was stripped entirely.
- The x-tinfoil-enclave-url routing header was consumed for routing, not forwarded.
- ehbp-encapsulated-key was forwarded (required for enclave EHBP decryption).
- content-type was forced to application/json.

The forwarding challenge proof artifact follows the same encoding as the session receipt artifact, with its own prefix and version:
Artifact format: pa_zk_forwarding_challenge_v1:<hex-encoded JSON>
The decoded artifact contains:
| Field | Description |
|---|---|
| artifact_version | "pa_zk_forwarding_challenge_artifact_v1" |
| statement | "request_forwarding_challenge_v1" |
| proof_system | "risc0" |
| prover_kind | "risc0_zkvm_v1" |
| public_inputs | The forwarding challenge public inputs (14 fields) |
| policy | The forwarding challenge verification policy |
| commitments | Binding commitments (policy_digest, public_inputs_digest, verifier_digest, claim_digest, proof_id) |
| zkvm_image_id | The RISC Zero image ID for the forwarding challenge guest (hex) |
| zkvm_receipt | Base64url-encoded bincode receipt bytes |
| zkvm_journal | Base64url-encoded bincode journal bytes |
The forwarding challenge is automatically attached to the session receipt during
Verification.create_session_receipt/3:
# lib/private_assistant/proxy/verification.ex
def create_session_receipt(user, proxy_token, attrs) do
with {:ok, payload} <- VerificationPayload.build(user, proxy_token, attrs) do
case create_receipt(payload) do
{:ok, receipt} -> {:ok, ForwardingChallenge.maybe_attach(receipt, payload, attrs)}
{:error, reason} -> {:error, reason}
end
end
end
When enabled (PROXY_VERIFICATION_FORWARDING_CHALLENGE_ENABLED is not false), the
challenge runs synchronously after the session receipt is generated. If the challenge
fails (sidecar unavailable, enclave unreachable for the test request), the session receipt
is returned without the challenge — the session receipt’s own guarantees are not degraded.
The combined receipt envelope includes the forwarding challenge as an additional field:
{
"version": "1",
"proof_type": "signed_receipt | zk_receipt",
"receipt_id": "rcpt_...",
"...session receipt fields...",
"forwarding_challenge": {
"challenge_id": "<base64url>",
"version": "1",
"method": "POST",
"path_family": "v1/chat/completions",
"dispatch_status": 200,
"inbound_body_hash": "sha256:<hex>",
"outbound_body_hash": "sha256:<hex>",
"ehbp_header_hash": "sha256:<hex>",
"forwarded_headers_hash": "sha256:<hex>",
"proxy_kernel_id": "risc0:image:<hex>",
"session_nonce": "<base64url>",
"user_binding": "sha256:<hex>",
"enclave_url_hash": "sha256:<hex>",
"transport_mode": "secure_ehbp",
"issued_at": "ISO-8601",
"verification": { "...forwarding policy..." },
"proof_type": "zk_forwarding_challenge",
"backend": { "name": "pa_zk_receipt_prover", "instance_id": "..." },
"proof": {
"system": "risc0",
"receipt": "pa_zk_forwarding_challenge_v1:<hex>",
"public_inputs": { "...14 fields..." }
}
}
}
The two proofs are complementary and cover different assurance levels:
| Property | Session receipt | Forwarding challenge |
|---|---|---|
| What it proves | Proxy policy configuration at issuance time | Proxy behavior on a concrete test request |
| Scope | Policy flags (EHBP required, header allowlist, etc.) | Actual header rewriting, body integrity, routing |
| Guest binary | Session receipt guest (methods/guest/) | Forwarding challenge guest (methods/forwarding_challenge_guest/) |
| Image ID | Separate RISC Zero image ID | Separate RISC Zero image ID |
| Attached to | Top-level receipt envelope | forwarding_challenge field within the receipt |
| Failure behavior | Receipt creation fails | Challenge omitted; session receipt still returned |
| Per-request proof | No — point-in-time attestation | Closer — proves behavior on one concrete request, but the test request is synthetic, not the user’s actual request |
Combined assurance: When both proofs are present, the user has evidence that (a) the proxy was configured with the correct policy, and (b) the proxy’s forwarding code path actually implements that policy. Neither proof alone is sufficient for full assurance: the session receipt could pass with a correct config but broken forwarding code, and the forwarding challenge could pass on a test request but fail to cover all code paths. Together, they narrow the gap significantly.
More precisely, the forwarding challenge should be read as sufficient for a narrow claim:
it covers the request-side forwarding behavior of the secure POST /v1/chat/completions
path because the live secure route and the challenge route share the same
prepare_request / forward_request_headers pipeline. It should not be interpreted as a
proof for every endpoint, every request shape, or the full response/streaming path.
The forwarding challenge proves that the proxy handled one synthetic request correctly. The NIF-measured challenge goes one step further by moving the verification-time witness capture into a Rust NIF that runs inside the Phoenix runtime boundary.
This does not prove properties of the whole BEAM runtime. It proves properties of the measured challenge boundary: the transcript was captured in-process, it satisfies the secure forwarding policy, decrypt_call_count = 0, and every no-decrypt capability flag is false.

The original sidecar-only model required Phoenix to tell the Rust prover what happened during the verification challenge. The NIF reduces that gap for the measured challenge path because the transcript is captured in-process before it is sent to the Rust sidecar for proof generation.
The NIF is intentionally narrow: it covers only the measured challenge boundary, and it runs only when PROXY_VERIFICATION_NIF_CHALLENGE_ENABLED=true. This makes it a stronger measurement point for one privacy-critical path without turning the entire server into a Rust runtime.
The NIF challenge transcript records:
| Field | Purpose |
|---|---|
| challenge_id | Binds the transcript to the forwarding challenge |
| session_nonce | Binds the transcript to the same browser verification session |
| transport_mode | Must be secure_ehbp |
| method / path_family | Binds the transcript to the same secure route shape as the forwarding challenge |
| inbound_body_hash | Hash of the challenge ciphertext entering the measured path |
| outbound_body_hash | Hash of the challenge ciphertext leaving the measured path |
| ehbp_header_hash | Hash of the EHBP encapsulated key header |
| enclave_url_hash | Hash of the enclave target |
| forwarded_headers_hash | Hash of the actual forwarded header-name set |
| decrypt_call_count | Must be zero |
| capabilities.* | NIF-managed no-decrypt capability flags; all must be false |
The current capability flags are:
- local_decrypt_key_loaded
- decrypt_provider_enabled
- kms_unwrap_enabled
- legacy_plaintext_fallback_enabled
- debug_proxy_enabled
The guest at native/pa_zk_receipt_prover/methods/nif_challenge_guest/src/main.rs asserts:
- statement == "nif_challenge_attestation_v1"
- transport_mode == "secure_ehbp"
- inbound_body_hash == outbound_body_hash
- decrypt_call_count == 0
- route_mode == "secure"
- header_rewrite_policy == "strict_v1"
- client_authorization_forwarded == false
- server_authorization_injected == true
- enclave_header_forwarded == false
- content_type_forced == true
- forwarded_header_names == ["authorization", "content-type", "ehbp-encapsulated-key"]
- every capabilities.* flag == false

So the NIF proof is not just "no decrypt." It is "no decrypt within the measured challenge boundary, while still satisfying the secure forwarding policy."
When PROXY_VERIFICATION_BACKEND=zk, Phoenix tries to attach:
- receipt.forwarding_challenge as zk_forwarding_challenge
- receipt.nif_challenge_attestation as zk_nif_challenge
If the NIF challenge ZK sidecar path is unavailable, Phoenix falls back to a signed
signed_nif_challenge artifact using the same Ed25519 verification key already pinned in
the verification manifest. This keeps the verification flow fault-tolerant.
The ZK artifact format is:
pa_zk_nif_challenge_v1:<hex-encoded JSON>
Its contents mirror the forwarding challenge artifact shape with NIF-specific public inputs and policy fields.
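To make the artifact format concrete, the following sketch decodes it back into its JSON contents. Only the pa_zk_nif_challenge_v1: prefix and the hex-encoded JSON payload come from the text; the function name and error handling are illustrative assumptions.

```javascript
// Illustrative parser for the "pa_zk_nif_challenge_v1:<hex-encoded JSON>"
// artifact format described above (not the shipped implementation).
function parseNifChallengeArtifact(artifact) {
  const prefix = "pa_zk_nif_challenge_v1:";
  if (!artifact.startsWith(prefix)) {
    throw new Error("unexpected artifact prefix");
  }
  const hex = artifact.slice(prefix.length);
  const pairs = hex.match(/.{2}/g) ?? [];
  // Decode each hex byte pair back into a character of the JSON payload.
  const json = pairs.map((h) => String.fromCharCode(parseInt(h, 16))).join("");
  return JSON.parse(json);
}
```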
What it proves: that no local decryption occurred within the measured challenge boundary, and that the secure forwarding policy held for the measured request.

What it does not prove: correctness of the entire BEAM runtime, of other endpoints or request shapes, or of behavior outside the challenge window.
The NIF challenge should therefore be interpreted as a stronger verification-time spot check of the privacy-critical path, not as a proof about the entire Phoenix runtime.
The ZK receipts (session and forwarding challenge) provide strong guarantees within a defined scope. This section makes that scope explicit.
What the session receipt attests:
- EHBP encryption is required on the secure path (ehbp_required = true).
- The client's credential is never forwarded upstream (client_authorization_forwarded = false).
- The strict header allowlist is active (header_allowlist_policy = "strict_v1").
- The enclave target is resolved and bound (enclave_policy = "resolved_enclave_url_v1").
- The receipt is bound to the requesting user (user_binding) and contains a cryptographically fresh nonce.
- The issued_at and expires_at timestamps form a valid, forward-moving window.

What the forwarding challenge attests (when present):

- The proxy strips credential and hop-by-hop headers (e.g., authorization, cookie) from the test request.
- The proxy forces content-type to application/json in secure mode.
- The proxy injects its own server-side Authorization header.
- The request body is forwarded unchanged (inbound_body_hash == outbound_body_hash).

What the NIF challenge attests (when present):

- The measured path preserves the ciphertext body (inbound_body_hash == outbound_body_hash).
- No decryption occurred within the measured boundary (decrypt_call_count = 0).

What the proofs do NOT attest:

- Continuous correctness between issued_at and expires_at — only that the policy was correct at issuance and that the forwarding code path was correct when the challenge ran.

The current proof system is intentionally route-scoped. The table below states the assurance level that may be claimed today for the main proxy routes, and where future circuit extensions would be needed.
| Route / Flow | Current receipt coverage | Current forwarding/NIF coverage | What may be claimed today | Future extension point |
|---|---|---|---|---|
| POST /api/proxy/secure/v1/chat/completions request path | Session receipt covers secure proxy policy state for the session | Yes. Forwarding challenge covers the request-side forwarding pipeline for this route shape. NIF challenge, when present, adds no-local-decrypt measurement for the same challenge boundary. Search and image features currently ride this same endpoint, so they inherit this path-family coverage but are not separately proved request-shape variants. | Strongest current claim. The secure chat-completions request path uses the audited request-forwarding logic and preserves the proved structural invariants. | Add route-specific response/streaming proofs if response-side guarantees are needed. |
| POST /api/proxy/secure/v1/chat/completions response / SSE streaming path | Session receipt still applies at policy level only | No route-specific forwarding proof for streamed response handling | Only policy-level claims apply. Do not describe the current forwarding challenge as a proof of the full streamed response path. | Add a response-side or end-to-end chat streaming circuit if stronger claims are needed. |
| POST /api/proxy/secure/v1/embeddings request path | Session receipt covers secure proxy policy state for the session | Yes. A supplemental forwarding challenge covers the request-side forwarding pipeline for this route shape. NIF challenge, when present, adds no-local-decrypt measurement for the same request boundary. | The secure embeddings request path now has the same request-side structural assurance model as the covered chat request path. | Add response-side coverage only if a stronger end-to-end embeddings proof model becomes necessary. |
| Other * /api/proxy/secure/*path routes | Session receipt covers secure proxy policy state for the session | No route-specific forwarding challenge today | Only session-policy claims apply unless the route is explicitly covered by its own challenge. | Add per-route circuits or a broader challenge family with clearly stated route scope. |
| POST /api/proxy/document/convert | Not part of the secure AI proxy proof story | None | No forwarding-proof claim should be made based on the current receipt/challenge system. This route is not a secure_ehbp route today, so the current forwarding/NIF proofs should not be stretched to cover it. | Define a separate assurance model if this route needs proof coverage. |
| * /api/proxy/debug/*path | Explicitly out of scope | None | No proof claim. Debug mode is outside the secure proof model. | None unless debug mode is retired or redesigned into the secure model. |
| legacy_json transport | Explicitly out of scope | None | No proof claim. Current receipts/challenges are for secure_ehbp only. | None unless a separate legacy-mode assurance model is introduced. |
The main remaining limitation is therefore not proof soundness on the covered
chat/completions request path, but proof coverage across the rest of the route surface.
The current revision added a receipt-shape extension rather than replacing the original contract:
receipt.forwarding_challenge and receipt.nif_challenge_attestation remain the primary proof slots for POST /api/proxy/secure/v1/chat/completions. Supplemental route proofs are carried in receipt.route_proofs, currently covering POST /api/proxy/secure/v1/embeddings. Operationally, this revision did not require changes to the bundle pinning or package sync scripts because it did not add new verifier assets, manifest files, or package publication surfaces. The change is in proof attachment and verification semantics, not in the set of published bundle artifacts.
The combined session receipt and forwarding challenge translate into concrete user-facing privacy guarantees. This section frames the technical assertions from Sections 8.3–8.10 in terms of the privacy outcomes they produce:
Anonymous usage. client_authorization_forwarded = false combined with
server_authorization_injected = true means the AI enclave never receives the user’s
identity credentials. The user’s proxy token is stripped and replaced with the server’s
API key, ensuring the AI processes messages without any connection to a specific account.
Enforced encryption. ehbp_required = true combined with
transport_mode = "secure_ehbp" means the proxy provably requires EHBP encryption on
all forwarded traffic. The proof can only be generated when this enforcement is active —
the server cannot silently downgrade to plaintext forwarding without invalidating the
receipt.
Data minimization. header_allowlist_policy = "strict_v1" combined with the
forwarding challenge’s verified forwarded_header_names = ["authorization", "content-type", "ehbp-encapsulated-key"] locks outgoing metadata to an exact three-header
list. Of the dozens of headers a browser normally sends (cookies, user-agent, referer,
accept-language, etc.), only these three technical headers survive. This is not a best-effort
filter — it is a mathematically verified inventory.
Message integrity. The forwarding challenge’s
inbound_body_hash == outbound_body_hash assertion proves that the proxy forwarded the
request body byte-for-byte without modification. Combined with EHBP encryption, this means
the server can neither read nor alter the user’s message content in transit.
Correct destination. The enclave_url_hash binding in both the session receipt and the
forwarding challenge cryptographically ties the session to a specific, verified AI enclave.
The server cannot silently redirect messages to a different inference endpoint without
breaking both proofs.
Trust model upgrade. Unlike signed receipts, which follow a trust-on-key model (the verifier trusts the key holder not to issue fraudulent receipts), ZK receipts follow a trust-on-math model. The proof can only be generated if the audited guest program ran inside the zkVM and every assertion passed. No private key is involved in proof generation, so there is no key to compromise. The guarantee comes from the mathematical properties of the proof system, not from the integrity of any single party.
The standard web application trust model requires the user to trust the server to serve honest JavaScript on every page load. If the server operator or an attacker with server access modifies the served bundle, the client has no inherent mechanism to detect the tampering. This is the single largest trust assumption in the system, because a compromised bundle could exfiltrate all browser-side data.
IPFS bundle pinning provides a partial mitigation by creating an immutable, content-addressed reference for the audited client code. The browser can compare the served code against this reference to detect modifications.
The system implements four pinning flows. The first two pin compiled artifacts that the browser verifies at runtime. The second two pin source code so that third-party auditors can reproduce the compiled artifacts and verify kernel IDs.
Flow 1: Client bundle (compiled artifact)
Pins the executable client code:
- Files: assets/app.js + assets/app.css
- Script: scripts/pin_client_bundle.sh
- Manifest: priv/static/bundle-cid.json
- Env vars: PUBLIC_BUNDLE_CID, PUBLIC_BUNDLE_GATEWAY

Flow 2: Proxy verification assets (compiled artifact)
Pins the browser-side verifier assets used for receipt validation:
- Files: assets/risc0_zk_receipt_verifier.wasm + verification-manifest.json
- Script: scripts/pin_proxy_verification_bundle.sh
- Manifest: priv/static/proxy-verification-bundle.json
- Env vars: PUBLIC_PROXY_VERIFICATION_BUNDLE_CID, PUBLIC_PROXY_VERIFICATION_BUNDLE_GATEWAY

Flow 3: Rust ZK source bundle (auditor reference)
Pins the source code needed to reproduce the ZK guest program kernel IDs and the browser WASM verifier binary. This includes the three RISC Zero guest programs (session receipt, forwarding challenge, NIF challenge), the shared Rust modules they reference, the browser WASM verifier source, all Cargo.toml and Cargo.lock files for reproducible builds, and the WASM build script:
- Files: native/pa_zk_receipt_prover/ and scripts/
- Script: scripts/pin_zk_source_bundle.sh
- Manifest: priv/static/zk-source-bundle.json
- Env vars: PUBLIC_ZK_SOURCE_BUNDLE_CID, PUBLIC_ZK_SOURCE_BUNDLE_GATEWAY
An auditor can fetch this bundle by CID, compile the guest programs with the specified Rust
and RISC Zero toolchain versions, and confirm the resulting kernel IDs match those in the
verification-manifest.json. Similarly, compiling the browser verifier to WASM and hashing
the output should reproduce the WASM artifact CID from Flow 2.
Flow 4: JavaScript source bundle (auditor convenience)
Pins the pre-compilation JavaScript source files for the client-side verification logic,
along with package.json and package-lock.json for build reproducibility:
- Files: assets/js/private_assistant/ and assets/
- Script: scripts/pin_js_source_bundle.sh
- Manifest: priv/static/js-source-bundle.json
- Env vars: PUBLIC_JS_SOURCE_BUNDLE_CID, PUBLIC_JS_SOURCE_BUNDLE_GATEWAY
This is a convenience pin. The compiled app.js CID from Flow 1 is the runtime trust
anchor. The source pin lets auditors read the human-readable verification logic rather than
reverse-engineering the minified bundle.
Combined pinning:
scripts/pin_all_verification_bundles.sh runs all flows in sequence.
The scripts use Pinata’s pinFileToIPFS endpoint with directory-style multipart upload:
- Auth: PINATA_JWT (preferred) or PINATA_API_KEY + PINATA_API_SECRET.
- Gateway: PINATA_GATEWAY.
- Each script writes a .env backup before updating environment variables.

The browser performs a plain hash comparison: it fetches the pinned bundle content by CID and compares it against the app.js and app.css served from the current page. This check does not use zero-knowledge proofs. It is a direct content comparison.
priv/static/bundle-cid.json:
{
"version": "1",
"cid": "bafy...",
"created_at": "2026-03-07T16:50:00Z",
"gateway_base_url": "https://example-gateway.mypinata.cloud",
"bundle_url": "https://example-gateway.mypinata.cloud/ipfs/bafy...",
"files": ["assets/app.css", "assets/app.js"]
}
priv/static/proxy-verification-bundle.json:
Same schema as bundle-cid.json, listing the verification asset files.

Users authenticate via one of two methods:
- Google OAuth (/auth/google), configured with the openid email profile scope. This is the primary authentication method.
- Email and password authentication generated by mix phx.gen.auth.
Authenticated sessions are stored in a signed cookie (_private_assistant_key) using
Plug.Session with :cookie store and a signing salt. The cookie is signed (tamper-proof)
but not encrypted (readable by the client).
The proxy token is a short-lived credential that authorizes the browser to make proxy requests:
- The browser requests a token via POST /api/proxy/token with the session cookie and CSRF token.
- The token is a Phoenix.Token signed with the salt "proxy_token_v1", containing the user ID and issuance timestamp.
- The validity window is PROXY_TOKEN_TTL_SECONDS.
- The client caches the token in memory (ProxySession.cachedToken) and refreshes it 15 seconds before expiry.
- Proxy requests carry Authorization: Bearer <proxy_token>.
- The RequireProxyToken plug verifies the token signature, checks expiry, and looks up the user.
The proxy token is never stored in localStorage or IndexedDB. It exists only in
JavaScript memory and is lost on page refresh (requiring a new token from the session).
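The memory-only caching with early refresh can be sketched as follows. makeTokenCache and fetchProxyToken are illustrative names; the 15-second margin and the rule that the token lives only in JavaScript memory follow the text.

```javascript
// In-memory proxy token cache: never persisted, refreshed 15 s before expiry.
const REFRESH_MARGIN_MS = 15_000;

function makeTokenCache(fetchProxyToken, now = () => Date.now()) {
  let cached = null; // { token, expiresAtMs } lives only in JS memory
  return async function getToken() {
    if (cached && cached.expiresAtMs - now() > REFRESH_MARGIN_MS) {
      return cached.token;
    }
    // Stale or missing: request a fresh token (POST /api/proxy/token).
    const { token, expires_at } = await fetchProxyToken();
    cached = { token, expiresAtMs: Date.parse(expires_at) };
    return token;
  };
}
```

Holding the token in a closure rather than in localStorage means a page refresh discards it, matching the behavior described above.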
The Tinfoil API key (TINFOIL_API_KEY) is stored only in the server’s environment variables
and is never sent to the browser. When forwarding a request upstream:
- The incoming Authorization header is stripped (it is in the @hop_by_hop_headers reject list).
- The proxy injects its own Authorization: Bearer <TINFOIL_API_KEY> header.

This creates a credential boundary: the browser never sees the Tinfoil API key, and the enclave never sees the user's proxy token.
The Phoenix backend uses SQLite via Ecto. The only tables are those generated by
mix phx.gen.auth:
- users – email, hashed_password, confirmed_at
- users_tokens – token (binary), context, sent_to, user_id, inserted_at
There are no tables for conversations, messages, memories, embeddings, personas, profiles, or
any other content-related data. The Ecto migration
(20260304080143_create_users_auth_tables.exs) creates only the authentication tables.
The Phoenix application tree (Application.start/2) starts the following supervised
processes:
| Process | Purpose | Stores user content? |
|---|---|---|
Telemetry |
Metrics collection (request timing, VM stats) | No |
Repo |
Ecto database connection pool | No (auth tables only) |
Ecto.Migrator |
Runs pending migrations at startup | No |
DNSCluster |
DNS-based clustering (multi-node) | No |
Phoenix.PubSub |
Pub/sub for LiveView | No |
Finch |
HTTP client pool for upstream requests | No (transient connections) |
Endpoint |
HTTP server (Bandit adapter) | No |
None of these processes store or cache user content. The Finch HTTP client pool handles
upstream connections but does not retain request or response bodies after streaming completes.
The proxy client explicitly avoids logging request or response content in secure mode:
- The log_plaintext option defaults to false for all modes except :debug.
- The maybe_log_plaintext_request/3 function is a no-op when log_plaintext is false.
- Error logs contain only status information (e.g., "Secure proxy failed: :timeout"), never body content.
- The production log level is :info, suppressing debug-level messages.
The CacheBodyReader at the endpoint level reads and caches the full request body in
conn.assigns[:raw_body] for all requests. This is necessary because Plug.Parsers consumes
the body, and the proxy needs the raw bytes for forwarding.
For secure_ehbp requests:
Plug.Parsers is configured with pass: ["*/*"], so EHBP-encrypted bodies pass through
without being decoded into conn.params.
The legacy_json transport mode and the /api/proxy/debug/* path allow plaintext proxy
requests for development. The following controls prevent misuse:
Production hard-fail. runtime.exs raises an exception at boot if
PLAINTEXT_DEBUG_MODE is enabled in production:
if config_env() == :prod and plaintext_debug_mode do
raise "PLAINTEXT_DEBUG_MODE cannot be enabled in production."
end
Origin allowlist. Debug requests must include an Origin header matching the
configured allowlist (default: http://localhost:4000, http://127.0.0.1:4000).
UI lockout. When debug mode is disabled server-side, the debug option in the Proxy Mode dropdown is visually disabled.
The proxy client strips the following headers before forwarding upstream:
- Hop-by-hop and credential headers: authorization, connection, content-length, cookie, host, keep-alive, proxy-authenticate, proxy-authorization, te, trailer, transfer-encoding, upgrade.
- x-tinfoil-enclave-url (used for routing but not forwarded to the enclave).
The proxy then injects its own Authorization: Bearer <TINFOIL_API_KEY> header. This ensures the enclave authenticates requests at the server level and never receives user-identifying credentials.
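As an illustration only (the real pipeline is the Elixir prepare_request / forward_request_headers code), the strip-and-inject step behaves roughly like this sketch; the function name and data shapes are assumptions.

```javascript
// Strip everything except allowlisted technical headers, then replace
// authorization with the server-side key (illustrative sketch).
const ALLOWED = ["content-type", "ehbp-encapsulated-key"];

function rewriteHeaders(incoming, serverApiKey) {
  const out = {};
  for (const [name, value] of Object.entries(incoming)) {
    const lower = name.toLowerCase();
    if (ALLOWED.includes(lower)) out[lower] = value; // forward technical headers
    // Everything else (cookie, host, the user's authorization, ...) is dropped.
  }
  out["authorization"] = `Bearer ${serverApiKey}`; // server key injected
  return out;
}
```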
The X-Tinfoil-Enclave-Url header is validated before use:
defp valid_https_url?(url) do
case URI.parse(url) do
%URI{scheme: "https", host: host}
when is_binary(host) and byte_size(host) > 0 -> true
_ -> false
end
end
Non-HTTPS URLs are rejected with 400 invalid_x_tinfoil_enclave_url. This prevents SSRF
attacks where a malicious client could direct the proxy to forward requests to an arbitrary
HTTP endpoint.
When secure_ehbp is selected but the SecureClient fails to initialize (network error,
enclave verification failure, missing browser APIs), the client falls back to legacy_json
transport with a visible status message:
setStatus(`Secure transport unavailable: ${error.message}. Using legacy transport.`)
This fallback is intentional for development environments. In legacy_json mode, request
bodies are plaintext JSON, and the proxy can read them. The status bar informs the user.
For production hardening, this fallback could be replaced with fail-closed behavior where the client refuses to send any data if secure transport is unavailable.
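A minimal fail-closed sketch, assuming a SecureClient-style API (initSecureClient and client.send are hypothetical names, not the shipped interface):

```javascript
// Fail-closed variant: if secure transport cannot initialize, refuse to send
// rather than downgrading to legacy_json (hypothetical sketch).
async function sendFailClosed(initSecureClient, body) {
  let client;
  try {
    client = await initSecureClient();
  } catch (error) {
    throw new Error(`Secure transport unavailable: ${error.message}. Message not sent.`);
  }
  return client.send(body);
}
```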
The client-side i18n.js module provides a lightweight internationalization system:
Supported locales: en (English), ja (Japanese), id (Indonesian), zh (Chinese).
Binding mechanism: HTML elements use data-i18n attributes to specify translation keys.
The i18n module scans the DOM for these attributes and replaces the text content with the
translated string for the active locale.
Scope:
Locale selection: The active locale is stored as a user setting and persists across sessions.
Relationship to privacy: Internationalization is a UI-only concern. Translation tables
are bundled with the client code. No server calls are made for translation. The selected
locale is stored in localStorage (key: pa_locale) and is not sent to the server.
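A minimal lookup sketch under these conventions (the table contents and the fallback-to-English behavior are illustrative assumptions, not the shipped translation tables):

```javascript
// Tiny subset of a translation table; keys mirror data-i18n attribute values.
const TABLES = {
  en: { "chat.send": "Send" },
  ja: { "chat.send": "送信" },
};

// Resolve a key for the active locale, falling back to English, then the key.
function translate(locale, key) {
  return TABLES[locale]?.[key] ?? TABLES.en[key] ?? key;
}
```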
Shield NOYB uses Capacitor 8.2 to wrap the web application for mobile platforms.
The mobile wrappers preserve all the privacy properties of the web application:
| # | Risk | Impact | Mitigation Status | Details |
|---|---|---|---|---|
| 1 | Compromised server operator | Server operator modifies served JS to exfiltrate browser data | Partially mitigated | IPFS bundle CID check provides detection (Section 9). However, the check is post-load (detective, not preventive). Full mitigation would require SRI, browser extension verification, or static deployment to content-addressed storage. |
| 2 | Browser compromise | Malware, malicious extension, or compromised browser process accesses IndexedDB, JS memory, and Web Crypto key material | Partially mitigated | Local Vault encrypts records at rest (protects against offline extraction). Does not protect against active browser process compromise. |
| 3 | Enclave compromise | Vulnerability in enclave runtime or supply chain attack on enclave software gives attacker access to plaintext queries and responses | Mitigated by attestation | Browser verifies code measurements and runtime attestation before sending data. Does not protect against vulnerabilities in the attested code itself. |
| 4 | Traffic analysis | Network observer infers usage patterns, approximate query length from encrypted body sizes and request timing | Not mitigated | EHBP encrypts bodies but does not pad to fixed size or add timing jitter. Standard TLS traffic analysis techniques apply. |
| 5 | sqlite-vec CDN loading | sqlite-vec WASM loaded from cdn.jsdelivr.net with @latest tag and no integrity verification; compromised CDN could serve arbitrary code | Not mitigated (contained) | Module operates on local data only, does not handle encryption or network communication. Cannot bypass EHBP or exfiltrate data through the proxy without separately compromising the Tinfoil SDK or proxy token. Fallback to brute-force cosine similarity preserves function. |
| 6 | Document parser CDN loading | pdfjs-dist, mammoth, xlsx loaded from cdn.jsdelivr.net on first use; no integrity verification | Not mitigated (contained) | Libraries operate on local data only, performing no network or cryptographic operations. Cannot exfiltrate data through the proxy without a separate compromise. Impact limited to local document parsing corruption. |
| 7 | Session cookie scope | Session cookie is signed but not encrypted; cookie contents can be read by interceptors | Low risk | Session contains only the user token reference, not sensitive content. Setting encryption_salt would add defense in depth. |
| 8 | IPFS gateway compromise | Compromised gateway serves modified content matching a tampered bundle, defeating CID check | Not mitigated | The CID check depends on the gateway serving authentic content. Multiple gateways could be checked for redundancy. |
| 9 | Receipt replay | A captured receipt could be replayed to a different user’s verification panel | Mitigated | Receipts are bound to session_nonce (client-generated) and user_binding (server-derived from user ID + proxy token + nonce). Short TTL (5 min) limits the window. |
| 10 | ZK sidecar compromise | Compromised ZK sidecar generates fraudulent proofs | Partially mitigated | Commitment recomputation in the browser catches inconsistent artifacts. Cryptographic verification via the bundled WASM runtime catches invalid RISC Zero receipts. The relevant boundary is receipt/image verification, not whether the sidecar is colocated with Phoenix. A sufficiently sophisticated compromise of the sidecar’s private key material could generate valid-looking proofs. |
| 11 | Profile learning leakage | Accumulated user profile data represents a privacy-sensitive summary of communication patterns | Mitigated by design | Profile stored only in IndexedDB (never server-side). User can disable profile learning via the pa_profile_learning toggle. User can view and download their profile. Profile is a summary, not a transcript. |
| 12 | Forwarding challenge coverage | The forwarding challenge tests one synthetic request shape (POST to v1/chat/completions with 5 specific headers). Other method/path/header combinations are not covered | Partially mitigated | The challenge exercises the same prepare_request / forward_request_headers code path used by all real requests. The header set was chosen to test all rewriting categories (strip, forward, force, consume). Additional request shapes could be added in future versions. |
| # | Guarantee | Confidence | Basis |
|---|---|---|---|
| 1 | Query plaintext is not visible to the Phoenix server | High | EHBP encrypts the body before it leaves the browser. The server holds only ciphertext. The server enforces the presence of the Ehbp-Encapsulated-Key header, rejecting unencrypted bodies on the secure path. |
| 2 | Response plaintext is not visible to the Phoenix server | High | EHBP encrypts the response at the enclave. The server streams ciphertext chunks without decryption capability. |
| 3 | Conversation history is not stored server-side | Definitive | No server-side schema, table, or process exists for conversation or message storage. All storage is IndexedDB in the browser. |
| 4 | Embedding vectors are not stored server-side | Definitive | Embeddings are stored in IndexedDB and the in-memory WASM sqlite-vec index, both in the browser. No server-side embedding storage exists. |
| 5 | Context retrieval runs entirely client-side | Definitive | Similarity search, top-K selection, and prompt assembly are JavaScript functions in the browser. No server endpoint is involved in RAG operations. |
| 6 | Document parsing runs entirely client-side | Definitive | document_parser.js reads files via the File API and parses them in JavaScript. File bytes never reach the server. |
| 7 | Persona and profile data are not stored server-side | Definitive | Personas and user profiles are stored in the IndexedDB kv store. No server-side persona or profile tables exist. |
| 8 | The upstream API key is not exposed to the browser | Definitive | The proxy strips the incoming Authorization header and injects the server-side API key. The browser never receives or sends the Tinfoil API key. |
| 9 | The per-user proxy token is not exposed to the enclave | Definitive | The proxy strips the incoming Authorization header before forwarding. The enclave sees only the server-level API key. |
| 10 | The Tinfoil SDK is bundled locally, not loaded from a CDN | Definitive | The tinfoil npm package (v1.1.3) is installed, bundled by esbuild, and served from static assets. No runtime external fetch for the encryption library. |
| 11 | Enclave identity is verified before data is sent | High | The SecureClient verifies the attestation document and HPKE public key. The application checks getVerificationDocument().securityVerified and refuses to proceed if verification fails. |
| 12 | Proxy policy can be verified by the user | High | Signed receipts provide Ed25519-backed policy attestation. ZK receipts provide zero-knowledge proof of policy execution. Both are verified locally against a pinned manifest. |
| 13 | Client bundle integrity can be checked by the user | Moderate | IPFS CID comparison detects modifications to the served bundle. The check is detective (post-load), not preventive. |
| 14 | Proxy forwarding behavior can be verified | High | When the forwarding challenge is enabled, a ZK proof attests that the proxy’s forwarding code path correctly strips hop-by-hop headers, preserves body integrity, forwards the EHBP key, and injects the server API key. The proof covers one synthetic test request per receipt issuance. |
For these guarantees to hold, the following assumptions must be satisfied:
The user’s browser and device are not compromised. A compromised browser can access all in-memory and IndexedDB data regardless of EHBP encryption.
The Tinfoil enclave runtime is executing the attested code faithfully. Remote attestation verifies code measurements, but does not protect against vulnerabilities in the attested code itself.
The server is serving the unmodified JavaScript bundle. This is the standard web trust model. IPFS CID checking provides detective (not preventive) assurance.
The cryptographic primitives are implemented correctly. HPKE, AES-256-GCM, PBKDF2 (Web Crypto API), Ed25519 (Erlang :crypto), and RISC Zero (Rust zkVM) are assumed to be correctly implemented.
The verification manifest is part of the audited client bundle. The manifest’s integrity depends on the same bundle integrity guarantee as the rest of the client code.
| Level | Definition |
|---|---|
| Definitive | The guarantee follows directly from the architecture. No code path exists that could violate it under any configuration or operating condition. |
| High | The guarantee holds under normal operating conditions with secure_ehbp active. It depends on cryptographic assumptions and correct protocol implementation. |
| Moderate | The guarantee provides meaningful assurance but has known limitations (e.g., detective vs. preventive, dependency on external services). |
| Variable | Required | Default | Description |
|---|---|---|---|
| TINFOIL_BASE_URL | No | https://inference.tinfoil.sh | Debug proxy target (OpenAI-style API base) |
| TINFOIL_ATTESTATION_URL | No | https://atc.tinfoil.sh/attestation | Attestation proxy route target |
| TINFOIL_API_KEY | Yes (for proxy) | None | Upstream API key, injected by proxy |
| TINFOIL_ENCLAVE_URL | No | Unset | Secure proxy fallback (used if request omits X-Tinfoil-Enclave-Url) |
| Variable | Required | Default | Description |
|---|---|---|---|
| GOOGLE_CLIENT_ID | Yes | None | Google OAuth client ID |
| GOOGLE_CLIENT_SECRET | Yes | None | Google OAuth client secret |
| SECRET_KEY_BASE | Prod only | None | Phoenix endpoint signing secret |
| Variable | Required | Default | Description |
|---|---|---|---|
| PROXY_TOKEN_TTL_SECONDS | No | 300 | Proxy token validity window in seconds |
| Variable | Required | Default | Description |
|---|---|---|---|
| PROXY_VERIFICATION_BACKEND | No | signed | Receipt backend: signed or zk |
| PROXY_VERIFICATION_PRIVATE_SEED | No | None | Ed25519 private seed (32 bytes, base64url). Required for signed receipts |
| PROXY_VERIFICATION_KEY_ID | No | proxy-verify-key-2026-03 | Key identifier for signed receipts |
| PROXY_VERIFICATION_KERNEL_ID | No | proxy-kernel-v1 | Proxy kernel identifier for signed receipts |
| PROXY_VERIFICATION_TTL_SECONDS | No | 300 | Receipt validity window in seconds |
| PROXY_VERIFICATION_INSTANCE_ID | No | HOSTNAME or "local" | Backend instance identifier for audit |
| PROXY_VERIFICATION_ZK_SYSTEM | No | risc0 | ZK proof system identifier |
| PROXY_VERIFICATION_ZK_URL | No | http://127.0.0.1:8091 | ZK sidecar HTTP endpoint |
| PROXY_VERIFICATION_ZK_KERNEL_ID | No | risc0:image:placeholder | RISC Zero image ID for ZK session receipts |
| PROXY_VERIFICATION_FORWARDING_CHALLENGE_ZK_KERNEL_ID | No | risc0:image:placeholder | RISC Zero image ID for ZK forwarding challenge proofs (falls back to PROXY_VERIFICATION_ZK_KERNEL_ID if unset) |
| PROXY_VERIFICATION_NIF_CHALLENGE_ZK_KERNEL_ID | No | risc0:image:placeholder | RISC Zero image ID for ZK NIF challenge proofs (falls back to PROXY_VERIFICATION_ZK_KERNEL_ID if unset) |
| PROXY_VERIFICATION_FORWARDING_CHALLENGE_ENABLED | No | true | Enable forwarding challenge proof attached to session receipts |
| PROXY_VERIFICATION_NIF_CHALLENGE_ENABLED | No | false | Enable verification-time NIF challenge measurement and attachment |
| Variable | Required | Default | Description |
|---|---|---|---|
| PUBLIC_BUNDLE_CID | No | None | IPFS CID of the pinned client bundle |
| PUBLIC_BUNDLE_GATEWAY | No | None | IPFS gateway base URL for client bundle |
| PUBLIC_PROXY_VERIFICATION_BUNDLE_CID | No | None | IPFS CID of the pinned verification assets |
| PUBLIC_PROXY_VERIFICATION_BUNDLE_GATEWAY | No | None | IPFS gateway base URL for verification assets |
| PUBLIC_ZK_SOURCE_BUNDLE_CID | No | None | IPFS CID of the pinned Rust ZK source bundle |
| PUBLIC_ZK_SOURCE_BUNDLE_GATEWAY | No | None | IPFS gateway base URL for ZK source bundle |
| PUBLIC_JS_SOURCE_BUNDLE_CID | No | None | IPFS CID of the pinned JavaScript source bundle |
| PUBLIC_JS_SOURCE_BUNDLE_GATEWAY | No | None | IPFS gateway base URL for JS source bundle |
| PINATA_JWT | No | None | Pinata JWT for IPFS pinning |
| PINATA_API_KEY | No | None | Pinata API key (fallback auth) |
| PINATA_API_SECRET | No | None | Pinata API secret (fallback auth) |
| PINATA_GATEWAY | No | None | Dedicated Pinata gateway domain |
| Variable | Required | Default | Description |
|---|---|---|---|
| PLAINTEXT_DEBUG_MODE | No | true (dev), false (prod) | Enable debug proxy path. Production boot fails if true |
| PLAINTEXT_DEBUG_ALLOWED_ORIGINS | No | http://localhost:4000,http://127.0.0.1:4000 | Comma-separated origin allowlist for debug requests |
| Variable | Required | Default | Description |
|---|---|---|---|
| DEFAULT_CHAT_MODEL | No | gpt-oss-120b | Default chat model for UI |
| DEFAULT_EMBEDDING_MODEL | No | nomic-embed-text | Default embedding model for UI |
| DEFAULT_RETRIEVAL_TOP_K | No | 6 | Number of memory chunks to retrieve |
| Variable | Required | Default | Description |
|---|---|---|---|
| DATABASE_PATH | Prod only | None | SQLite database file path |
| PHX_HOST | Prod only | example.com | Public hostname |
| PORT | Prod only | 4000 | HTTP listener port |
| PHX_SERVER | Release mode | Unset | Set true for release boot |
| Variable | Required | Default | Description |
|---|---|---|---|
| CARGO_BIN | No | cargo or rustup run stable cargo | Cargo binary for WASM verifier build |
| RUSTUP_BIN | No | $HOME/.cargo/bin/rustup | Rustup binary for stable toolchain |
Defaults are defined in config/runtime.exs and in the Proxy.Config @defaults map. Proxy.Config reads from Application.get_env(:private_assistant, :proxy, []) and falls back to @defaults only when a key is absent from the configured keyword list.
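
The distinction between "key absent" and "key explicitly set to a falsy value" matters here: an explicit setting always wins. A minimal Python analogue of that lookup rule (the key names below are hypothetical, chosen only for illustration):

```python
# Illustrative analogue of the Proxy.Config lookup rule: a configured
# key wins even if its value is falsy; defaults apply only when the
# key is absent entirely. Key names are hypothetical.
DEFAULTS = {"zk_enabled": False, "sidecar_url": "http://127.0.0.1:8091"}

def config_value(configured: dict, key: str):
    if key in configured:       # explicit setting, even None/False, wins
        return configured[key]
    return DEFAULTS[key]        # fall back only when the key is absent
```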
POST /api/proxy/token

| Field | Value |
|---|---|
| Auth | Browser session (require_authenticated_user) |
| Request body | None |
| Success | 201 with {"token": "<proxy_token>", "expires_at": "ISO-8601", "ttl_seconds": 300} |
| Failure | Unauthenticated requests redirect to login flow |
GET /api/proxy/attestation

| Field | Value |
|---|---|
| Auth | Authorization: Bearer <proxy_token> |
| Behavior | Proxied GET to TINFOIL_ATTESTATION_URL with server-side API key |
| Success | 200 passthrough from upstream |
| Errors | 401 invalid token, 503 missing API key, 502 proxy failed |
* /api/proxy/secure/*path

| Field | Value |
|---|---|
| Auth | Authorization: Bearer <proxy_token> |
| Methods | All HTTP methods (match :*) |
| Required header | X-Tinfoil-Enclave-Url: https://... (unless server fallback configured) |
| EHBP enforcement | Request body with missing/invalid Ehbp-Encapsulated-Key returns 400 |
| Target URL | Enclave URL + /*path |
| Success | Upstream status/body streamed back (supports chunked/SSE) |
| Errors | 401 invalid token, 400 missing/invalid enclave URL, 400 missing/invalid EHBP header, 503 missing API key, 502 proxy failed |
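
The headers a client must attach on the secure path can be sketched as below. This is an illustrative Python fragment that assembles (but does not send) such a request; the real client is browser JavaScript, and the encapsulated-key encoding shown here is an assumption.

```python
import urllib.request

def build_secure_request(proxy_base: str, path: str, token: str,
                         enclave_url: str, encapsulated_key: str,
                         ciphertext: bytes) -> urllib.request.Request:
    """Assemble a request for the secure proxy path. The body is EHBP
    ciphertext end to end; the proxy forwards it without decryption."""
    req = urllib.request.Request(
        url=f"{proxy_base}/api/proxy/secure/{path}",
        data=ciphertext,
        method="POST",
    )
    req.add_header("Authorization", f"Bearer {token}")           # proxy token, stripped by proxy
    req.add_header("X-Tinfoil-Enclave-Url", enclave_url)         # routing only, not forwarded
    req.add_header("Ehbp-Encapsulated-Key", encapsulated_key)    # HPKE encapsulated key, forwarded
    return req
```

The proxy validates and consumes the first two headers, forwards the Ehbp-Encapsulated-Key to the enclave, and relays the upstream Ehbp-Response-Nonce back to the browser.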
* /api/proxy/debug/*path

| Field | Value |
|---|---|
| Auth | Authorization: Bearer <proxy_token> |
| Safety controls | PLAINTEXT_DEBUG_MODE must be enabled; request Origin must be in allowlist |
| Target URL | TINFOIL_BASE_URL + /*path |
| Success | Upstream status/body streamed back |
| Errors | 401 invalid token, 403 debug disabled or origin not allowed, 503 missing API key, 502 proxy failed |
POST /api/proxy/verify/session

| Field | Value |
|---|---|
| Auth | Proxy bearer token or browser session |
| Request body | {"session_nonce": "<base64url>", "transport_mode": "secure_ehbp", "enclave_url": "https://...", "conversation_id": "<optional>"} |
| Validation | session_nonce required (min 16 random bytes), transport_mode must be secure_ehbp, enclave_url required (HTTPS) |
| Success | 200 with {"receipt": {<receipt envelope>}} |
| Errors | 400 missing/invalid nonce, 400 unsupported transport, 400 missing/invalid enclave URL, 401 invalid token, 503 verification unavailable |
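
Constructing a valid request body is straightforward; the only subtle requirement is the nonce, which must encode at least 16 random bytes as base64url. An illustrative Python sketch (the real client generates this in the browser):

```python
import base64
import json
import os

def make_session_nonce(num_bytes: int = 32) -> str:
    """Generate a base64url session nonce; the endpoint rejects
    nonces encoding fewer than 16 random bytes."""
    if num_bytes < 16:
        raise ValueError("session_nonce must contain at least 16 random bytes")
    return base64.urlsafe_b64encode(os.urandom(num_bytes)).rstrip(b"=").decode()

def verify_session_body(enclave_url, conversation_id=None) -> str:
    """Build the JSON body for POST /api/proxy/verify/session."""
    body = {
        "session_nonce": make_session_nonce(),
        "transport_mode": "secure_ehbp",
        "enclave_url": enclave_url,
    }
    if conversation_id is not None:
        body["conversation_id"] = conversation_id
    return json.dumps(body)
```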
GET /api/proxy/verify/session/:receipt_id

| Field | Value |
|---|---|
| Auth | Same as creation |
| Success | 200 with {"receipt": {<receipt envelope>}} |
| Errors | 401 invalid token, 404 not found, 410 expired |
POST /prove/forwarding-challenge (on the ZK sidecar, default port 8091)

| Field | Value |
|---|---|
| Request body | {"payload": {<canonical forwarding challenge payload>}} |
| Payload fields | version, proxy_kernel_id, session_nonce, user_binding, transport_mode, enclave_url_hash, challenge_id, method, path_family, inbound_body_hash, outbound_body_hash, ehbp_header_hash, forwarded_headers_hash, issued_at, verification |
| Success | 200 with {"proof_type": "zk_forwarding_challenge", "proxy_kernel_id": "risc0:image:<hex>", "backend": {...}, "proof": {"system": "risc0", "receipt": "pa_zk_forwarding_challenge_v1:<hex>", "public_inputs": {...}}} |
| Errors | 400 invalid payload fields, 400 unsupported transport mode, 400 policy mismatch, 500 proof generation failure |
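
Several payload fields (inbound_body_hash, outbound_body_hash, ehbp_header_hash, forwarded_headers_hash) bind the proof to concrete request and response data via digests. The sidecar defines the canonical encoding; as one plausible illustration, a hex SHA-256 digest:

```python
import hashlib

def body_hash(body: bytes) -> str:
    """Hex SHA-256 digest, as a plausible encoding for the payload's
    *_hash fields. Illustrative only: the sidecar's canonical hashing
    scheme is authoritative."""
    return hashlib.sha256(body).hexdigest()
```

Because only digests enter the payload, the proof commits to exact body bytes without the ZK circuit ever receiving the plaintext bodies themselves.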
POST /verify/forwarding-challenge (on the ZK sidecar, default port 8091)

| Field | Value |
|---|---|
| Request body | {"receipt": "pa_zk_forwarding_challenge_v1:<hex>", "expected_kernel_id": "risc0:image:<hex>"} |
| Success | 200 with {"verified": true} |
| Errors | 400 invalid receipt artifact, 400 kernel ID mismatch, 400 journal mismatch, 500 verification failure |
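
The receipt artifact is a versioned string: a literal prefix followed by hex-encoded receipt bytes. A minimal sketch of splitting and decoding it, matching the "invalid receipt artifact" error case (illustrative; the sidecar performs the actual RISC Zero receipt verification after decoding):

```python
RECEIPT_PREFIX = "pa_zk_forwarding_challenge_v1:"

def parse_receipt(artifact: str) -> bytes:
    """Split the versioned prefix from a forwarding-challenge receipt
    artifact and decode the hex-encoded receipt bytes. Raises
    ValueError for an unknown prefix or malformed hex (the 400
    'invalid receipt artifact' case)."""
    if not artifact.startswith(RECEIPT_PREFIX):
        raise ValueError("unknown receipt artifact version")
    return bytes.fromhex(artifact[len(RECEIPT_PREFIX):])
```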
| Direction | Header | Behavior |
|---|---|---|
| Incoming | Authorization (proxy token) | Validated by plug, stripped before forwarding |
| Outgoing | Authorization (API key) | Injected by proxy with TINFOIL_API_KEY |
| Incoming | X-Tinfoil-Enclave-Url | Validated (HTTPS), used for routing, not forwarded |
| Incoming | Ehbp-Encapsulated-Key | Required on secure path with body, forwarded to enclave |
| Upstream response | Ehbp-Response-Nonce | Forwarded to browser |
| Incoming | All hop-by-hop headers | Stripped before forwarding |
| Incoming | Cookie | Stripped before forwarding |
Request and response bodies in secure_ehbp mode are EHBP ciphertext; the proxy streams them without interpretation.
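
The stripping rules above combine the hop-by-hop headers conventionally enumerated in RFC 7230 §6.1 with the proxy-specific headers that are consumed rather than forwarded. An illustrative Python sketch of that filter (the real implementation lives in the Elixir proxy):

```python
# Hop-by-hop headers per RFC 7230 section 6.1.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailer", "transfer-encoding", "upgrade",
}
# Plus headers the proxy consumes itself and never forwards, per the table.
NEVER_FORWARD = HOP_BY_HOP | {"authorization", "x-tinfoil-enclave-url", "cookie"}

def forwardable_headers(incoming: dict) -> dict:
    """Drop hop-by-hop and proxy-consumed headers before forwarding a
    request to the enclave (illustrative sketch of the table's rules)."""
    return {name: value for name, value in incoming.items()
            if name.lower() not in NEVER_FORWARD}
```

Note that Ehbp-Encapsulated-Key deliberately survives the filter: the enclave needs it to derive the HPKE shared secret, while the browser-facing Authorization header is replaced by the proxy's own API-key Authorization on the outgoing request.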
Document generated from source code analysis of the Shield NOYB codebase, March 2026.
Repository: private_assistant. Primary source files referenced throughout.