# Cognitive Compute Router
The gateway adapts any LLM's behavior to the user's real-time biometric state. Point your app's LLM base URL at gateway.nefesh.ai; no application-side code changes are needed.
## Three Integration Modes
| Mode | Endpoint | Input | Output | Use Case |
|---|---|---|---|---|
| OpenAI-compatible | POST /v1/chat/completions | OpenAI format | OpenAI format | OpenAI SDK customers (any LLM backend) |
| Anthropic passthrough | POST /v1/messages | Anthropic format | Anthropic format | Anthropic SDK customers calling Claude |
| Unified Anthropic | POST /v1/messages | Anthropic format | Anthropic format | Anthropic SDK customers calling any other LLM |
## Quick Start

### OpenAI SDK (Mode 1)
```bash
curl https://gateway.nefesh.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -H "X-Nefesh-Subject: usr_demo" \
  -H "X-LLM-Key: YOUR_LLM_KEY" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
```
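For application code, the same Mode 1 call can be sketched with Python's standard library. The helper below only builds the request without sending it; all key values are placeholders, and the `Content-Type` header is assumed since the gateway accepts JSON bodies:

```python
import json
import urllib.request

def build_gateway_request(nefesh_key, llm_key, subject, payload):
    """Build (but do not send) an OpenAI-compatible request to the gateway."""
    return urllib.request.Request(
        "https://gateway.nefesh.ai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Nefesh-Key": nefesh_key,       # Nefesh API key
            "X-Nefesh-Subject": subject,      # subject ID from the Device Registry
            "X-LLM-Key": llm_key,             # forwarded to the LLM provider
        },
        method="POST",
    )

req = build_gateway_request(
    "YOUR_KEY", "YOUR_LLM_KEY", "usr_demo",
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]},
)
# Send with urllib.request.urlopen(req) once real keys are in place.
```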
### Anthropic SDK (Mode 2)
```python
from anthropic import Anthropic

client = Anthropic(
    api_key="YOUR_ANTHROPIC_KEY",
    base_url="https://gateway.nefesh.ai",
    # Nefesh headers (X-Nefesh-Key is required; see Request Headers below)
    default_headers={
        "X-Nefesh-Key": "YOUR_KEY",
        "X-Nefesh-Subject": "usr_demo",
    },
)
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
```
### Anthropic format to Gemini (Mode 3)
```bash
curl https://gateway.nefesh.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -H "X-Nefesh-Subject: usr_demo" \
  -H "X-LLM-Backend: https://generativelanguage.googleapis.com/v1beta/openai" \
  -H "X-LLM-Key: YOUR_GEMINI_KEY" \
  -d '{"model":"gemini-2.5-flash","max_tokens":256,"messages":[{"role":"user","content":"Hello"}]}'
```
## Supported LLM Providers
| Provider | X-LLM-Backend |
|---|---|
| OpenAI (default) | https://api.openai.com |
| Anthropic Claude (native) | https://api.anthropic.com |
| Google Gemini | https://generativelanguage.googleapis.com/v1beta/openai |
| Mistral | https://api.mistral.ai |
| DeepSeek | https://api.deepseek.com |
| Groq | https://api.groq.com/openai |
| Together AI | https://api.together.xyz |
| Fireworks AI | https://api.fireworks.ai/inference |
| Ollama (self-hosted) | http://your-server:11434 |
## Request Headers
| Header | Required | Description |
|---|---|---|
| X-Nefesh-Key | Yes | Nefesh API key |
| X-Nefesh-Subject | Recommended | Subject ID from Device Registry |
| X-Nefesh-Session | Legacy | Session ID (still supported) |
| X-LLM-Key | Yes | Your LLM provider API key |
| X-LLM-Backend | No | LLM base URL (defaults to OpenAI) |
## Response Headers
| Header | Description |
|---|---|
| X-Nefesh-Adapted | `true` if biometric context was injected |
| X-Nefesh-State | Current state: calm, relaxed, focused, stressed, or acute_stress |
| X-Nefesh-Backend | Routing path used |
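Clients can branch on these headers. A minimal sketch: the state names come from the table above, but the mapping from state to app behavior is an illustrative assumption, not prescribed by the gateway:

```python
def interpret_response(headers: dict) -> str:
    """Pick an app-side behavior from the gateway's response headers.

    The returned mode names ("default", "concise", "detailed") are
    hypothetical application policy, not gateway values.
    """
    if headers.get("X-Nefesh-Adapted") != "true":
        return "default"  # no biometric context was injected; plain passthrough
    state = headers.get("X-Nefesh-State", "calm")
    if state in ("stressed", "acute_stress"):
        return "concise"   # e.g. shorter, calmer follow-ups
    if state == "focused":
        return "detailed"
    return "default"

mode = interpret_response({"X-Nefesh-Adapted": "true",
                           "X-Nefesh-State": "stressed"})
```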
## Device Registry
Register wearables once. No session management needed.
```bash
# Register device (once)
curl -X POST https://api.nefesh.ai/v1/devices \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -d '{"device_name":"Polar H10","device_type":"polar_h10","subject_id":"usr_demo"}'

# Ingest with device_id
curl -X POST https://api.nefesh.ai/v1/ingest \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -d '{"device_id":"dev_xxx","heart_rate":92,"timestamp":"..."}'

# Gateway reads state by subject
# Header: X-Nefesh-Subject: usr_demo
```
## Graceful Degradation
If no biometric signal is available, the request passes through unchanged: no crash, no error. The response carries the header `X-Nefesh-Adapted: false`.
## Privacy
- No prompts or responses are logged
- LLM API keys are forwarded in-memory, never stored
- Biometric state is read-only
- GDPR compliant
## Open source on GitHub
Browse the source, file issues, or contribute to the project.
[github.com/nefesh-ai/nefesh-gateway](https://github.com/nefesh-ai/nefesh-gateway)