Cognitive Compute Router (Gateway)

The gateway adapts any LLM's behavior to the user's real-time biometric state. Point your LLM base URL at gateway.nefesh.ai; no changes to application code are required.

Three Integration Modes

| Mode | Endpoint | Input | Output | Use Case |
|---|---|---|---|---|
| OpenAI-compatible | POST /v1/chat/completions | OpenAI format | OpenAI format | OpenAI SDK customers (any LLM backend) |
| Anthropic passthrough | POST /v1/messages | Anthropic format | Anthropic format | Anthropic SDK customers calling Claude |
| Unified Anthropic | POST /v1/messages | Anthropic format | Anthropic format | Anthropic SDK customers calling any other LLM |

Quick Start

OpenAI SDK (Mode 1)

curl https://gateway.nefesh.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -H "X-Nefesh-Subject: usr_demo" \
  -H "X-LLM-Key: YOUR_LLM_KEY" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hello"}]}'
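The same Mode 1 call can be assembled from Python's standard library alone. A sketch, not sent here; the endpoint and header names come from the curl example above, and the key values are placeholders:

```python
import json
import urllib.request

GATEWAY_URL = "https://gateway.nefesh.ai/v1/chat/completions"

def build_mode1_request(nefesh_key: str, subject: str, llm_key: str,
                        payload: dict) -> urllib.request.Request:
    """Assemble a Mode 1 (OpenAI-compatible) gateway request."""
    headers = {
        "Content-Type": "application/json",
        "X-Nefesh-Key": nefesh_key,    # Nefesh API key (required)
        "X-Nefesh-Subject": subject,   # subject ID from the Device Registry
        "X-LLM-Key": llm_key,          # LLM provider key, forwarded in-memory
    }
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(GATEWAY_URL, data=body,
                                  headers=headers, method="POST")

req = build_mode1_request(
    "YOUR_KEY", "usr_demo", "YOUR_LLM_KEY",
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]},
)
# urllib.request.urlopen(req) would send it; omitted here.
```

Because no X-LLM-Backend header is set, the gateway routes to OpenAI by default.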

Anthropic SDK (Mode 2)

from anthropic import Anthropic

client = Anthropic(
    api_key="YOUR_ANTHROPIC_KEY",
    base_url="https://gateway.nefesh.ai",
    # Gateway headers; see Request Headers for the full list.
    default_headers={
        "X-Nefesh-Key": "YOUR_NEFESH_KEY",
        "X-Nefesh-Subject": "usr_demo",
    },
)

message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)

Anthropic format to Gemini (Mode 3)

curl https://gateway.nefesh.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -H "X-Nefesh-Subject: usr_demo" \
  -H "X-LLM-Backend: https://generativelanguage.googleapis.com/v1beta/openai" \
  -H "X-LLM-Key: YOUR_GEMINI_KEY" \
  -d '{"model":"gemini-2.5-flash","max_tokens":256,"messages":[{"role":"user","content":"Hello"}]}'
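Mode 3 differs from Mode 1 only in the endpoint (/v1/messages, Anthropic request format with a required max_tokens) and the extra X-LLM-Backend header. A minimal sketch of the header set, with names taken from the curl example above:

```python
def mode3_headers(nefesh_key: str, subject: str,
                  backend: str, llm_key: str) -> dict:
    """Headers for an Anthropic-format request routed to another backend.

    X-LLM-Backend selects the provider; everything else matches Mode 1.
    """
    return {
        "Content-Type": "application/json",
        "X-Nefesh-Key": nefesh_key,
        "X-Nefesh-Subject": subject,
        "X-LLM-Backend": backend,  # e.g. the Gemini OpenAI-compatible endpoint
        "X-LLM-Key": llm_key,
    }

headers = mode3_headers(
    "YOUR_KEY", "usr_demo",
    "https://generativelanguage.googleapis.com/v1beta/openai",
    "YOUR_GEMINI_KEY",
)
# Anthropic format requires max_tokens in the body:
payload = {"model": "gemini-2.5-flash", "max_tokens": 256,
           "messages": [{"role": "user", "content": "Hello"}]}
```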

Supported LLM Providers

| Provider | X-LLM-Backend |
|---|---|
| OpenAI (default) | https://api.openai.com |
| Anthropic Claude (native) | https://api.anthropic.com |
| Google Gemini | https://generativelanguage.googleapis.com/v1beta/openai |
| Mistral | https://api.mistral.ai |
| DeepSeek | https://api.deepseek.com |
| Groq | https://api.groq.com/openai |
| Together AI | https://api.together.xyz |
| Fireworks AI | https://api.fireworks.ai/inference |
| Ollama (self-hosted) | http://your-server:11434 |
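The table above can be kept as a small lookup so that switching providers is a one-line change. A convenience sketch, not part of the gateway API; the URLs are copied from the table:

```python
# Backend base URLs, as listed in the provider table above.
LLM_BACKENDS = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
    "gemini": "https://generativelanguage.googleapis.com/v1beta/openai",
    "mistral": "https://api.mistral.ai",
    "deepseek": "https://api.deepseek.com",
    "groq": "https://api.groq.com/openai",
    "together": "https://api.together.xyz",
    "fireworks": "https://api.fireworks.ai/inference",
    "ollama": "http://your-server:11434",  # self-hosted; replace the host
}

def backend_header(provider: str) -> dict:
    """X-LLM-Backend header for a named provider.

    The gateway defaults to OpenAI, so the header is omitted for it.
    """
    if provider == "openai":
        return {}
    return {"X-LLM-Backend": LLM_BACKENDS[provider]}
```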

Request Headers

| Header | Required | Description |
|---|---|---|
| X-Nefesh-Key | Yes | Nefesh API key |
| X-Nefesh-Subject | Recommended | Subject ID from Device Registry |
| X-Nefesh-Session | Legacy | Session ID (still supported) |
| X-LLM-Key | Yes | Your LLM provider API key |
| X-LLM-Backend | No | LLM base URL (defaults to OpenAI) |

Response Headers

| Header | Description |
|---|---|
| X-Nefesh-Adapted | true if biometric context was injected |
| X-Nefesh-State | Current state (calm, relaxed, focused, stressed, acute_stress) |
| X-Nefesh-Backend | Routing path used |
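Client code can branch on these headers. A minimal sketch; header names and state values are from the table above:

```python
# The five states the gateway reports, per the response-header table.
KNOWN_STATES = {"calm", "relaxed", "focused", "stressed", "acute_stress"}

def read_gateway_headers(headers: dict) -> dict:
    """Summarize the gateway's response headers for application logic."""
    state = headers.get("X-Nefesh-State")
    return {
        "adapted": headers.get("X-Nefesh-Adapted") == "true",
        "state": state if state in KNOWN_STATES else None,
        "backend": headers.get("X-Nefesh-Backend"),
    }

info = read_gateway_headers({
    "X-Nefesh-Adapted": "true",
    "X-Nefesh-State": "stressed",
    "X-Nefesh-Backend": "openai",
})
```

When no biometric signal was available, X-Nefesh-Adapted is false and the request passed through unchanged, so `adapted` cleanly reports False for that case too.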

Device Registry

Register wearables once. No session management needed.

# Register device (once)
curl -X POST https://api.nefesh.ai/v1/devices \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -d '{"device_name":"Polar H10","device_type":"polar_h10","subject_id":"usr_demo"}'

# Ingest with device_id
curl -X POST https://api.nefesh.ai/v1/ingest \
  -H "Content-Type: application/json" \
  -H "X-Nefesh-Key: YOUR_KEY" \
  -d '{"device_id":"dev_xxx","heart_rate":92,"timestamp":"..."}'

# Gateway reads state by subject
# Header: X-Nefesh-Subject: usr_demo
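The same two calls from Python, again stdlib-only. A sketch that builds but does not send the requests; endpoints and field names are from the curl examples above, and "dev_xxx" stands in for the device ID returned by the registration call:

```python
import json
import urllib.request

API = "https://api.nefesh.ai/v1"

def post(path: str, nefesh_key: str, payload: dict) -> urllib.request.Request:
    """Build a POST request against the Nefesh REST API (not sent here)."""
    return urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "X-Nefesh-Key": nefesh_key},
        method="POST",
    )

# 1. Register the device once.
register = post("/devices", "YOUR_KEY", {
    "device_name": "Polar H10",
    "device_type": "polar_h10",
    "subject_id": "usr_demo",
})

# 2. Stream readings using the device_id from the registration response.
ingest = post("/ingest", "YOUR_KEY", {
    "device_id": "dev_xxx",
    "heart_rate": 92,
    "timestamp": "...",  # supply a real timestamp, as in the curl example
})

# 3. The gateway then reads state per subject via X-Nefesh-Subject: usr_demo.
```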

Graceful Degradation

No biometric signal available? The request passes through unchanged: no crash, no error. The response carries the header X-Nefesh-Adapted: false.

Privacy

  • No prompts or responses are logged
  • LLM API keys are forwarded in-memory, never stored
  • Biometric state is read-only
  • GDPR compliant

Open source on GitHub

Browse the source, file issues, or contribute to the project.

github.com/nefesh-ai/nefesh-gateway