Visual Signals

Facial expressions and body language provide real-time emotional context that complements physiological data. Nefesh accepts pre-extracted visual features — no images or video are ever sent to the API.

Accepted Fields

| Field | Type | Values | Description |
|-------|------|--------|-------------|
| `expression` | string | `relaxed`, `neutral`, `tense`, `frowning` | Classified facial expression |
| `gaze` | string | `steady`, `averted`, `darting` | Gaze pattern; `averted` and `darting` may indicate discomfort |
| `posture` | string | `upright`, `slouched`, `tense`, `leaning_away` | Body posture classification |
| `engagement` | float | 0.0 - 1.0 | Visual engagement score (eye contact, face orientation) |
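To catch typos before a request is sent, the label sets above can be checked client-side. A minimal sketch; `validate_visual_fields` is a hypothetical helper, not part of the Nefesh SDK, and the API performs its own validation regardless:

```python
# Allowed label sets, taken from the Accepted Fields table above.
EXPRESSIONS = {"relaxed", "neutral", "tense", "frowning"}
GAZES = {"steady", "averted", "darting"}
POSTURES = {"upright", "slouched", "tense", "leaning_away"}


def validate_visual_fields(expression=None, gaze=None,
                           posture=None, engagement=None):
    """Raise ValueError for any field outside the documented range.

    Every field is optional; only classified fields need to be sent.
    """
    if expression is not None and expression not in EXPRESSIONS:
        raise ValueError(f"unknown expression: {expression!r}")
    if gaze is not None and gaze not in GAZES:
        raise ValueError(f"unknown gaze: {gaze!r}")
    if posture is not None and posture not in POSTURES:
        raise ValueError(f"unknown posture: {posture!r}")
    if engagement is not None and not 0.0 <= engagement <= 1.0:
        raise ValueError("engagement must be between 0.0 and 1.0")
```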

How It Works

You extract visual features on the client side using any face/pose detection library (e.g., MediaPipe, OpenCV) and send the classified results. Nefesh fuses these with other signal categories to refine the stress score.

Visual signals are particularly valuable in video call and telehealth scenarios where a camera is already active.
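The flow above can be sketched end to end: assemble a payload from whatever labels the client-side detector produced this frame, then POST it. The endpoint URL and auth scheme here are assumptions for illustration; check the Nefesh API reference for the real values:

```python
import json
import urllib.request

# Hypothetical ingestion endpoint -- the real URL may differ.
NEFESH_URL = "https://api.example.com/v1/signals/visual"


def build_visual_payload(session_id, timestamp, expression=None,
                         gaze=None, posture=None, engagement=None):
    """Assemble a payload containing only the fields that were
    actually classified this frame. No raw frames or landmarks
    ever enter the payload -- labels only."""
    payload = {"session_id": session_id, "timestamp": timestamp}
    for key, value in [("expression", expression), ("gaze", gaze),
                       ("posture", posture), ("engagement", engagement)]:
        if value is not None:
            payload[key] = value
    return payload


def make_request(payload, api_key):
    """Build (but do not send) the HTTP request for the payload."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        NEFESH_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Sending is then a single `urllib.request.urlopen(make_request(payload, key))`, or the equivalent in your HTTP client of choice.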

Example Payload

```json
{
  "session_id": "sess_abc123",
  "timestamp": "2026-03-30T14:30:00Z",
  "expression": "tense",
  "gaze": "averted",
  "engagement": 0.35
}
```

Privacy

Nefesh never receives images, video frames, or facial landmarks. Only the classified labels (e.g., "tense", "averted") are transmitted. All visual processing happens on the client device.