# Visual Signals
Facial expressions and body language provide real-time emotional context that complements physiological data. Nefesh accepts pre-extracted visual features; no images or video are ever sent to the API.
## Accepted Fields
| Field | Type | Values | Description |
|---|---|---|---|
| expression | string | relaxed, neutral, tense, frowning | Classified facial expression |
| gaze | string | steady, averted, darting | Gaze pattern; averted or darting may indicate discomfort |
| posture | string | upright, slouched, tense, leaning_away | Body posture classification |
| engagement | float | 0.0 - 1.0 | Visual engagement score (eye contact, face orientation) |
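These fields map directly onto a small typed structure. Below is a minimal Python sketch of client-side validation against the table above; the `VisualSignals` type, the value sets, and the `validate` helper are illustrative names for this example, not part of any Nefesh SDK.

```python
from typing import TypedDict

# Allowed labels, mirroring the Accepted Fields table above.
EXPRESSIONS = {"relaxed", "neutral", "tense", "frowning"}
GAZES = {"steady", "averted", "darting"}
POSTURES = {"upright", "slouched", "tense", "leaning_away"}


class VisualSignals(TypedDict, total=False):
    expression: str    # one of EXPRESSIONS
    gaze: str          # one of GAZES
    posture: str       # one of POSTURES
    engagement: float  # 0.0 - 1.0


def validate(signals: VisualSignals) -> None:
    """Reject out-of-range values before they reach the API."""
    if "expression" in signals and signals["expression"] not in EXPRESSIONS:
        raise ValueError(f"unknown expression: {signals['expression']}")
    if "gaze" in signals and signals["gaze"] not in GAZES:
        raise ValueError(f"unknown gaze: {signals['gaze']}")
    if "posture" in signals and signals["posture"] not in POSTURES:
        raise ValueError(f"unknown posture: {signals['posture']}")
    if "engagement" in signals and not 0.0 <= signals["engagement"] <= 1.0:
        raise ValueError("engagement must be between 0.0 and 1.0")
```

All fields are optional; send whichever signals your client can extract reliably.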
## How It Works
You extract visual features on the client side using any face/pose detection library (e.g., MediaPipe, OpenCV) and send the classified results. Nefesh fuses these with other signal categories to refine the stress score.
Visual signals are particularly valuable in video call and telehealth scenarios where a camera is already active.
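As a concrete starting point, here is a minimal client-side sketch using OpenCV and MediaPipe's FaceMesh. The heuristics in `extract_visual_features` (the nose-centering engagement proxy, the fixed 0.6 gaze threshold, the hardcoded expression label) are illustrative placeholders; in practice you would substitute a trained classifier.

```python
import cv2
import mediapipe as mp

# Reuse one FaceMesh instance across frames (video mode).
face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1
)


def extract_visual_features(frame) -> dict:
    """Classify one BGR frame into Nefesh visual-signal labels."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(rgb)

    if not results.multi_face_landmarks:
        # No face detected: report zero engagement, omit other labels.
        return {"engagement": 0.0}

    landmarks = results.multi_face_landmarks[0].landmark
    # Crude engagement proxy: how close the nose tip (landmark 1)
    # sits to the horizontal center of the frame.
    nose_x = landmarks[1].x  # normalized to [0, 1] across the frame width
    engagement = max(0.0, 1.0 - abs(nose_x - 0.5) * 2.0)

    return {
        "expression": "neutral",  # placeholder: swap in a real classifier
        "gaze": "steady" if engagement > 0.6 else "averted",
        "engagement": round(engagement, 2),
    }


# Usage: grab one webcam frame and classify it.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(extract_visual_features(frame))
cap.release()
```

Note that only the returned labels and scores leave the device; the frame itself is discarded after classification.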
## Example Payload
```json
{
  "session_id": "sess_abc123",
  "timestamp": "2026-03-30T14:30:00Z",
  "expression": "tense",
  "gaze": "averted",
  "engagement": 0.35
}
```
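Submitting such a payload is a plain HTTPS POST. The sketch below uses Python's `requests`; the endpoint URL and bearer-token header are assumptions for illustration, since this section does not specify them, so check the API reference for the real values.

```python
import datetime

import requests

# ASSUMPTION: endpoint path and auth scheme are illustrative placeholders.
NEFESH_URL = "https://api.nefesh.example/v1/signals/visual"
API_KEY = "YOUR_API_KEY"  # placeholder credential

payload = {
    "session_id": "sess_abc123",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    ),
    "expression": "tense",
    "gaze": "averted",
    "engagement": 0.35,
}

resp = requests.post(
    NEFESH_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=5,
)
resp.raise_for_status()  # surface 4xx/5xx errors early
```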
## Privacy
Nefesh never receives images, video frames, or facial landmarks. Only the classified labels (e.g., "tense", "averted") are transmitted. All visual processing happens on the client device.