Digital Avatars & Virtual Companions

Digital avatars and AI companions feel more authentic when they respond to the user's emotional state — not just their words. Nefesh enables avatars to mirror, react to, and adapt to real human biometrics.

The Problem

Current digital avatars rely on scripted emotional responses: they smile when the user says something positive and look concerned when the user says something negative. This text-only approach misses the large share of communication that is non-verbal, including tone of voice, facial expression, and physiological state.

How Nefesh Helps

  1. Physiological mirroring — the avatar's body language adjusts to the user's real stress level
  2. Authentic reactions — the avatar notices when the user is stressed (even if they say "I'm fine") and responds with appropriate empathy
  3. Adaptive conversation — the avatar shortens responses, speaks more softly, or pauses when user stress is high (see the sketch after this list)
  4. Engagement tracking — the avatar detects disengagement and re-engages proactively
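
The adaptation logic itself can be small. Below is a minimal TypeScript sketch of items 3 and 4, assuming a hypothetical getHumanState() binding that returns a stress score, trend, and engagement value; the real Nefesh API surface may differ.

```typescript
// Minimal sketch of the adaptation loop. getHumanState() and its return
// shape are assumptions for illustration; the actual Nefesh API may differ.
type HumanState = {
  stressScore: number;                     // 0-100, higher = more stressed
  trend: "rising" | "falling" | "stable";  // short-term direction
  engagement: number;                      // 0-1, how attentive the user appears
};

declare function getHumanState(): Promise<HumanState>; // hypothetical binding

type AvatarStyle = {
  maxResponseSentences: number;
  voiceVolume: "normal" | "soft";
  pauseBeforeReplyMs: number;
  reEngage: boolean;
};

// Map the measured state to conversational style (items 3 and 4 above).
function styleFor(state: HumanState): AvatarStyle {
  const highStress = state.stressScore > 60 || state.trend === "rising";
  return {
    maxResponseSentences: highStress ? 2 : 5,    // shorten responses under stress
    voiceVolume: highStress ? "soft" : "normal", // speak more softly
    pauseBeforeReplyMs: highStress ? 800 : 200,  // leave room to breathe
    reEngage: state.engagement < 0.3,            // re-engage if attention drifts
  };
}

async function nextTurnStyle(): Promise<AvatarStyle> {
  return styleFor(await getHumanState());
}
```

The point of the mapping is that conversational style is derived from the measured state on every turn, rather than hard-coded into script branches.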

Signal Sources

  • Camera rPPG — heart rate via remote photoplethysmography from the device camera, no wearable needed
  • Face tracking — expression and gaze from the existing face tracking pipeline
  • Voice — tone classification from the microphone
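
These three sources can be blended into a single score. The sketch below shows one illustrative weighting; the Signals shape, resting baseline, and weights are all assumptions for the example, not Nefesh internals.

```typescript
// Illustrative fusion of the three signal sources into one 0-100 stress score.
// Field names, baseline, and weights are assumptions, not Nefesh internals.
type Signals = {
  heartRateBpm: number;   // camera rPPG
  facialTension: number;  // 0-1, from the face tracking pipeline
  voiceStrain: number;    // 0-1, from microphone tone classification
};

const clamp01 = (x: number) => Math.min(Math.max(x, 0), 1);

function stressScore(s: Signals, restingBpm = 65): number {
  // Normalize heart rate as elevation above a resting baseline.
  const hrArousal = clamp01((s.heartRateBpm - restingBpm) / 40);
  // Weighted blend; heart rate dominates since it is hardest to mask.
  const blended = 0.5 * hrArousal + 0.3 * s.facialTension + 0.2 * s.voiceStrain;
  return Math.round(blended * 100); // matches the stress_score scale below
}
```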

Example

The user talks to their AI companion about their day.

Avatar internally:
1. get_human_state → stress_score: 64, rising trend
2. Avatar leans forward slightly, softens expression
3. "You sound like today was a lot. Want to talk about it, or would you rather just unwind?"
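
Expressed as code, the trace might look like the following sketch; getHumanState, setPose, and say are hypothetical stand-ins for the Nefesh binding and the avatar's animation and speech layers.

```typescript
// The example trace as a sketch. getHumanState, setPose, and say are
// hypothetical helpers, not a documented API.
declare function getHumanState(): Promise<{
  stressScore: number;
  trend: "rising" | "falling" | "stable";
}>;
declare function setPose(pose: {
  lean: "forward" | "neutral";
  expression: "soft" | "neutral";
}): void;
declare function say(text: string): void;

async function greetStressedUser(): Promise<void> {
  const state = await getHumanState(); // step 1: e.g. stressScore 64, "rising"

  if (state.stressScore > 60 && state.trend === "rising") {
    setPose({ lean: "forward", expression: "soft" }); // step 2: mirror and soften
    say(
      "You sound like today was a lot. Want to talk about it, " +
        "or would you rather just unwind?"
    ); // step 3: offer an out instead of pushing
  }
}
```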