Multimodal Interaction Experiments
Exploration of gesture, voice, and gaze-based interactions for next-gen interfaces. Created prototypes demonstrating spatial UI, predictive interactions, and seamless mode-switching. Built with Unity and custom ML models.

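The "seamless mode-switching" described above can be sketched as confidence-based arbitration between modalities. This is a hypothetical illustration, not the project's actual implementation: it assumes each modality (gesture, voice, gaze) has an ML model emitting a per-frame confidence score, and switches the active mode only when a challenger clearly beats the current one, so noisy scores don't cause thrashing.

```python
# Hypothetical sketch of confidence-based mode arbitration between input
# modalities. Names and the margin value are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModalitySignal:
    name: str          # e.g. "gesture", "voice", "gaze"
    confidence: float  # 0.0-1.0, e.g. from a per-modality ML model

class ModeArbiter:
    def __init__(self, switch_margin: float = 0.15):
        # A challenger must beat the active modality by this margin to take
        # over -- simple hysteresis against frame-to-frame score jitter.
        self.switch_margin = switch_margin
        self.active = None

    def update(self, signals: list[ModalitySignal]) -> str:
        best = max(signals, key=lambda s: s.confidence)
        if self.active is None:
            self.active = best.name
        else:
            current = next(s for s in signals if s.name == self.active)
            if best.confidence - current.confidence > self.switch_margin:
                self.active = best.name
        return self.active

arbiter = ModeArbiter()
frame = [ModalitySignal("gesture", 0.4),
         ModalitySignal("voice", 0.9),
         ModalitySignal("gaze", 0.3)]
print(arbiter.update(frame))  # voice
```

The margin-based switch is one common way to keep transitions "seamless": the interface holds the current modality through brief confidence dips instead of flickering between input modes.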
Challenge
Designing a high-performance interface that drives conversion while preserving brand consistency and user clarity across multiple touchpoints.
Solution
Developed a modular design system with optimized user flows, clear visual hierarchy, and performance-driven layouts, then ran A/B tests to validate design decisions and iterated on real user data.
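For the A/B testing mentioned above, a standard way to decide whether a variant's conversion lift is real is a two-proportion z-test. The sketch below is illustrative only; the conversion counts are made-up example figures, not data from this project.

```python
# Illustrative two-proportion z-test for comparing conversion rates between
# an A/B pair. All numbers below are invented for the example.
from math import erf, sqrt

def ab_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, built from math.erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the chosen threshold (commonly 0.05) supports shipping the variant; otherwise the test keeps running or the change is dropped.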
Results
- Improved conversion rates through optimized user flows
- Enhanced brand consistency across all campaign assets
- Scalable design system for future campaigns
- Positive user feedback on interface clarity