Immersive AI Systems Group

About


Immersive AI

Immersive AI systems merge sensing, movement, and real-time generative feedback into adaptive environments for therapy, creativity, and exploration. Designed to respond to people in intuitive and embodied ways, they read motion, emotion, and expression, transforming them into dynamic, interactive worlds.

Beyond traditional interfaces, we create room-scale environments and VR worlds where presence, perception, and imagination converge — living dialogues between human and machine that foster play, learning, and restoration.

Our work spans the Neurotechnology Warehouse and the Digital Neurotherapeutics Clinic at the Champalimaud Center for the Unknown — from prototyping and experimentation in the Warehouse to the creation of clinical Immersion Rooms that bring these technologies into therapeutic use.
 

INFO

Get in touch

 

Write to us to collaborate, or to find out more about our work: immersive_ai@research.fchampalimaud.org

 

Team


Meet our team


Niklas Fricke, PhD

Senior AI Scientist

Johannes Stelzer, PhD

Senior AI Scientist


Fatemeh Molaei, PhD

Senior AI Scientist


Alexander Loktyushin, PhD

Senior AI Scientist

Projects


Discover our projects

Here are some of our ongoing projects, focused on immersive Virtual Reality environments.

Technologies


Motion tracking

Discover our Tools

Our systems integrate real-time 3D motion tracking, immersive VR environments, spatial audio, and generative AI. We use high-precision multi-sensor capture to reconstruct the body in motion, enabling motion fingerprinting — fine-grained analysis of motor patterns and individual movement signatures. Eye tracking, haptic interfaces, and biometric sensing give insight into the psychological and cognitive states of the user.
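To make the idea of motion fingerprinting concrete, here is a toy sketch that reduces a stream of 3D joint positions to a small feature vector (mean speed and a crude smoothness proxy). It is purely illustrative — the function name, sampling rate, and features are our assumptions for this example, not the group's actual pipeline.

```python
import math

def fingerprint(positions, dt=1 / 120):
    """Toy motion fingerprint for one tracked joint.

    positions: list of (x, y, z) samples captured every dt seconds.
    Returns a small feature dict; a real system would use many joints
    and far richer features.
    """
    # Instantaneous speed between consecutive samples (m/s)
    speeds = [math.dist(a, b) / dt for a, b in zip(positions, positions[1:])]
    # Change in speed between consecutive samples (m/s^2)
    accels = [(s2 - s1) / dt for s1, s2 in zip(speeds, speeds[1:])]
    return {
        "mean_speed": sum(speeds) / len(speeds),
        # Mean absolute acceleration: a crude proxy for movement smoothness
        "jerkiness": sum(abs(a) for a in accels) / len(accels),
    }

# Example: a joint moving at constant velocity has zero "jerkiness"
fp = fingerprint([(0, 0, 0), (0.01, 0, 0), (0.02, 0, 0)])
# fp["mean_speed"] == 1.2 m/s, fp["jerkiness"] == 0.0
```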

Data streams are processed online by multimodal AI architectures, combining vision, motion, sound, and language understanding. Large language models provide contextual reasoning, while generative networks synthesize audiovisual feedback in real time.

On the output side, adaptive VR rendering, real-time video generation, AI-generated music, and spatially reactive soundscapes create environments that evolve with the user. Gaussian splatting supports rapid 3D reconstruction and rendering, and 3D animation with rigged characters enables expressive avatars and interactive agents.

Our systems operate as a continuous loop of perception, reasoning, and generation — sensing human signals, interpreting intent, and producing personalized immersive feedback for research, therapy, and creative exploration.
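The loop above — sensing human signals, interpreting intent, producing feedback — can be sketched as a minimal event loop. All names and components here are hypothetical stand-ins for illustration; in a real system the hooks would wrap motion trackers, multimodal models, and audiovisual renderers.

```python
class ImmersiveLoop:
    """Minimal sketch of a sense -> interpret -> generate cycle."""

    def __init__(self, sensors, interpreter, generators):
        self.sensors = sensors          # name -> callable returning raw signals
        self.interpreter = interpreter  # raw signals -> inferred state/intent
        self.generators = generators    # name -> callable mapping state to feedback

    def step(self):
        # 1. Perception: poll every sensor for its latest reading
        signals = {name: read() for name, read in self.sensors.items()}
        # 2. Reasoning: infer a user state from the combined signals
        state = self.interpreter(signals)
        # 3. Generation: each output channel reacts to the inferred state
        return {name: gen(state) for name, gen in self.generators.items()}

# Toy usage: slow movement is interpreted as "calm" and shapes the soundtrack
loop = ImmersiveLoop(
    sensors={"motion": lambda: {"speed": 0.4}},
    interpreter=lambda s: "calm" if s["motion"]["speed"] < 1.0 else "active",
    generators={"music": lambda state: f"{state}-tempo soundtrack"},
)
out = loop.step()  # {"music": "calm-tempo soundtrack"}
```

In practice each step would run concurrently and at different rates (tracking at hundreds of hertz, language reasoning much slower), but the structure of the cycle is the same.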
 

Art & AI


Therapeutic Synergies

Artistic collaboration is integral to our research. Working with artists allows us to explore the expressive dimensions of our technologies — turning experimental systems into immersive installations, performances, and concerts that reveal how people feel, move, and respond within AI-driven environments.

These projects act as open laboratories, where the same real-time perception and generative systems used in therapy are recontextualized as creative instruments. Public exhibitions such as Metamersion or large-scale immersive concerts invite audiences to experience AI not as a hidden process, but as an interactive, perceptual partner.

Artistic engagement produces valuable data on human–AI interaction — how people adapt, play, and express themselves in adaptive spaces. It also closes a feedback loop between creative experimentation and clinical application, helping us refine motion-based therapies, improve interface design, and inspire new directions for immersive neurotechnology.
 
