
A neuroscientist develops safety protocols for immersive technology

Julia Scott helps researchers safeguard participants’ well-being and data privacy during testing through new ethics guidelines and case studies.
November 25, 2025
By Nic Calande
A man wearing a VR headset uses hand controllers to control a robot in a lab setting.

Imagine a volunteer donning a virtual-reality headset for a stress-management study. They enter a simulated emergency—alarms blaring, smoke rising. Their breathing quickens and their heart pounds. The experience is designed to teach resilience under pressure, but without proper consent and a clear way to exit the simulation, such experiments can unintentionally trigger panic attacks or psychological distress.

In another lab, a different kind of risk emerges. As researchers collect motion data from commercial biofeedback devices—like smart watches or glasses—they discover that anonymized recordings of how someone walks, reaches, or tilts their head can be used to re-identify them later, exposing participants to privacy breaches that few consent forms currently address.

As immersive technologies like VR, augmented and extended reality (AR and XR), and brain–computer interfaces spread rapidly across healthcare, education, and neuroscience, they promise breakthroughs in learning and therapy—but also create new ethical concerns. Unlike other digital media, which involve passive observation or detached interaction, immersive experiences are active, embodied engagements that register more like real memories. Emerging systems that deliver neural or sensory stimulation also raise unanswered questions about physical safety and autonomy.

At ÐÓ°ÉÊÓÆµ’s Markkula Center for Applied Ethics, adjunct assistant professor of bioengineering Julia Scott is leading efforts to address the risks from tools that blur the boundaries between mind, body, and machine.

“Honestly, I realized there was a need for this kind of research because of a mismatch between the risk assessment by the Institutional Review Boards (IRBs) and my own,” Scott explains. “In the review of my protocols, I thought the wrong questions were being asked regarding human subjects' protections because these risks are so novel.”

Leaning on her academic and industry work leading Santa Clara’s Brain and Memory Care Lab and advising X Reality Safety Intelligence (XRSI), she and her research partners, former XRSI Chief of Staff Bhanujeet Chaudhary and Aryan Bagade MS ’25, developed a new safety guide for IRB committees to consider when reviewing immersive technology research.

This guide was applied to three case studies—each grounded in how immersive technologies are being used in real-world lab research across the U.S. and U.K.

The first, “Control of Physiological Signals Under Stress Using Biofeedback,” was based on a Cambridge University research group investigating how participants can learn stress regulation through VR-based biofeedback. Scott’s team used this research scenario to evaluate how informed consent and mental-health safeguards should be handled when participants are placed in simulated high-stress environments.

The second, “Haptic Interface System Using Transcranial Magnetic Stimulation,” drew from the University of Chicago’s research on using noninvasive brain stimulation to simulate the sense of touch in mixed-reality systems. Scott’s team raised questions about neurological safety, appropriate medical oversight, and how participants should be screened and trained before exposure.

A table analyzing the low, medium, and high human risks involved in spatial and motion-based VR technologies.

The third case study, “VR Motion Data, Privacy, and Re-identification Risks,” was based on an Illinois Institute of Technology research project that assessed how users manage real and virtual objects simultaneously while solving a puzzle. Both the original researchers and Scott’s team considered the ethical concerns around using motion-tracking data collected by commercial XR devices as a unique biometric identifier. Scott’s team called attention to the fact that individuals can be re-identified from position data using deep learning models, prompting calls for new privacy standards, secure data storage requirements, and limits on data reuse.

These case studies surveyed the breadth of risks in XR research, and the team’s proposed safeguards are summarized in their starter safety guide, which includes their “3 C’s of Ethical Consent in XR”—Context, Control, and Choice. The “3 C’s” are a practical framework for ensuring that participants understand the immersive experience, maintain agency during studies, and make fully informed decisions about data sharing and future use.

The case studies were tested at the 2024 PRIM&R Conference by more than 100 IRB compliance officers. Because many standard review boards lack experience with XR, haptics, or neurotechnology, Scott hopes the safety guide will spread awareness of these new concerns at the compliance level.

Rooted in the value of cura personalis—care for the whole person—Scott’s goal is not only compliance, but also a culture of care in the age of AI and XR that prevents harm and increases trust.

“In April, the Markkula Center hosted Digital Dignity Day, acknowledging how much of our lives are represented in the digital domain,” Scott recalls. “We need an adapted understanding of dignity in these complex XR dimensions that are quickly scaling and being normalized. Establishing responsible use at every step, from pioneering research to commercialization, is an imperative for human dignity.”
