Researchers identify best method for teaching children how to use AR


The researchers, based at the University of Texas at San Antonio (UTSA), said a major barrier to the wider adoption of the technology for experiential learning is that AR designs are geared toward adults, relying on voice or gesture commands.

Through classroom testing with elementary school students, the UTSA team found that AR programs are best delivered using controller commands, followed by programs that communicate in age-appropriate language.

“The majority of AR programs urge users to speak commands such as ‘select’, but a child doesn’t necessarily communicate in this manner,” said John Quarles, co-author and associate professor in the UTSA Department of Computer Science.

“We have to create AR experiences that are designed with a child in mind. It’s about making experiential learning grow and adapt with the intended user,” Quarles continued, stressing that currently, many voice commands are built to recognise adult voices but not those of children.

Quarles, working alongside research assistant Brita Munsinger, designed the study to replace more complex word instructions with simpler commands that younger subjects could more readily understand. This allowed the children to complete a series of tasks more quickly and with fewer errors.

“One of my favourite parts of working in human-computer interaction is the impact your work can have. Any time someone uses technology, there’s an opportunity to improve how they interact with it,” said Munsinger. “With this project, we hope to eventually make augmented reality a useful tool for teaching STEM subjects to kids.”

The study was conducted in classrooms with children aged nine to 11, who wore Microsoft HoloLens headsets and were asked to complete a series of tasks.

The team's analysis found that these young students made fewer errors, reported less fatigue and rated usability higher when they interacted with AR through hardware controllers. Voice and gesture selection both took longer than controller selection.

The study also found that the children's fatigue levels were highest when they had to make gesture commands. This modality was also rated the least usable, while the controller was rated highest on usability.

“We hope this study will serve as a launching point to improve the future immersive learning tools in our classrooms,” said Quarles, whose areas of focus include human-computer interaction and virtual, augmented and mixed realities.

According to a 2019 Deloitte report on the state of AR, investment in this segment of digital reality will be led by the US and is estimated at over $3.5bn (£2.7bn).
