October 16-20, 2023 | Hybrid | Sydney, Australia

DEMOS

Location: Gallery 1&2
Day 2: 17 October 2023 (Tuesday) | Time: 12:30 pm - 5:30 pm

D1 – Testbed for Intuitive Magnification in Augmented Reality

Ryan Schubert, Gerd Bruder, Greg Welch

Different technologies have been introduced to magnify portions of the human visual field. A promising option is modern high-resolution digital cameras, which are very flexible: their imagery can be presented to users in real time on mobile or head-mounted displays, with intuitive 3D user interfaces providing control over the magnification. In this demo, we present a novel design space and testbed for augmented reality (AR) magnifications, in which a head-mounted display presents real-time magnified camera imagery. We introduce different unimanual and bimanual AR interfaces for defining the magnified imagery in the user’s visual field.
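As a rough illustration of the bimanual case, the sketch below (hypothetical, not taken from the testbed) places the magnified region between the user's two tracked hands and scales the zoom factor with hand separation; the function name, base separation, and zoom limit are illustrative assumptions.

    # Hypothetical sketch of a bimanual AR magnification control: the region of
    # interest sits between the two hands and the zoom factor grows with hand
    # separation. Not the authors' implementation.
    import numpy as np

    def bimanual_magnification(left_hand, right_hand, base_separation=0.15, max_zoom=8.0):
        # Hand positions are 3D points (metres) from the HMD's hand tracker.
        left = np.asarray(left_hand, dtype=float)
        right = np.asarray(right_hand, dtype=float)
        center = (left + right) / 2.0              # region of interest between the hands
        separation = np.linalg.norm(right - left)  # hand distance drives the zoom
        zoom = float(np.clip(separation / base_separation, 1.0, max_zoom))
        return center, zoom

    center, zoom = bimanual_magnification([-0.10, 1.30, 0.40], [0.12, 1.31, 0.42])
    print(center, round(zoom, 2))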

D2 – Augmented Reality-Based Demo for Immersive Training in Horticultural Therapy

Alessandro Luchetti, Stefania Zaninotto, Mariolino De Cecco, Giovanni Maria Achille Guandalini, Yuichiro Fujimoto, Hirokazu Kato

This demo leverages the concept of Shared Augmented Reality for immersive training in horticultural therapy. The setup involves both the therapist and the patient wearing a HoloLens. The therapist uses a smartphone to set the parameters of the virtual environment, positioning virtual pots. The patient, wearing the HoloLens, waters the virtual pots with a real watering can (without water) to grow virtual flowers. This interactive and immersive experience aims to improve the patient’s skills in daily living activities while training physical and cognitive functions. From the therapist’s perspective, the framework provides objective assessment parameters that complement the clinical eye.

D3 – InstantCopresence: A Spatial Anchor Sharing Methodology for Co-located Multiplayer Handheld and Headworn AR

Botao Hu, Yuchen Zhang, Sizheng Hao, Yilan Tao

We propose InstantCopresence, a simple yet instant methodology for synchronizing co-located multiplayer AR sessions across handheld and headworn devices. The approach makes multiplayer setup effortless: a user scans another device’s QR code and instantly enters a shared mixed reality experience, without an internet connection or prior spatial map scans, thanks to the local networking technology behind AirDrop. We further showcase the capabilities of our method with an asymmetric role-playing multiplayer game called Dragon Hunting, which supports up to 8 players with diverse roles, highlighting the potential and simplicity of the proposed methodology.
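To make the setup concrete, the sketch below shows the kind of session descriptor a host device might encode in its QR code and a joining device would decode after scanning; the field names, helper functions, and use of JSON are illustrative assumptions, not the actual InstantCopresence format.

    # Hypothetical QR payload for a co-located session: enough information for the
    # scanning device to find the host over local peer-to-peer networking and adopt
    # its spatial anchor. Field names are illustrative.
    import json
    import uuid

    def make_session_payload(peer_name: str, service_port: int) -> str:
        return json.dumps({
            "session_id": str(uuid.uuid4()),  # identifies this co-located session
            "peer_name": peer_name,           # advertised local-network service name
            "port": service_port,             # port for the anchor/pose sync channel
            "anchor_id": str(uuid.uuid4()),   # shared spatial anchor used to align coordinates
        })

    def parse_session_payload(qr_text: str) -> dict:
        # What the joining device does after decoding the QR code it scanned.
        return json.loads(qr_text)

    payload = make_session_payload("dragon-hunting-host", 47808)
    print(parse_session_payload(payload)["session_id"])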

D4 – Empowering Education through EERP: A Customizable Educational VR Escape Room Platform

Ali Darejeh

Learning to program effectively requires practical knowledge, creativity, and problem-solving skills. However, current teaching methods often disregard these crucial elements, focusing solely on coding principles, and student engagement suffers as a result. To mitigate this, game-based learning, especially through virtual reality, has proven effective in enhancing outcomes and motivation. Addressing this gap, we have developed a pioneering VR-based escape room platform for learning programming. The platform empowers educators to tailor rooms and puzzles to their curriculum, while its multiplayer feature fosters collaboration and enriches the learning journey. By seamlessly integrating immersive learning with personalized content, our platform can revolutionize programming education.

D5 – MRLabeling: Create RGB-D Datasets On-The-Fly using Mixed Reality

Richard Nguyen, Rani Baghezza, Benjamin Georges, Charles Gouin-Vallerand, Kevin Bouchard, Maryam Amiri

This demo accompanies the poster submission MRLabeling: Create RGB-D Datasets On-The-Fly using Mixed Reality. We present MRLabeling, an application developed for the Microsoft HoloLens 2 that allows the easy creation and annotation of datasets directly in mixed reality. We first describe the design of the system, the way 3D bounding boxes drawn by the user are projected into 2D to create annotated images on the fly, and the use of segmentation algorithms to go beyond bounding boxes. We then explore the use of depth data, the current limitations of the system, and avenues for future work.
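The core projection step can be pictured with the minimal sketch below, which assumes a simple pinhole camera model with hypothetical intrinsics (it is not the MRLabeling source): the eight corners of a user-drawn 3D box are projected into the RGB frame and their axis-aligned extent becomes the 2D annotation.

    # Project a 3D bounding box (camera frame, metres) into a 2D image-space box.
    import numpy as np

    def box_corners(center, size):
        # Eight corners of an axis-aligned 3D box around 'center'.
        c, half = np.asarray(center, float), np.asarray(size, float) / 2.0
        offsets = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
        return c + offsets * half

    def project_box(center, size, K):
        # Pinhole projection of all corners, then their axis-aligned 2D extent.
        corners = box_corners(center, size)
        pixels = (K @ corners.T).T
        pixels = pixels[:, :2] / pixels[:, 2:3]   # perspective divide (assumes z > 0)
        (xmin, ymin), (xmax, ymax) = pixels.min(axis=0), pixels.max(axis=0)
        return xmin, ymin, xmax, ymax

    K = np.array([[525.0, 0.0, 320.0],   # hypothetical RGB camera intrinsics
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
    print(project_box(center=[0.0, 0.0, 1.5], size=[0.30, 0.20, 0.25], K=K))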

D6 – Networking AI-Driven Virtual Musicians in Extended Reality

Torin Hopkins, Rishi Vanukuru, Che Chuan Weng, Chad Tobin, Amy Banic, Mark D Gross, Ellen Yi-Luen Do

Music technology has embraced artificial intelligence as part of its evolution. This work investigates a new facet of that relationship, examining AI-driven virtual musicians in networked music experiences. Networked music, which grew in popularity during the COVID-19 pandemic, enables musicians to meet and play together virtually. This work extends existing research, which has focused on networked human-human interaction, by exploring AI-driven virtual musicians. The pilot aims to open opportunities for improving networked musical experiences with AI-driven virtual musicians and to inform directions for future studies with the system.

D7 – MaskWarp: Visuo-Haptic Illusions in Mixed Reality using Real-time Video Inpainting

Brandon J Matthews, Carolin Reichherzer, Bruce H Thomas, Ross Smith

We present MaskWarp, a novel approach to enable haptic retargeting and other visuo-haptic illusions in video see-through (VST) mixed reality (MR). Virtual reality provides a unique ability to shift where users perceive their virtual hands. However, in VST MR, users can see their real hands through a video feed, breaking the illusion. MaskWarp uses real-time video inpainting to remove the physical hand from the video, which is then replaced with a virtual hand. The virtual hand position can then be warped for visuo-haptic illusions. We use haptic retargeting to demonstrate this technique in an engaging cups and ball game.
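For readers unfamiliar with the warping step, the sketch below shows a common body-warping formulation for haptic retargeting (not necessarily the one MaskWarp uses): the rendered hand is offset by an amount that grows with the user's progress toward the target, so the real hand lands on the physical prop while the virtual hand appears to reach the displaced virtual target.

    # Generic haptic-retargeting warp: interpolate an offset into the rendered
    # hand position as the reach progresses. Illustrative, not MaskWarp's code.
    import numpy as np

    def warped_hand(real_hand, start, physical_target, virtual_target):
        real_hand, start = np.asarray(real_hand, float), np.asarray(start, float)
        physical_target = np.asarray(physical_target, float)
        virtual_target = np.asarray(virtual_target, float)
        total = np.linalg.norm(physical_target - start)
        progress = 0.0 if total == 0 else float(np.clip(
            np.linalg.norm(real_hand - start) / total, 0.0, 1.0))
        offset = virtual_target - physical_target   # full offset applied at the target
        return real_hand + progress * offset        # position at which to render the hand

    print(warped_hand(real_hand=[0.05, 0.00, 0.30], start=[0.00, 0.00, 0.30],
                      physical_target=[0.20, 0.00, 0.40], virtual_target=[0.25, 0.05, 0.40]))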

D8 – Volumetric X-ray Vision Using Illustrative Visual Effects

Thomas John Clarke, Wolfgang Mayer, Joanne Zucco, Adam Drogemuller, Ross Smith

Volumetric rendering and X-ray vision are two methods that can reveal the internal structure of a real-world object. In the prototype presented in this work, we invite participants to view several objects’ internal structures using direct volumetric rendering combined with X-ray vision techniques, so they can fully understand the underlying structure, all while wearing an immersive optical see-through (OST) augmented reality (AR) headset, the HoloLens 2. We present a platform capable of generating volumetric datasets in a range of situations and demonstrate two illustrative visual effects (stippling and cross hatching) that can be used for X-ray vision in augmented reality.

D9 – A Novel Split Rendering XR Framework with Occlusion Support

János Dóka, Bálint György Nagy, Dávid Jocha, András Kern, Balazs Peter Gero, Balazs Sonkoly

Extended Reality (XR) applications, and especially the revolutionary Mixed Reality technology, can reshape our society with a new level of immersive experiences and novel ways of interacting. However, the required quality of user experience and the desired high visual fidelity pose several challenges. In this demo, we present a novel edge/cloud-based XR platform providing essential features for future applications. During the demo, we highlight the advantages of split rendering and the challenges of occlusion support in dynamic environments. We showcase different solutions, discuss the pros and cons of each approach, and analyze their feasibility in different application areas.

D10 – Hybrid Cross Reality Collaborative System

Hyunwoo Cho, Eunhee Chang, Zhuang Chang, Jiashuo Cao, Bowen Yuan, Jonathon D Hart, Gun A. Lee, Thammathip Piumsomboon, Mark Billinghurst

This work introduces a Mixed Reality (MR)-based hybrid cross reality collaborative system that enables recording and playback of user actions in a large task space over time. An expert records their actions, such as virtual object placement, which others can view in AR or VR later to complete a task. VR offers a pre-scanned 3D workspace for spatial understanding, while AR gives real-scale information for manipulating real objects. Users can easily switch between AR and VR views, enhancing task performance and co-presence during asynchronous collaboration. The system is showcased in an object assembly scenario with parts retrieved from a storehouse.

D11 – Point & Portal: A New Action at a Distance Technique For Virtual Reality

Daniel L Ablett, Andrew Cunningham, Gun A. Lee, Bruce H Thomas

This paper introduces Point & Portal, a novel virtual reality (VR) interaction technique, inspired by Point & Teleport. This new technique enables users to configure portals using pointing actions, and supports seamless action at a distance and navigation without requiring line of sight.

D12 – Demo of MultiVibes: a VR Controller that has 10 Times more Vibrotactile Actuators

Grégoire Richard, Thomas Pietrzak, Ferran Argelaguet Sanz, Anatole Lécuyer, Géry Casiez

Leveraging the funneling effect, an illusion in which multiple vibrations are perceived as a single one, we propose MultiVibes, a VR controller capable of rendering spatialized sensations at different locations on the user’s hand and fingers. The prototype includes ten vibrotactile actuators in direct contact with the skin of the hand, limiting the propagation of vibrations through the controller. We propose three scenarios to showcase MultiVibes’ spatial feedback: the first involves recognizing spatio-temporal patterns, the second focuses on spatialization when interacting with virtual objects, and the final scenario explores free interactions, including tool-based interactions.
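The funneling effect is often modelled with amplitude panning between neighbouring actuators; the sketch below uses a standard energy-preserving phantom-sensation formulation with a hypothetical actuator layout, and is not necessarily the MultiVibes driver code.

    # Split one target vibration between the two actuators closest to the desired
    # location so the sensation is perceived between them (funneling illusion).
    import numpy as np

    def actuator_amplitudes(target, actuator_positions, amplitude=1.0):
        target = np.asarray(target, float)
        positions = np.asarray(actuator_positions, float)
        dists = np.linalg.norm(positions - target, axis=1)
        i, j = np.argsort(dists)[:2]              # two nearest actuators
        w = dists[j] / (dists[i] + dists[j])      # closer actuator gets more energy
        amps = np.zeros(len(positions))
        amps[i] = amplitude * np.sqrt(w)
        amps[j] = amplitude * np.sqrt(1.0 - w)
        return amps

    palm_layout = [[0.00, 0.00], [0.02, 0.00], [0.04, 0.00]]  # hypothetical positions (metres)
    print(actuator_amplitudes(target=[0.015, 0.00], actuator_positions=palm_layout))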

D13 – SiTAR: Situated Trajectory Analysis for In-the-Wild Pose Error Estimation

Tim Scargill, Ying Chen, Tianyi Hu, Maria Gorlatova

The properties of environments that host AR experiences can cause device pose tracking error, but it is challenging to ascertain which regions of an environment might result in noticeable errors. Traditional pose tracking evaluations require ground-truth pose and do not connect errors with specific regions of the real environment. Here we present our solution, SiTAR (Situated Trajectory Analysis for Augmented Reality), the first situated trajectory analysis system for AR that incorporates estimates of pose tracking error. We show how pose error estimates can be calculated by obtaining multiple trajectory estimates, and how they can be aligned with real environments using situated visualizations to produce actionable insights.
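As a minimal illustration of the underlying idea (not the SiTAR implementation), the sketch below treats the disagreement between two independent, time-aligned trajectory estimates of the same walk as a per-pose proxy for tracking error, which could then be mapped back onto the environment for visualization.

    # Per-pose disagreement between two aligned trajectory estimates, used as a
    # rough error proxy. Assumes both trajectories are time-synchronized and
    # expressed in the same coordinate frame.
    import numpy as np

    def pairwise_pose_error(trajectory_a, trajectory_b):
        a, b = np.asarray(trajectory_a, float), np.asarray(trajectory_b, float)
        return np.linalg.norm(a - b, axis=1)   # metres, one value per timestamp

    traj_a = [[0.00, 0.00, 0.00], [0.10, 0.00, 0.00], [0.20, 0.01, 0.00]]
    traj_b = [[0.00, 0.00, 0.00], [0.11, 0.00, 0.00], [0.23, 0.02, 0.00]]
    errors = pairwise_pose_error(traj_a, traj_b)
    print(errors, "largest deviation at index", int(np.argmax(errors)))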
