October 16-20, 2023 | Hybrid | Sydney, Australia

TUTORIALS

T1 – Impactful XR: Identifying and Designing for High-Impact Use Cases

Location: Mat 307
Day 1: 16 October 2023 (Monday) | Time: 9:00 am - 12:30 pm, 2:00 pm - 6:00 pm

XR technology has the potential to significantly impact people and the world we live in. Developing XR applications is becoming easier, and there are numerous areas we could focus on. However, using technology for technology's sake does not necessarily add value, and with such a large range of feasible use cases for XR, choosing where to direct one's research and application of the technology can be daunting. The question is no longer whether we can utilize XR, but where and how we should utilize it to have the most real-world impact. This tutorial will help researchers and practitioners determine where and how to focus their efforts in order to build the most important applications, increase value, advance their careers, and contribute to making the world a better place. Participants are encouraged to bring their own use cases and questions to share via Q&A and discussion.

Jason Jerald, NextGen Interactions

Jason Jerald is CEO of NextGen Interactions, a company focused on XR solutions for high-risk incidents. He has worked on numerous XR projects over the last 25+ years with more than 40 organizations including Meta, Google, Intel, AT&T, General Motors, Valve, Oculus, Virtuix, Sixense, MergeVR, NASA, three U.S. national laboratories, and seven universities. Jason’s work has been featured on ABC’s Shark Tank, on the Discovery Channel, in the New York Times, and on the cover of the MIT Press journal Presence. Jason has authored various publications and patents, most notably the best-selling book ‘The VR Book: Human-Centered Design for Virtual Reality.’

Sharif Razzaque, SentiAR

Sharif Razzaque has been pursuing useful and effective applications of AR/VR for more than 25 years. His work includes applications for the tele-collaborative design and operation of satellites for Lockheed Martin and the US Air Force, as well as measuring physiological signals in VR to help understand the feeling of presence and social co-presence and how they are affected by technical parameters such as latency and avatar design. Sharif spent 15 years bringing medical ultrasound AR technology from initial concept in Henry Fuchs' lab to commercialization and daily clinical use in treating liver cancer and uterine fibroids. Dr. Razzaque has worked as a Senior Software Engineer at Microsoft Mixed Reality and Chief Engineer at Medtronic. He is currently Director of R&D for SentiAR, a startup launching an augmented reality device for cardiac electrophysiology procedures.

Benjamin Lok, University of Florida

Benjamin Lok is a Professor in the Computer and Information Science and Engineering Department at the University of Florida and an entrepreneur, having previously co-founded Shadow Health (now part of Elsevier). Professor Lok's research focuses on using virtual humans and mixed reality to train communication skills, within the areas of virtual environments, human-computer interaction, and computer graphics. His work on virtual humans is being used by millions of learners across the globe to improve patient outcomes. Professor Lok received a Ph.D. (2002) and M.S. (1999) from the University of North Carolina at Chapel Hill, and a B.S. in Computer Science (1997) from the University of Tulsa.

Scott Ledgerwood, NIST PSCR

Scott Ledgerwood leads the UI/UX portfolio at NIST PSCR, focusing on improving usability and user interface testing for first responders. His team is developing new test methodologies that leverage VR and AR to enable improved research, testing, and development of first responder technologies. He oversees the Public Safety Immersive Test Center (PSITC) in Boulder, Colorado, which provides opportunities to conduct immersive public safety standards and measurement testing in a large-scale XR environment. Over the past five years, Scott has funded 22 grants, 6 prize challenges, and an SBIR, all focused on VR and AR for public safety, totaling over $20 million.

Regis Kopper, UNC Greensboro

Regis Kopper is an assistant professor of Computer Science at the University of North Carolina at Greensboro. He directs the Interactive Realities Lab and conducts research on extended reality interfaces and user experience for immersive interactive systems. His work involves interaction design, simulation, and training, with a focus on applications in public safety, healthcare, and the humanities. Regis has received support for his research from various organizations, including NIST, NSF, NIH, DoD, FLAD-Portugal, and FAPESP-Brazil. He earned his B.A. (2004) and M.S. (2006) in Computer Science from PUCRS, Brazil, and his Ph.D. (2011) in Computer Science from Virginia Tech.

Tom Furness, Virtual World Society

Tom Furness is an amalgam of professor, inventor, and entrepreneur in a professional career that spans 53 years. In addition to his contributions in photonics, electro-optics, and human interface technology, he is an original pioneer of virtual and augmented reality technology, widely known as the 'grandfather' of virtual reality. Tom is currently a professor of Industrial and Systems Engineering, with adjunct professorships in Electrical Engineering, Mechanical Engineering, and Human Centered Design and Engineering, at the University of Washington (UW), Seattle, Washington, USA. He is the founder of the family of Human Interface Technology Laboratories at the University of Washington and in Christchurch, New Zealand, and Tasmania, Australia. Tom is a Fellow of the IEEE and the founder and chairman of the Virtual World Society, a non-profit for extending virtual reality as a learning system for families and other humanitarian applications.

T2 – Heads-Up Computing: Designing Seamless Interfaces for Everyday AR

Location: Mat 231
Day 1: 16 October 2023 (Monday) | Time: 2:00 pm - 6:00 pm

Heads-Up Computing (https://www.nus-hci.org/heads-up-computing/), a human-centered approach to augmented reality (AR), aims to seamlessly integrate digital information into our daily tasks, enhancing our ability to access and interact with information in real-time. As this form of AR gains prominence, it is crucial to explore and define best practices for designing interfaces that facilitate intuitive and efficient interactions. This tutorial aims to delve into the nuances of interface design for Heads-Up Computing for everyday AR, discussing key considerations, challenges, and strategies for creating interfaces that seamlessly blend digital information with our daily experiences.

Shengdong Zhao, National University of Singapore
https://www.shengdongzhao.com/

Shengdong (Shen) Zhao is an associate professor in the Computer Science Department at the National University of Singapore (NUS). He completed his PhD in computer science at the University of Toronto and founded the NUS-HCI Lab at NUS in January 2009. He is passionate about developing new interface tools and applications that simplify and enrich people's lives (e.g., Draco, which was named best iPad app of the year in 2016). He publishes regularly in top HCI conferences and journals (ToCHI, CHI, UbiComp, CSCW, UIST, IUI, etc.). He is interested in connecting academic results with industry, and has served as a senior consultant with the Huawei Consumer Business Group. He frequently participates in program committees of top HCI conferences, and served as papers co-chair for the ACM SIGCHI 2019 and 2020 conferences.

Ashwin Ram, National University of Singapore
https://sites.google.com/view/ashwin-ram

Ashwin Ram is a research fellow at the National University of Singapore. He completed his PhD at the NUS-HCI Lab, advised by Prof. Shengdong Zhao. Prior to this, he completed his Bachelor's in Electronics and Communication Engineering at NIT-Trichy, India. His research interests are primarily in the domain of AR/VR. In particular, he has explored how dynamic information (e.g., videos) can be used effectively in everyday mobile multitasking contexts (e.g., while walking) on smart glasses. His vision is to blend digital applications such as video-based e-learning and mental wellbeing more seamlessly into a user's daily routine. More recently, he has also become interested in methods and tools for rapid authoring and UI design evaluation for AR.

T3 – A Beginner’s Guide to Neural Rendering

Location: Ainswth G02
Day 5: 20 October 2023 (Friday) | Time: 9:00 am - 12:30 pm, 2:00 pm - 6:00 pm

You may have heard of NeRFs (Neural Radiance Fields), or neural rendering more generally.  Neural rendering brings together deep learning and computer graphics in order to generate extremely compelling 3D content from a set of 2D images. In this full-day tutorial, we’re going to spend the morning learning the core principles of neural networks, deep learning, and volume rendering in order to prepare ourselves to scale NeRF Mountain. In the afternoon, we’ll dissect the original NeRF paper in detail, explore extensions to the method and advancements in neural rendering, and see a lot of cool examples. We’ll close with a forward-thinking discussion on the opportunities and challenges associated with the use of neural rendering in MR. By the end of this tutorial, you should have a solid grasp of the NeRF method and the underlying technologies, including neural networks and deep learning.
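As a preview of the volume rendering the morning session builds toward, the sketch below alpha-composites density and color samples along a single camera ray, which is the core image-formation step of the original NeRF paper. It is a minimal NumPy illustration written for this description (the function name and toy inputs are ours, not from any NeRF codebase):

    import numpy as np

    def composite_ray(sigmas, colors, deltas):
        # Alpha-composite N samples along one camera ray (NeRF's quadrature).
        # sigmas: (N,) volume densities; colors: (N, 3) RGB; deltas: (N,) sample spacings.
        alphas = 1.0 - np.exp(-sigmas * deltas)         # per-segment opacity
        trans = np.cumprod(1.0 - alphas + 1e-10)        # accumulated transmittance
        trans = np.concatenate([[1.0], trans[:-1]])     # first sample is fully visible
        weights = trans * alphas                        # contribution of each sample
        return (weights[:, None] * colors).sum(axis=0)  # final RGB for this ray

    # Toy example: a reddish fog that thickens along the ray.
    sigmas = np.linspace(0.1, 2.0, 64)
    colors = np.tile([0.8, 0.3, 0.2], (64, 1))
    deltas = np.full(64, 0.05)
    print(composite_ray(sigmas, colors, deltas))

In the full method, the densities and colors come from an MLP queried at each sample's position and view direction; this differentiable compositing is what lets the network be trained end to end from 2D images.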

Shohei Mori, Graz University of Technology, Graz, Austria
https://mugichoko445.github.io/

Shohei Mori is a postdoctoral researcher (University Assistant) at the Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Austria, and a guest lecturer (Global) at Keio University, Japan. He received his B.S. (2011), M.S. (2013), and PhD (2016) in engineering from Ritsumeikan University, Japan. Before joining the ICG, he held a Research Fellowship for Young Scientists (DC-1) from the Japan Society for the Promotion of Science (JSPS) during his PhD, and he started his career as a JSPS Research Fellow for Young Scientists (PD) at Keio University, Japan. His research interests lie in vision, graphics, imaging, displays, and machine learning to enable mixed reality and to understand user perception under modified vision. He is also eager to develop mixed reality applications for improving remote collaboration, education, entertainment, and filmmaking. He has won best journal paper, conference paper, demo, and presentation awards at international scientific events, including IEEE VR, IEEE ISMAR, KELVAR, and KJMR.

Richard Skarbez, La Trobe University, Melbourne, Australia
https://www.richardskarbez.com

Rick Skarbez is currently a lecturer in the Department of Computer Science and Information Technology at La Trobe University in Melbourne, Australia. His research interests lie primarily in the area of human interaction with immersive technologies, such as virtual / augmented / mixed reality (xR). Within that area, he has broad interests in user interfaces, user experience design, visualization, analytics, and training/education. 

T4 – CAIVARS: Cognitive Aspects of Interaction in Virtual and Augmented Reality Systems

Location: CivEng G1
Day 5: 20 October 2023 (Friday) | Time: 9:00 am - 12:30 pm

In this tutorial, an interdisciplinary team of researchers will describe how best to design and analyze interaction in Virtual and Augmented Reality (VR/AR) systems from different perspectives. Manuela Chessa, an expert in perceptual aspects of human-computer interaction, will discuss how interaction in AR affects our senses and how misperception issues negatively affect interaction. Several technological solutions for interacting in VR and AR will be discussed, and finally, the challenges and opportunities of mixed reality (MR) systems will be analyzed. Fabio Solari, an expert in biologically inspired computer vision, will focus on foveated visual processing for action tasks and on the geometry and calibration of interactive AR. Guido Maiello will present the link between our eyes and hands in the real world, with the aim of improving the design of interaction techniques in VR and AR. Nikhil Deshpande, with expertise in telepresence and teleoperation, will discuss how mixed reality can enhance the intuitiveness and effectiveness of users in real-time robotic teleoperation. Finally, Dimitri Ognibene will talk about adaptive artificial systems with exploratory skills and active perception capabilities, flexible enough to tackle complex social environments (physical or virtual). Overall, this unique panel of multidisciplinary researchers will delineate a compelling argument in favor of investigating human cognition and perception in the context of AR/VR.

Manuela Chessa, University of Genoa, Italy

Manuela Chessa is an Associate Professor in Computer Science at the Department of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa. Her research interests focus on developing natural human-machine interfaces based on virtual, augmented, and mixed reality; on the perceptual and cognitive aspects of interaction in VR and AR; on the development of bioinspired models; and on the study of biological and artificial vision systems. She studies the use of novel sensing and 3D tracking technologies and visualization devices to develop natural and ecological interaction systems, always having human perception in mind. Recently, she has addressed the coherent and natural combination of virtual reality and the real world to obtain robust and effective extended reality systems. She has been program chair of the HUCAPP International Conference on Human-Computer Interaction Theory and Applications, chair of the BMVA Technical Meeting on Vision for Human-Computer Interaction and Virtual Reality Systems, lecturer of the tutorial Natural Human-Computer Interaction in Virtual and Augmented Reality at VISIGRAPP 2017, and lecturer for the tutorial Active Vision and Human-Robot Collaboration at ICIAP 2017 and ICVS 2019. She was a keynote speaker at the IEEE VR workshop ReDigiTS on 3D Reconstruction, Digital Twinning, and Simulation for Virtual Experiences. She organized the first four editions of the CAIVARS tutorial at ISMAR 2018, 2020, 2021, and 2022. She is the author of more than 90 papers in international book chapters, journals, and conference proceedings, and co-inventor of 3 patents.

Guido Maiello, University of Southampton, UK

Guido Maiello is a Lecturer at the School of Psychology of the University of Southampton, UK. Previously he was a Marie Skłodowska-Curie Research Fellow at Justus Liebig University Giessen, Germany. In 2017, Dr Maiello completed a doctoral program in Visual Neuroscience at University College London. During his doctoral studies, he held visiting researcher positions at Harvard Medical School, Northeastern University, and the University of Genoa. His doctoral work spanned human depth perception, motor control, virtual reality technology, and clinical vision. The main focus of his research has been to translate findings from basic vision science into meaningful technological and clinical applications. Dr Maiello has authored 26 papers in peer-reviewed scientific journals, more than 50 abstracts and conference papers at international meetings, and one international patent. He is an Academic Editor at the journal PLOS ONE, serves as a reviewer for more than 30 international journals, and acts as a grant reviewer for the European Commission, the Natural Sciences and Engineering Research Council of Canada, and the Israel Ministry of Innovation, Science and Technology. Dr Maiello further serves as a committee member for the Vision Sciences Society Annual Meeting (VSS), the ACM Symposium on Eye Tracking Research & Applications (ETRA), and the International Conference on Human-Computer Interaction Theory and Applications (HUCAPP). Dr Maiello's research program investigates how we employ our sense of sight to guide how we grasp objects. Through targeted investigations of human behaviour in AR and VR, he aims to uncover the visuomotor neural computations that allow us to pick up and manipulate objects with our hands.

Dimitri Ognibene, University of Milano-Bicocca, Italy

Dimitri Ognibene is Associate Professor of Human Technology Interaction at the University of Milano-Bicocca, Italy, where he is also Director of BiConnect, the Center for Human Technology Connection Research. His main interest lies in understanding how social agents with limited sensory and computational resources adapt to complex and uncertain environments, how this can induce suboptimal behaviors such as addiction or antisocial behavior, and how this understanding may be applied to real-life problems. To this end, he develops both neural and Bayesian models and applies them in physical (e.g., robots) and virtual (e.g., social media) settings. Before joining Milano-Bicocca, he was a Lecturer in Computer Science and Artificial Intelligence at the University of Essex from October 2017, having moved from Universitat Pompeu Fabra (Barcelona, Spain), where he was a Marie Curie Actions COFUND fellow. Previously, he developed algorithms for active vision in industrial robotic tasks as a Research Associate (RA) at the Centre for Robotics Research, King's College London; developed Bayesian methods and robotic models for attention in social and dynamic environments as an RA at the Personal Robotics Laboratory, Imperial College London; and studied the interaction between active vision and autonomous learning in neuro-robotic models as an RA at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (ISTC-CNR). He also collaborated with the Wellcome Trust Centre for Neuroimaging (UCL) on modeling exploration in the active inference paradigm. He has been a Visiting Researcher at the Bounded Resource Reasoning Laboratory at UMass and at Reykjavik University (Iceland), exploring the symmetries between active sensor control and active computation, or metareasoning. He obtained his PhD in Robotics in 2009 from the University of Genoa with a thesis titled "Ecological Adaptive Perception from a Neuro-Robotic Perspective: Theory, Architecture and Experiments" and graduated in Information Engineering at the University of Palermo in 2004. He is an associate editor of Cognitive Computation and Systems, handling editor of Cognitive Processing, review editor for Paladyn – The Journal of Behavioral Robotics, Frontiers in Bionics and Biomimetics, and Frontiers in Computational Intelligence in Robotics, and guest associate editor for Frontiers in Neurorobotics and Frontiers in Cognitive Neuroscience. He has chaired the robotics area of several conferences and workshops.

Fabio Solari, University of Genoa

Fabio Solari is an Associate Professor of Computer Science at the Department of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa. His research concerns the study of visual perception, with the aim of designing novel bio-inspired artificial vision systems and developing natural human-computer interaction in virtual and augmented reality. In particular, his research interests relate to computational models of motion and depth estimation, space-variant visual processing, and scene interpretation. Such models are able to replicate relevant aspects of human experimental data, which can help improve virtual and augmented reality systems to provide natural perception and interaction. He is also interested in the perceptual assessment of virtual/augmented/extended reality systems, with specific attention to perceptual and cognitive aspects, and in the development of systems that allow a natural experience and ecological human-computer interaction. He is the principal investigator of three international projects: Interreg Alcotra CLIP "E-Santé/Silver Economy", PROSOL "Jeune", and PROSOL "Senior". He has participated in five European projects: EYESHOTS and SEARISE (FP7-ICT), DRIVSCO (FP6-IST-FET), MCCOOP (FP6-NEST), and ECOVISION (FP5-IST-FET). He is a reviewer for Italian PRIN and FIRB projects, Marie Curie fellowships, and the ERC. He was general chair of the IEEE International Conference on Image Processing, Applications and Systems in 2020 (IEEE IPAS 2020, online) and in 2022 (IEEE IPAS 2022, 5-7 December 2022, Genova, Italy). He has a pending international patent application (WO2013088390) on augmented reality, and two Italian patent applications on virtual (No. 0001423036) and augmented (No. 0001409382) reality.

Nikhil Deshpande, Istituto Italiano di Tecnologia (IIT), Italy

Nikhil Deshpande is a Researcher and Head of the Vicarios Mixed Reality and Simulations Lab at the Istituto Italiano di Tecnologia (IIT), Italy. His research focuses on improving situational awareness for human users in advanced telerobotics systems, utilizing holistic approaches in mixed reality and robot learning. In particular, keeping user-centered design at the core, Nikhil examines the role that robotics and mixed reality technology can play in simplifying the lives of human users operating in diverse and difficult environments, e.g., telesurgery and disaster response. To that end, the following topics are currently being investigated, among others: (1) mixed reality as an intuitive and immersive medium for remote telerobotics, combining robot digital twins with ego- and exo-centric remote visualization; (2) intuitive and natural mapping between human gestures and robot motions, independent of user viewpoint; (3) intelligent streaming and gaze-contingent rendering of real-time 3D visualization data in immersive remote telerobotics, to reduce bandwidth and latency using deep learning and semantic scene understanding; and (4) immersive multi-user mixed reality with real-time 3D reconstruction of remote environments, combined with real-time encountered-type haptic feedback. The outcomes of these investigations are directly applied in nationally funded projects that Nikhil has coordinated, including "Sistemi Cibernetici Collaborativi – Robot Teleoperativo" (3 + 3 years; €3.16 + €2.0 million) and "Sistemi Cibernetici Collaborativi – Cadute dall'Alto" (3 years; €2.0 million). Previously, Nikhil focused on the design, development, and evaluation of novel surgical devices, surgeon-machine telesurgery interfaces, and controllers for robot-assisted surgeries. He received his Ph.D. in Electrical Engineering from North Carolina State University (NCSU), USA, in 2012, investigating intelligent autonomous navigation leveraging interaction with wireless sensor networks. Nikhil has organized workshops at leading robotics conferences, including IEEE/RSJ IROS and ICAR, and given invited talks at international events and universities, including KAIST, Ewha Womans University, and the IITs in India.

T5 – Welcome to the Machine: Industrial Scenarios Redefined with eXtended Technologies

Location: CivEng G8
Day 5: 20 October 2023 (Friday) | Time: 9:00 am - 12:30 pm

This tutorial will present essential concepts associated with the use of eXtended Reality (XR) technologies in Industry scenarios from a Human-Centered Design (HCD) perspective. Afterward, an overview of existing work in the literature will be presented, allowing the audience to grasp the historical perspective from where XR started to where it is now, and highlighting what has changed recently to drive the wider adoption happening worldwide. This is especially important given the growing number of prototypes. The path to usable, realistic, and impactful solutions must entail an explicit understanding of how XR can assist in industry scenarios and how it may contribute to a more effective work effort. To help achieve this vision, various research projects conducted by the authors in recent years will be described, including work on assembly, picking, logistics, maintenance, quality control, remote collaboration, and others. The tutorial will end with a call to action, illustrating important topics that should be addressed to increase the maturity of the field. Finally, we intend to have a period for discussion, in which attendees may raise their questions and opinions, including interesting topics for future research.

Bernardo Marques, University of Aveiro

Bernardo is a Research Assistant at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies, with a focus on computer-supported cooperative work, computer graphics, extended reality, and information visualization, with a particular interest in Industrial scenarios. Before this, he worked in industry for several years as a technician.

Samuel Silva, University of Aveiro

Samuel is an Assistant Professor at the Department of Electronics, Telecommunications and Informatics (DETI), as well as a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies with a focus on multimodal interaction, computer vision, computer graphics, extended reality, as well as information visualization, with a particular interest in Industrial scenarios.

Carlos Ferreira, University of Aveiro

Carlos Ferreira is an Associate Professor at the Department of Economics, Management, Industrial Engineering and Tourism (DEGEIT), as well as a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies with a focus on information systems, operational research, statistical data analysis, as well as extended reality, data and information visualization, with a particular interest in Industrial scenarios.

Tiago Araújo, University of Aveiro

Tiago is a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies, with a focus on information visualization, computer graphics, extended reality, artificial intelligence, machine learning, deep learning, and explainability, with a particular interest in Industrial scenarios.

Paulo Dias, University of Aveiro

Paulo is an Assistant Professor at the Department of Electronics, Telecommunications and Informatics (DETI), as well as a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies with a focus on extended reality, computer vision, computer graphics, 3D reconstruction, as well as data and information visualization, with a particular interest in Industrial scenarios.

Beatriz Sousa Santos, University of Aveiro

Beatriz is an Associate Professor at the Department of Electronics, Telecommunications and Informatics (DETI), as well as a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). Her interests include human-centered technologies with a focus on extended reality, as well as data and information visualization, with a particular interest in Industrial scenarios.

T6 – From Ideas to Reality – Accelerating Mixed Reality Prototyping with AI

Location: Mat 231
Day 1: 16 October 2023 (Monday) | Time: 9:00 am - 12:30 pm

The tutorial addresses several key research issues in the field of mixed reality and AI. Some of these issues include:

    1. Intelligent prototyping tools: Leveraging current AI-powered tools and frameworks that can enhance the efficiency and productivity of mixed reality prototyping. This includes creating systems that enable real-time feedback, automatic optimization, and intelligent suggestions for designers and developers during the prototyping phase. We will be using GitHub Copilot for quick prototyping.
    2. AI-assisted content creation: Exploring how AI can be leveraged to automate or assist in the creation of 3D assets, animations, sounds, textures, and interactive elements for mixed reality experiences.
    3. Data-driven design decisions: Examining how AI can analyze user data, feedback, and behavior patterns to inform design decisions and optimize mixed reality experiences. This involves exploring techniques such as user modeling, sentiment analysis, and predictive analytics (a minimal sketch follows this list).
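As a toy illustration of point 3, the following self-contained Python sketch tallies per-feature sentiment from logged user feedback to flag which parts of an MR prototype may need design attention. The feature names, comments, and keyword lexicon below are invented purely for illustration; a real pipeline would substitute a trained sentiment model:

    from collections import defaultdict

    # Toy lexicon, for illustration only; a production system would use a trained model.
    POSITIVE = {"smooth", "intuitive", "immersive", "comfortable", "easy"}
    NEGATIVE = {"laggy", "confusing", "nauseating", "jittery", "hard"}

    def score_feedback(entries):
        # Aggregate a crude sentiment score per UI feature from (feature, comment) pairs.
        scores = defaultdict(int)
        for feature, comment in entries:
            words = set(comment.lower().split())
            scores[feature] += len(words & POSITIVE) - len(words & NEGATIVE)
        return dict(scores)

    feedback = [
        ("hand_menu", "selection feels laggy and confusing"),
        ("teleport", "smooth and intuitive to use"),
        ("hand_menu", "hard to hit small buttons"),
    ]
    print(score_feedback(feedback))  # {'hand_menu': -3, 'teleport': 2}

Features that accumulate negative totals (here, the hypothetical hand_menu) would be the first candidates for redesign in the next prototyping iteration.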

Suranga Nanayakkara, National University of Singapore
http://www.suranga.info/

Suranga Nanayakkara is an Associate Professor at the Department of Information Systems & Analytics, School of Computing, National University of Singapore (NUS). In 2011, he founded the Augmented Human Lab to explore ways of designing intelligent human-computer interfaces that extend the limits of our perceptual and cognitive capabilities. Suranga is a Senior Member and Distinguished Speaker of the ACM and has been involved in a number of roles, including General Chair of the Augmented Human Conference in 2015 and many review and program committees, including SIGCHI, TEI, and UIST. He has won many awards, including MIT Technology Review's Innovators Under 35 (TR35) award for the Asia Pacific region, Outstanding Young Persons of Sri Lanka (TOYP), and an INK Fellowship in 2016.

Prasanth Sasikumar, National University of Singapore
https://www.prasanthsasikumar.com/

Prasanth recently joined the Augmented Human Lab at the National University of Singapore (NUS) as a Research Fellow. His research centers on augmented reality (AR) and virtual reality (VR), exploring the potential of these immersive technologies. His expertise lies in leveraging multimodal input and remote collaboration to enhance user experiences in AR and VR environments.