IEEE International Conference on

ARTIFICIAL INTELLIGENCE & VIRTUAL REALITY

December 12-14, 2022 - Virtual (with satellite events)


IEEE AIVR 2022 program

The conference will feature a 5-hour time slot each day containing a keynote speech (1h), a poster session (2h), and an oral session (2h). This time slot will be at a different time each day so that every participant can attend at least one day during their normal "awake" time. Times below are CET. See here for an overview of the program across different time zones.


Monday, December 12, 2022 (+/- 1 depending on time zone, see overview)


Tuesday, December 13, 2022 (+/- 1 depending on time zone, see overview)


Wednesday, December 14, 2022 (+/- 1 depending on time zone, see overview)


Keynotes (main conference)

Sylvia Xueni Pan
Goldsmiths, University of London
(UK)

Title: Social Interaction in VR

Abstract: Amongst all human activities, social interaction is one of the most complex and mysterious. Those with better social skills thrive easily in society, whilst those suffering from social function deficits (e.g., social anxiety disorder) struggle with everyday activities. The ultimate goal of my research is to use VR to improve how we socially connect and communicate with each other in face-to-face settings. In this talk, I will give several examples of how we use VR for applications in medical communication training, social neuroscience research, and commercial narrative games. I will end my talk with a discussion of the future of social interactions in the Metaverse.

Bio: Sylvia Xueni Pan is a Professor of VR at Goldsmiths, University of London. She co-leads the Goldsmiths Social, Empathic, and Embodied VR Lab (SeeVR Lab) and the MA/MSc programme in Virtual and Augmented Reality at Goldsmiths Computing.

Stefanie Zollmann
University of Otago
(New Zealand)

Title: Artificially eXtended Reality: Combining XR and AI to create new interfaces for challenging environments

Abstract: eXtended Reality interfaces have the potential to enrich human capabilities. They allow us to seamlessly integrate information into daily tasks and can overcome the disconnect between information and the physical environment in which a user needs it. At the same time, the field of Artificial Intelligence aims to reduce the gap between machines and humans by developing systems capable of performing human tasks. Combining the two opens up plenty of opportunities for smarter eXtended Reality interfaces that work in challenging environments and react to a user's environment and context. In this talk, we will explore some of the opportunities that arise from combining XR and AI to create new interfaces in large-scale environments, and we will discuss applications in architecture, construction, and engineering, as well as in sports spectating.

Bio: Stefanie Zollmann is an Associate Professor of Computer Science at the University of Otago in New Zealand, where she leads the Visual Computing Otago research group. Before starting at Otago in 2016, she worked as a senior developer at Animation Research Ltd on eXtended Reality visualization, Computer Graphics, and Computer-Vision-based tracking technology for sports broadcasting. She also worked for Daimler and Graz University of Technology. Her main research is in the field of Visual Computing, which combines traditional Computer Graphics, Computer Vision, Machine Learning, Visualization, and Human-Computer Interaction. Her research focus is on eXtended Reality (XR) for sports and media, visualization techniques for Augmented Reality, and novel methods for capturing content for immersive experiences. Stefanie serves on the editorial boards of IEEE Transactions on Visualization and Computer Graphics (TVCG) and Computers & Graphics, and was Science and Technology Program Chair for ISMAR 2019 and 2020.

Wan-Chun Alex Ma
ByteDance/TikTok
(USA)

Title: Virtual Humans: From Entertainment to Social Media to Metaverse and Beyond

Abstract: We have witnessed an ever-growing use of virtual humans over the last two decades: realistic CGI actors for storytelling in movies and video games, idols who have passed away brought back for commercials and concerts, and much more. It is no longer an impossible task for computer algorithms to generate believable virtual human imagery. How are virtual humans made possible? In this talk, I will share my experiences with virtual human production and how its technologies impact applications beyond the entertainment business. We will start with the key components behind virtual humans, such as facial scanning and tracking, and move on to new adventures in using machine learning for avatar applications in social media. We will also discuss in which scenarios virtual human technologies can make an impact on the forthcoming metaverse.

Keynotes (workshops)

Johan Jeuring
Utrecht University
(NL)

Title: The impact of AI on our teaching

Abstract: Many Computer Science textbooks, tests, and assignments formulate tasks in a few sentences, possibly with some examples. OpenAI Codex, GitHub Copilot, and other similar technologies can generate code from such texts. For relatively simple tasks, these techniques already perform surprisingly well.
These developments may have a revolutionary effect on programming education. One aspect is that students can now solve a task by asking Copilot for the answer. Another is that this may lead to a different way of programming, in which simple pieces of code are written by AI and the focus shifts to problems at a higher level, which must be broken down into smaller problems that the AI can then handle.
In this talk I will introduce GitHub Copilot and discuss how I think it will impact our teaching. I will also briefly introduce some of the other activities in our projects around AI in Education.
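To make the abstract's claim concrete, the sketch below shows the kind of task such tools already handle well: a function specified only by a docstring with one example. The function name, docstring, and body are hypothetical illustrations (not material from the talk); the body is a plausible tool-generated completion.

```python
# Hypothetical illustration: a task stated as a docstring with one example,
# the kind of prompt that Codex/Copilot-style tools typically complete correctly.

def count_vowels(text: str) -> int:
    """Return the number of vowels in `text`, case-insensitively.

    >>> count_vowels("Virtual Reality")
    6
    """
    # A plausible tool-generated completion:
    return sum(1 for ch in text.lower() if ch in "aeiou")

if __name__ == "__main__":
    print(count_vowels("Virtual Reality"))  # prints 6
```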

Shao-Yi Chien
National Taiwan University
(Taiwan)

Title: AI Based Eye Tracking as the Next User Interface for the Masses

Abstract: Eye tracking is a technology that can capture where users direct their attention. In this talk, we will first explain why eye tracking will become the core user interface in the metaverse era and will be built into all AR/VR devices. We will also show the achievements of Ganzin Technology, an eye tracking solution provider. The new generation of eye tracking systems powered by AI can remove the barrier to adopting eye tracking in compact consumer devices for broad applications, unlocking the potential of the eyes as the ultimate interface to the digital world.
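As a rough illustration of how gaze can become an interface, the sketch below implements dwell-time selection, a common gaze-interaction technique in which a target is activated once the gaze rests on it long enough. All names, thresholds, and data here are hypothetical and are not part of Ganzin's products or APIs.

```python
# A minimal, hypothetical sketch of dwell-time selection: a target counts as
# "clicked" once the gaze point has rested on it for a set duration.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float        # normalized screen coordinates in [0, 1]
    y: float
    t: float        # timestamp in seconds

@dataclass
class Target:
    name: str
    x: float
    y: float
    radius: float   # hit radius in normalized units

DWELL_SECONDS = 0.6  # assumed dwell threshold

def dwell_select(samples: list[GazeSample], targets: list[Target]) -> str | None:
    """Return the first target the gaze dwells on for DWELL_SECONDS, else None."""
    dwell_start: dict[str, float] = {}
    for s in samples:
        for tgt in targets:
            if (s.x - tgt.x) ** 2 + (s.y - tgt.y) ** 2 <= tgt.radius ** 2:
                start = dwell_start.setdefault(tgt.name, s.t)
                if s.t - start >= DWELL_SECONDS:
                    return tgt.name
            else:
                dwell_start.pop(tgt.name, None)  # gaze left the target; reset timer

    return None

# Example: a synthetic gaze trace resting on a hypothetical "play" button.
targets = [Target("play", 0.5, 0.5, 0.05), Target("exit", 0.9, 0.1, 0.05)]
trace = [GazeSample(0.5, 0.5, t / 100) for t in range(80)]
print(dwell_select(trace, targets))  # -> "play"
```

Dwell time is the usual substitute for a click in gaze-only interfaces, since the eyes have no natural "button"; the threshold trades responsiveness against accidental activations (the so-called Midas touch problem).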

Bio: Shao-Yi Chien received the Ph.D. degree from the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, in 2003. In 2004, he joined the Graduate Institute of Electronics Engineering and the Department of Electrical Engineering, National Taiwan University, as an Assistant Professor, and he has been a Professor there since 2012. He served as Chair of the IEEE Circuits and Systems Society Multimedia Systems and Applications Technical Committee from 2017 to 2019. Prof. Chien is an expert in AR/VR, eye tracking, computer vision, real-time image/video processing, video coding, computer graphics, and system-on-a-chip design. He has published more than 300 papers and has been granted more than 40 patents. Since 2018, he has been the founder and CEO of Ganzin Technology, Inc., an eye tracking solution provider for AR/VR/smart glasses offering the most easily integrated solution on the market.


Poster sessions

Posters will be up at all times. The days on which authors will be present at their posters are indicated in brackets.

List of poster papers
List of demos
List of late-breaking-work papers

OpenLab satellite event

The following videos from the OpenLab pre-conference satellite event, held on December 3-4 in Taiwan, can be accessed in a dedicated room during the conference.


