
Program


Wed, Jan 17, 2024


Thu, Jan 18, 2024


  • 18:00-21:30 Banquet / dinner @ Hilton hotel

Fri, Jan 19, 2024




    IEEE AIVR 2024 keynotes

    Day 1: Randall Hill, Jr.: How We Built the Holodeck (minus the parts that break the laws of physics)

    Web, LinkedIn

    Abstract: When ICT opened its doors in 1999, our official mandate was to use entertainment and game industry skills to make military training better and to explore emerging technologies, with a vision of creating synthetic and adaptive experiences so compelling that participants responded as if they were real. That was the official “ask”, anyway. Unofficially, our Army sponsors told us our mission, if we chose to accept it, was to “Go Build the Holodeck!” And we have, apart from the parts that break the laws of physics. In his speech, Dr. Randall Hill, Executive Director of ICT, will detail what we’ve built in the past 25 years, how it works, and what we’re creating next.

    Speaker bio: Dr. Randall W. Hill is the Executive Director of the USC Institute for Creative Technologies, Vice Dean of the Viterbi School of Engineering, and was recently named the Omar B. Milligan Professor in Computer Science (Games and Interactive Media). After graduating from the United States Military Academy at West Point, Randy served as a commissioned officer in the U.S. Army, with assignments in field artillery and military intelligence, before earning his Master’s and PhD in Computer Science from USC. He worked at NASA JPL in the Deep Space Network Advanced Technology Program before joining the USC Information Sciences Institute to pursue models of human behavior and decision-making for real-time simulation environments. This research brought him to ICT as a Senior Scientist in 2000, and he was promoted to Executive Director in 2006. Dr. Hill is a member of the American Association for Artificial Intelligence and has authored over 80 technical publications.

    Day 2: Georgia Gkioxari: From Pixels to Percepts - The Evolution of Machine Vision through AI

    Web, LinkedIn

    Abstract: The advances in AI, exemplified by breakthroughs like ChatGPT, are extraordinary. These developments are the culmination of extensive research, a journey centered on developing models that can interpret the world, be it through language or imagery. In my talk, I will highlight the key technological milestones that have driven progress in visual perception, particularly the ability to associate images with objects and scenes. I will also discuss the ongoing efforts to lift perception into the third dimension, which aim to perceive objects in 3D, integrating insights from machine learning and computer graphics.

    Speaker bio: Georgia is an Assistant Professor in the Computing + Mathematical Sciences department at Caltech. She obtained her PhD in Electrical Engineering and Computer Science from UC Berkeley, where she was advised by Jitendra Malik. Prior to Berkeley, she earned her diploma from the National Technical University of Athens in Greece. After earning her PhD, she was a research scientist at Meta AI. In 2021, she received the PAMI Young Researcher Award, which recognizes a young researcher for their distinguished research contribution to computer vision. She is the recipient of the PAMI Mark Everingham Award for the open-source software suite Detectron. In 2017, Georgia and her co-authors received the Marr Prize for “Mask R-CNN”, published and presented at ICCV. She was named one of 30 influential women advancing AI in 2019 by ReWork and was nominated for the Women in AI Awards in 2020 by VentureBeat.

    Georgia’s research focuses on machine vision, namely teaching machines to see. Our world is complex: it is three-dimensional and it is dynamic. Computational models get to observe this world from imagery, but only partially, as visual data does not completely capture the richness of the world we live in. The goal of Georgia’s work is to design visual perception models that bridge the gap between 2D imagery and our 4D world.

    Day 3: Chloe LeGendre: Remain in Light: Realistic Augmented Imagery in the AI Era

    Web, LinkedIn

    Abstract: Augmented Reality (AR) blends real-world scenery with computer- and artist-generated imagery to unlock novel, creative experiences. In this talk, I will describe advances towards crafting AR imagery that seamlessly blends the real and the virtual, with a focus on matching scene lighting. Given the rapid pace of recent developments in image generation models, I will also share my perspective on these generative models as applied to AR experiences.

    Speaker bio: Chloe LeGendre is a senior software engineer at Google Research, where she is currently working on computational photography and high dynamic range imaging. Her research typically applies machine learning to problems in computer graphics, photography, and imaging, with a special focus on scene lighting measurement and estimation, color science, and portrait photography manipulation. Chloe earned her PhD in Computer Science from the University of Southern California in 2019 as a member of the USC Institute for Creative Technologies’ Vision and Graphics Lab, where she was advised by Paul Debevec. Chloe has also worked in the R&D divisions of Netflix, L’Oréal USA, and Johnson & Johnson, focused on emerging technology development in the areas of virtual production, augmented reality, and imaging.


