
2nd International Conference on

Artificial Intelligence & Virtual Reality (AIVR 2019)

San Diego, California, USA, December 9-11, 2019
Co-located with IEEE ISM 2019

Keynotes (in alphabetical order)

Jeremy Bailenson, Stanford University, Stanford, California, USA
Title: Experience on Demand
Abstract: Virtual reality is able to effectively blur the line between reality and illusion, granting us access to any experience imaginable. But how does this new medium affect its users, and does it have a future beyond fantasy and escapism? There are dangers and many unknowns in using VR, but it also can help us hone our performance, recover from trauma, improve our learning and communication abilities, and enhance our empathic and imaginative capacities.
Bio: Jeremy Bailenson is founding director of Stanford University's Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford's Center for Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Post-Doctoral Fellow and then an Assistant Research Professor.
Bailenson studies the psychology of Virtual and Augmented Reality, in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how virtual experiences can transform education, environmental conservation, empathy, and health. He is the recipient of the Dean's Award for Distinguished Teaching at Stanford.
Hrvoje Benko, Facebook Reality Labs, USA
Title: The Future of Mixed Reality Interactions
Abstract: The vision of always-on Mixed Reality interfaces that can be used in a continuous fashion for an entire day depends on solving many difficult problems, including display technology, comfort, computing power, batteries, localization, tracking, and spatial understanding. However, solving all of those will not bring us to a truly useful experience unless we also solve the fundamental problem of how to effectively interact in Mixed Reality. I believe that the solution to the MR interaction problem requires that we combine approaches from interaction design, perceptual science, and machine learning to yield truly novel and effective MR input and interactions. Such interactions will need to be adaptive to the user context, believable, and computational in nature. We are at an exciting point in the technology development curve where there are still few universally accepted standards for MR input, which leaves a ton of opportunities for both researchers and practitioners.
Bio: Hrvoje Benko is a research science manager at Facebook Reality Labs, where he leads a research group focusing on novel input devices, interaction techniques, and adaptive interfaces in augmented and virtual reality. His interests span augmented and virtual reality, interactive projection mapping, haptics, new input form factors and devices, as well as touch and freehand gestural input. He is the co-author of more than 60 scientific papers and journal articles and holds more than 40 patents. Prior to his current role, he was a researcher at Microsoft Research. He served as General Chair (2014) and Program Chair (2012) of the ACM User Interface Software and Technology conference. His publications have received several best paper awards at the top venues in his field, including ACM UIST, ACM SIGCHI, and ACM CSCW. He served as Information Director and Associate Editor for the ACM TOCHI journal, the premier journal in the Human-Computer Interaction field. More information: http://www.hrvojebenko.com
Pablo Cesar, Centrum Wiskunde & Informatica (CWI), Netherlands
Title: Social VR: Using Volumetric Video for Remote Communication and Collaboration
Abstract: With Social Virtual Reality emerging as a new medium where users can remotely experience immersive content with others, the vision of a true feeling of 'being there together' has become a realistic goal. This keynote will provide an overview of the challenges in achieving such a goal, based on results from practical case studies like the VR-Together project. We will discuss different technologies, like point clouds, that can be used as the format for representing highly realistic digital humans, and metrics and protocols for quantifying the quality of experience. The final intention of the talk is to shed some light on social VR as a new group of virtual reality experiences based on social, photorealistic immersive content, and to discuss the challenges regarding production, technology, and user-centric processes.
Bio: Dr. Pablo Cesar leads the Distributed and Interactive Systems group at CWI (the national research institute for mathematics and computer science in the Netherlands). Pablo's research focuses on modeling and controlling complex collections of media objects (including real-time media and sensor data) that are distributed in time and space. Pablo has co-guided six PhD theses on QoE in multimedia systems, sensing technologies, 3D tele-immersion, accessibility, socially-aware multimedia, and distributed systems, and is currently co-guiding three more on multimedia systems and sensing technology. He is the PI from CWI on two H2020 projects about object-based broadcasting (2-IMMERSE) and 3D tele-immersion (VRTogether). Pablo was also a PI on very successful EU-funded projects such as REVERIE (2011-2015) and Vconect (2011-2014). He has keynoted at venues such as the International Conference on Physiological Computing Systems (2017) and the Workshop on Educational and Knowledge Technologies (2017). Pablo has (co-)authored over 100 articles, several of which won best paper awards: ACM TVX (2018), ACM MMSys (2016 and 2013), the International Conference on Physiological Computing Systems (2016), and WSICC (2013). He is a member of the editorial board of, among others, IEEE Multimedia, ACM Transactions on Multimedia, and IEEE Transactions on Multimedia. Pablo has given tutorials on multimedia systems at prestigious conferences such as ACM Multimedia, CHI, and the WWW conference. He acted as an invited expert at the European Commission's Future Media Internet Architecture Think Tank and participates in standardization activities at MPEG (point-cloud compression) and ITU (QoE for multi-party tele-meetings and immersive media). More information: https://homepages.cwi.nl/~garcia/
Paul Conoval, Northrop Grumman Corporation, Virginia, USA
Title: Integrating Virtual Reality Capabilities into Mission Critical Systems
Abstract: Large-scale tactical and strategic mission systems are composed of a complex range of components and capabilities that are integrated across a variety of platforms and frameworks. Systems of the near and distant future will increasingly depend on autonomous operations, artificial intelligence, machine learning, and other disruptive technologies that will impact the timing dynamics and decision-making responses required of critical real-time systems. Artificial intelligence and virtual/augmented reality have the potential to serve as force multipliers and to provide new capabilities in system modeling, operations planning, immersive training environments, and real-time human-machine teaming for meeting these challenges. Considering the multi-faceted dimensions of Northrop Grumman's mission objectives, an end-to-end, systems-level design approach is required to leverage and integrate capabilities across the enterprise and achieve an integrated system-of-systems, at scale, in tactical and strategic environments. The presentation will demonstrate various examples in which artificial intelligence and virtual reality apply to Northrop Grumman systems, products, and services. The discussion will also provide insights into practical considerations for utilizing virtual reality within mission-critical applications.
Bio: Paul Conoval is Director, Technology, for Northrop Grumman's Mission Systems (MS) Sector. In this role he leads a wide range of advanced technology initiatives, including the integration of the sector's R&D program, university research, intellectual property, innovation, and the development of emerging technologies applied to Intelligence and DoD. His prior work includes the development of national-scale intelligence, SIGINT, and communications systems with an emphasis on information theory, digital signal processing, and electronics hardware design. Prior to 2009, Mr. Conoval served as Chief Technology Officer of TASC Inc. in Chantilly, VA, and also worked at the MITRE Corporation and the Singer Corporation Kearfott Division developing digital avionics and communications systems. Mr. Conoval holds a BSEE from the Cooper Union School of Engineering and an MSEE from Rutgers University. He is also an active practitioner of intellectual property and a registered patent agent with the US Patent and Trademark Office.
Chris Dede, Harvard University, Cambridge, Massachusetts, USA
Title: XR in Learning, Unlearning, Personalization, and Assessment
Abstract: Educators can use XR media in a variety of ways to help learners gain sophisticated knowledge and skills. This talk describes work in progress on a range of design-based research examples: learning a second language via immersion in culture and context, gaining knowledge and skills through immersive authentic simulations in ecosystem science and computational modeling, and fostering empathy and unlearning biases through immersive assessments. These use-cases suggest important next steps in AI and XR research.
Bio: Chris Dede is the Timothy E. Wirth Professor in Learning Technologies at Harvard's Graduate School of Education (HGSE). His fields of scholarship include emerging technologies, policy, and leadership. In 2007, he was honored by Harvard University as an outstanding teacher, and in 2011 he was named a Fellow of the American Educational Research Association. From 2014 to 2015, he was a Visiting Expert at the National Science Foundation Directorate of Education and Human Resources. His edited books include: Scaling Up Success: Lessons Learned from Technology-Based Educational Improvement; Digital Teaching Platforms: Customizing Classroom Learning for Each Student; Teacher Learning in the Digital Age: Online Professional Development in STEM Education; Virtual, Augmented, and Mixed Realities in Education; and Learning Engineering for Online Education: Theoretical Contexts and Design-Based Examples.
Tobias Höllerer, UC Santa Barbara, USA
Title: Learning for an Augmented Reality
Abstract: We are in a technological transition period. The last grand paradigm shift in terms of technology adoption, the mobile revolution, happened over a decade ago. Since then, several technologies have been competing to become the next big thing: the Internet of Things, Physical Computing, Brain-Computer Interfaces, AI and agent-based computing, and of course Virtual and Augmented Reality. In this talk, we will make the argument that Augmented Reality in particular is well-positioned to play nicely with any of these up-and-coming technologies and thus could indeed become the ubiquitous user interface that many investors and tech giants apparently trust it will be. It is also not hard to predict, given its massive adoption and many success stories over the past decade, that Machine Learning, and particularly Deep Learning, will play a major role in any human-computer interaction innovations to come, especially ones involving knowledge about the real, physical world. But the future outlined by these technological possibilities is not without problems. Machine Learning models can be hard to understand, fine-tune, control, and protect against attacks. More sensors (as beneficial for AR) mean more potential privacy intrusions. And people might rely more on automation than is good for them. What can we do as researchers to work towards a more humane technology outlook? Our take: give technology adopters more control, and let them, with technology, learn human skills that remain useful when technology ceases to be there for them.
Bio: Tobias Höllerer is Professor of Computer Science at the University of California, Santa Barbara, where he directs the Four Eyes Laboratory, conducting research in the four I's of Imaging, Interaction, and Innovative Interfaces. Dr. Höllerer holds a Diplom in informatics from the Technical University of Berlin as well as an MS and PhD in computer science from Columbia University. He is a recipient of the US National Science Foundation's CAREER award, for his work on "Anywhere Augmentation", enabling mobile computer users to place annotations in 3D space wherever they go. He was named an ACM Distinguished Scientist in 2013. Dr. Höllerer is co-author of a textbook on Augmented Reality, as well as over 200 peer-reviewed journal and conference publications in the areas of augmented and virtual reality, information visualization, intelligent user interfaces, 3D displays, mobile and wearable computing, and social computing. Several of these publications won Best Paper or Honorable Mention awards at such venues as the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE Virtual Reality, ACM Virtual Reality Software and Technology, ACM User Interface Software and Technology, ACM MobileHCI, IEEE SocialCom, and IEEE CogSIMA.
Vishy Swaminathan, Adobe, USA
Title: Enhancing Immersive Experiences with (Contextual) User Data
Abstract: It took about 20 years for video over the Internet to become delightful for end users. This was built on a large body of research from both academia and industry. In the meantime, over the last few years, AI has provided generational transformations in video (content) understanding algorithms and in our ability to learn and adapt from big user (behavioral) data. How do we stand on the shoulders of these giants (transformations) to make next-gen immersive experiences compelling? I will take the audience through a journey on how to leverage these technological transformations to enhance end-user immersive experiences, from simple 360-degree VR videos or AR scenes to more complex volumetric videos or geo-located large-scale AR applications. Some demos of past and current projects will be shown, with a call to leverage contextual user data to improve and personalize end-user immersive experiences.
Bio: Vishy (Viswanathan) Swaminathan is a Principal Scientist at Adobe Research, working at the intersection of insights from behavioral data and multimedia content. His areas of research include next-generation video and immersive experiences, improving content experiences with behavioral data, video streaming, and security. His research has substantially influenced various technologies in Adobe's video delivery and security, advertisement, and recommendations products. His work includes the guts of Adobe's video recommendations, cloud DVR compression, and HTTP Dynamic Streaming, which won the 'Best Streaming Innovation of 2011' Streaming Media Readers' Choice Award. Vishy recently received the 2017 Distinguished Alumni award from Utah State University's ECE Department. Prior to joining Adobe, Vishy was a senior researcher at Sun Microsystems Laboratories working on video servers, interactive video, next-generation head-end, and DRM technologies. Vishy has contributed to and edited multiple standards and received three certificates of appreciation from ISO for his contributions to MPEG standards. Previously, he chaired multiple technical organizations, including the Technical Committee of the Internet Streaming Media Alliance (ISMA) from its inception until 2004, Java Specification Request 158 on the Java Stream Assembly API, and the MPEG-J ad hoc group. Vishy received his MS and Ph.D. in electrical engineering from Utah State University and his B.E. degree from the College of Engineering, Guindy, Anna University, Chennai, India. Vishy has authored several papers, articles, RFCs, and book chapters, holds about 50 issued patents, and has been invited to speak at multiple conferences. He volunteers in organizing a number of IEEE and ACM conferences and most recently was the program chair for the 2017 Tech Summit at Adobe, attended by 2,800 of its brightest technical minds.

Sponsors

IEEE Computer Society

In cooperation with

Eurographics Association

ACM

ACM SIGGRAPH

ACM SIGAI

ACM SIGCHI