Place: Zoom (link will be provided after registration)
- Computer Vision for AR, Hideo Saito (TA: Woojin Cho)
- Prototyping and Evaluation for AR, Mark Billinghurst (TA: Hyung-il Kim)
- Deep learning for AR, Vincent Lepetit (TA: Ikbeom Jeon)
Contact: email@example.com
Computer Vision for AR

| Time | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| 09:00-09:50 | Image Processing | Camera model | Registration | Multiple view geometry* | Virtualized Reality |
| 10:00-10:50 | | Depth camera | ICP algorithm | Fundamental Matrix* | Visual hull reconstruction* |
| 11:00-11:50 | Image Processing exercise | Depth camera processing exercise | ICP algorithm exercise | View Morphing Exercise | Volumetric reconstruction Exercise* |
Prototyping and Evaluation for AR

| Time | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| 13:00-13:50 | Introduction to XR Design | Low fidelity prototyping | Development tools | Introduction to XR Evaluation | Analysing Results |
| 14:00-14:50 | Overview of XR Prototyping | Prototyping interactions | User interface guidelines | Evaluation Methods | Research Directions |
| 15:00-15:50 | Low fidelity prototyping exercise | Interactive prototyping example | High fidelity prototyping exercise | Evaluation Exercise | Prototype Presentation |
Deep learning for AR

| Time | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 |
|---|---|---|---|---|---|
| 16:00-18:00 | Introduction to Deep Learning; depth prediction from monocular images | 3D object and hand pose estimation; 3D scene understanding | 3D model prediction; Transformers for 3D vision | Point cloud analysis | [16:00-18:30] |
- All lectures will be given in English.
- Lectures marked with an asterisk (*) will be provided as video streaming.
In this lecture series, I will cover basic topics in computer vision. At the end of each day's lecture, an exercise will be given to help participants understand that day's topic. In the second week, I will give a tutorial on a technology called Diminished Reality, in which objects in a captured scene are virtually erased by replacing them with the background texture. As the final exercise, a Diminished Reality system should be designed and implemented.
For the exercises, I recommend that each participant use Python with OpenCV, Open3D, and other related libraries.
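As a minimal sketch of the Diminished Reality idea (an illustration, not the course's reference implementation), the snippet below assumes a binary mask marking the object to erase and a clean background plate, and simply replaces the masked pixels with background texture. The image sizes and values are invented; in a real system, OpenCV's `cv2.inpaint` can synthesize plausible background when no clean plate is available.

```python
import numpy as np

def diminish(frame: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Erase the masked object by copying in background texture.

    frame, background: HxWx3 uint8 images; mask: HxW bool array,
    True where the object to remove is.
    """
    out = frame.copy()
    out[mask] = background[mask]  # replace object pixels with background
    return out

# Toy example: a 4x4 "scene" with a 2x2 bright object in one corner.
bg = np.zeros((4, 4, 3), dtype=np.uint8)   # plain background plate
frame = bg.copy()
frame[:2, :2] = 255                        # the "object" to erase
mask = frame.sum(axis=2) > 0               # binary mask of the object
result = diminish(frame, mask, bg)
print(result.max())  # 0 -> every object pixel replaced by background
```

With a real video stream, the mask would come from segmentation or tracking, and the background plate from a pre-captured image or inpainting.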
Augmented Reality (AR) has been researched and developed for nearly 60 years, but only recently has the technology become readily available. Until recently, however, creating AR applications required strong programming skills, which is often an obstacle for people who want to create novel and intuitive AR user experiences.
In this seminar, participants will learn how to use a wide array of non-programming tools for rapid prototyping of AR experiences, and how to evaluate these experiences. The tools range from physical prototyping aids, including paper templates for sketching out AR experiences, to web-based drag-and-drop applications with rapid previews on AR devices, to immersive authoring tools for creating 3D interface mockups, and others.
The seminar will also review how to design user studies to evaluate the prototypes, including AR experiment design, qualitative and quantitative evaluation methods, and how to perform data analysis.
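As a hedged illustration of the kind of quantitative analysis involved (not material from the seminar itself), a within-subjects comparison of two prototypes often reduces to a paired t-test on per-participant measurements. The condition names and timing values below are invented, and only the Python standard library is used.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical task-completion times (seconds) for the same 8 participants
# under two prototype interfaces; all values are invented for illustration.
handheld = [42.1, 38.5, 45.0, 40.2, 39.8, 44.3, 41.0, 43.6]
headworn = [36.4, 35.0, 41.2, 37.8, 36.1, 40.0, 38.2, 39.5]

# Paired t-test: analyse the per-participant differences.
diffs = [a - b for a, b in zip(handheld, headworn)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # t-statistic, n-1 = 7 dof
print(f"mean difference = {mean(diffs):.2f} s, t({n - 1}) = {t:.2f}")
```

The resulting t-statistic is compared against the critical value for n-1 degrees of freedom; with SciPy available, `scipy.stats.ttest_rel(handheld, headworn)` gives the same statistic along with a p-value.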
Prof. Mark Billinghurst has a wealth of knowledge and expertise in human-computer interface technology, particularly in the area of Augmented Reality (the overlay of three-dimensional images on the real world).
In 2002, the former HIT Lab US Research Associate completed his PhD in Electrical Engineering at the University of Washington, under the supervision of Professor Thomas Furness III and Professor Linda Shapiro. As part of the research for his thesis, titled Shared Space: Exploration in Collaborative Augmented Reality, Dr Billinghurst invented the Magic Book, an animated children's book that comes to life when viewed through a lightweight head-mounted display (HMD).
Not surprisingly, Dr Billinghurst has achieved several accolades in recent years for his contribution to Human Interface Technology research. He was awarded a Discover Magazine Award for Entertainment in 2001 for creating the Magic Book technology. He was selected as one of eight leading New Zealand innovators and entrepreneurs to be showcased at the Carter Holt Harvey New Zealand Innovation Pavilion at the America's Cup Village from November 2002 until March 2003. In 2004 he was nominated for a prestigious World Technology Network (WTN) World Technology Award in the education category, and in 2005 he was appointed to the New Zealand Government's Growth and Innovation Advisory Board.
Originally educated in New Zealand, Dr Billinghurst is a two-time graduate of Waikato University, where he completed a BCMS (Bachelor of Computing and Mathematical Science) with first-class honours in 1990 and a Master of Philosophy (Applied Mathematics & Physics) in 1992.
This class will review the main works in the state of the art in deep learning applied to Augmented Reality, in particular for 3D perception of geometry and semantics. We will discuss methods for predicting depth from monocular views, 3D pose estimation of objects and hands, Neural Radiance Fields (NeRFs), 3D shape prediction, point cloud analysis, and camera pose estimation.
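To connect two of the topics listed above, monocular depth prediction and point cloud analysis, here is a small NumPy sketch (my own illustration, not course material) that back-projects a depth map into a 3D point cloud with the standard pinhole camera model; the depth values and intrinsics are made up.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map to an (H*W)x3 point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy depth map (e.g. the output of a monocular depth network) and
# made-up intrinsics for a 4x4 image.
depth = np.full((4, 4), 2.0)                      # flat plane 2 m away
pts = depth_to_point_cloud(depth, fx=4.0, fy=4.0, cx=2.0, cy=2.0)
print(pts.shape)         # (16, 3)
print(pts[:, 2].mean())  # 2.0 -> all points lie on the Z = 2 plane
```

The resulting point cloud is exactly the kind of data the point cloud analysis and 3D scene understanding lectures operate on.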
I am a director of research at ENPC ParisTech, France. I also supervise a research group at TU Graz, Austria. Before that, I was a full professor at the Institute for Computer Graphics and Vision, Graz University of Technology, and before that, a senior researcher at CVLab, EPFL, Switzerland.
My research focuses on 3D scene understanding. More precisely, I aim to reduce as much as possible the guidance a system needs to learn new 3D objects and new 3D environments: how can we remove the need for training data for each new 3D problem? Currently, even self-supervised methods often require CAD models, which are not necessarily available for every type of object. This question has both theoretical implications and practical applications, as the need for training data, even synthetic, is often a deal breaker for non-academic problems.