The International Workshop on Autonomous Virtual Humans and Social Robot for Tele-presence (SG_AVH 2014) will be held in conjunction with SIGGRAPH ASIA in Shenzhen, China on December 3, 2014.
Increasingly, people communicate remotely through teleconferencing and VoIP tools such as Skype, which let them attend meetings and gatherings wherever they are. They can even share the same virtual space through 3D avatars, as in Second Life, and more sophisticated systems combine 3D capture and rendering to deliver a true 3D telepresence experience. When a 3D avatar is remotely guided by a real participant, that participant can also be replaced by an autonomous virtual or artificial counterpart, intended to give at least a partial illusion that the real person is present. This implies that the counterpart should look like the real human, speak with the same intonation, and be aware of the actual situation, the other participants, and the task currently being performed. It should react at the right time, based on its perception of the real participants.

Autonomous social robots and autonomous virtual humans have much in common; the main difference is that social robots exist physically, while autonomous virtual humans are software-based visual agents. In terms of research, the problems to solve for both are very similar: modeling memory, decision processes, and situation-dependent behavior are the big challenges in the field of autonomy. We generally consider autonomy to be the quality or state of being self-governing. Perception of the elements in the environment is essential, as it gives the social robot or virtual human awareness of what is changing around it; it is the most important capability to simulate before going further. The most common perceptual channels include (but are not limited to) vision and audition. Adaptation and intelligence then define how the social robot or virtual human reasons about what it perceives, especially when unpredictable events occur.
When predictable elements reappear, a memory capability is needed so that similar behavior can be selected again. Lastly, emotion adds realism by defining affective relationships between characters.
This workshop will provide a great opportunity for participants to interact with leading experts, share their own work, and learn through exposure to the research of their peers from around the world. It will bring to the SIGGRAPH community the latest developments in this active area of research.
Presenter(s)
A Neurobehavioural Framework for Autonomous Animation of Virtual Human Faces
Mark Sagar, David Bullivant, Paul Robertson, Oleg Efimov, Khurram Jawed, Ratheesh Kalarot, Tim Wu
Activity Recognition in Unconstrained RGB-D Video using 3D Trajectories
Yang Xiao, Gangqiang Zhao, Junsong Yuan, Daniel Thalmann
Coexistent Space: Toward Seamless Integration of Real, Virtual, and Remote Worlds for 4D+ Interpersonal Interaction and Collaboration
Bum-Jae You, Jounghuem R. Kwon, Sang-Hun Nam, Jung-Jea Lee, Kwang-Kyu Lee, Kiwon Yeom
From Fiber to Fabric: Interactive Clothing for Virtual Humans
George Baciu, Wingo Sai-Keung Wong
On Designing Migrating Agents: From Autonomous Virtual Agents to Intelligent Robotic Systems
Kaveh Hassani, Won-Sook Lee
Polynormal Fisher Vector for Activity Recognition from Depth Sequences
Xiaodong Yang, YingLi Tian
Tracking and Fusion for Multiparty Interaction with a Virtual Character and a Social Robot
Zerrin Yumak, Jianfeng Ren, Nadia Magnenat Thalmann, Junsong Yuan
International Workshop on Autonomous Virtual Humans and Social Robot for Telepresence: Workshop Program

09:00 - 09:15  Introduction (Daniel Thalmann)
09:15 - 10:00  Coexistent Space: Toward Seamless Integration of Real, Virtual, and Remote Worlds for 4D+ Interpersonal Interaction and Collaboration (Invited Speaker)
10:00 - 10:30  A Neurobehavioural Framework for Autonomous Animation of Virtual Human Faces (Venera Adanova)
10:30 - 11:00  Coffee Break
11:30 - 12:15  Modelling Awareness and Social Behaviour of Virtual Humans and Social Robots (Invited Speaker)
12:15 - 13:45  Activity Recognition in Unconstrained RGB-D Video using 3D Trajectories (Yang Xiao)
13:45 - 14:30  From Fiber to Fabric: Interactive Clothing for Virtual Humans (Invited Speaker)
14:30 - 15:00  Polynormal Fisher Vector for Activity Recognition from Depth Sequences (Xiaodong Yang)
15:00 - 15:30  On Designing Migrating Agents: From Autonomous Virtual Agents to Intelligent Robotic Systems (Kaveh Hassani)
15:30 - 16:00  Coffee Break
16:00 - 16:50  Panel: Can we consider Virtual Humans and Social Robots our possible partners? (Panel Chair: Nadia Magnenat Thalmann)
16:50 - 17:00  Closing Session