Publications

Preserving Memories of Contemporary Witnesses Using Volumetric Video

O. Schreer, M. Worchel, R. Diaz, S. Renault, W. Morgenstern, I. Feldmann, M. Zepp, A. Hilsmann, P. Eisert

ACM Conference Culture and Computer Science: Physical and Virtual Spaces

Publication year: 2021

Abstract

To be provided.

Example-Based Facial Animation of Virtual Reality Avatars using Auto-Regressive Neural Networks

Wolfgang Paier, Anna Hilsmann, Peter Eisert

IEEE Computer Graphics and Applications

Publication year: 2021

Abstract

This paper presents a hybrid animation approach that combines example-based and neural animation methods to create a simple yet powerful animation regime for human faces. Example-based methods usually employ a database of pre-recorded sequences that are concatenated or looped in order to synthesize novel animations. In contrast to this traditional example-based approach, we introduce a lightweight auto-regressive network to transform our animation database into a parametric model. During training, our network learns the dynamics of facial expressions, which enables the replay of annotated sequences from our animation database as well as their seamless concatenation in a new order. This representation is especially useful for the synthesis of visual speech, where co-articulation creates inter-dependencies between adjacent visemes that affect their appearance. Instead of creating an exhaustive database that contains all viseme variants, we use our animation network to predict the correct appearance. This allows the realistic, example-based synthesis of novel facial animation sequences such as visual speech, as well as of general facial expressions.
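For illustration, the following minimal sketch shows what an auto-regressive expression predictor of the kind outlined above could look like. It assumes PyTorch; the class name, tensor dimensions, label conditioning, and layer sizes are illustrative placeholders, not the architecture described in the paper.

# Minimal sketch of an auto-regressive facial-animation predictor
# (assumption: PyTorch; all sizes and names are illustrative).
import torch
import torch.nn as nn

class AutoRegressiveFaceNet(nn.Module):
    """Predicts the next facial-expression parameters from a short history
    of previous frames plus an annotation (e.g. a viseme label)."""

    def __init__(self, expr_dim=64, label_dim=16, history=4, hidden=256):
        super().__init__()
        self.history = history
        self.net = nn.Sequential(
            nn.Linear(history * expr_dim + label_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, expr_dim),
        )

    def forward(self, prev_frames, label):
        # prev_frames: (batch, history, expr_dim), label: (batch, label_dim)
        x = torch.cat([prev_frames.flatten(1), label], dim=1)
        return self.net(x)

def synthesize(model, seed_frames, labels):
    """Roll the model forward: each predicted frame is fed back as input,
    so concatenated samples (e.g. viseme chains) can transition smoothly.
    seed_frames must contain at least `model.history` frames."""
    frames = list(seed_frames)
    with torch.no_grad():
        for label in labels:  # labels: iterable of (label_dim,) tensors
            prev = torch.stack(frames[-model.history:], dim=0).unsqueeze(0)
            nxt = model(prev, label.unsqueeze(0)).squeeze(0)
            frames.append(nxt)
    return torch.stack(frames, dim=0)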

Neural Face Models for Example-Based Visual Speech Synthesis

Wolfgang Paier, Anna Hilsmann, Peter Eisert

CVMP ’20: European Conference on Visual Media Production

Publication year: 2020

Abstract

Creating realistic animations of human faces with computer graphics models is still a challenging task. It is often solved either with tedious manual work or with motion-capture-based techniques that require specialised and costly hardware.

Example-based animation approaches circumvent these problems by re-using captured data of real people. This data is split into short motion samples that can be looped or concatenated in order to create novel motion sequences. The obvious advantages of this approach are its simplicity of use and its high realism, since the data exhibits only real deformations. Rather than tuning the weights of a complex face rig, the animation task is performed on a higher level by arranging typical motion samples such that the desired facial performance is achieved. Two difficulties with example-based approaches, however, are high memory requirements and the creation of artefact-free and realistic transitions between motion samples. We solve these problems by combining the realism and simplicity of example-based animations with the advantages of neural face models.

Our neural face model is capable of synthesising high-quality 3D face geometry and texture according to a compact latent parameter vector. This latent representation reduces memory requirements by a factor of 100 and helps create seamless transitions between concatenated motion samples. In this paper, we present a marker-less approach for facial motion capture based on multi-view video. Based on the captured data, we learn a neural representation of facial expressions, which is used to seamlessly concatenate facial performances during the animation procedure. We demonstrate the effectiveness of our approach by synthesising mouthings for Swiss-German sign language based on viseme query sequences.
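To illustrate the two ideas in the abstract, decoding a compact latent code into geometry and texture, and blending latent codes to obtain seamless transitions between motion samples, here is a minimal sketch. It assumes PyTorch; the class names, layer sizes, and the simple linear interpolation are assumptions for illustration, not the model presented in the paper.

# Minimal sketch of a neural face-model decoder and latent-space blending
# (assumption: PyTorch; architecture and sizes are illustrative only).
import torch
import torch.nn as nn

class FaceDecoder(nn.Module):
    """Maps a compact latent vector to face geometry (vertex positions)
    and a texture map."""

    def __init__(self, latent_dim=128, num_vertices=5000, tex_res=128):
        super().__init__()
        self.tex_res = tex_res
        self.geometry_head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )
        self.texture_head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * tex_res * tex_res),
        )

    def forward(self, z):
        # z: (latent_dim,) -> geometry (V, 3) and texture (3, H, W)
        geometry = self.geometry_head(z).view(-1, 3)
        texture = self.texture_head(z).view(3, self.tex_res, self.tex_res)
        return geometry, texture

def blend_transition(z_end, z_start, steps=10):
    """Interpolate between the last latent code of one motion sample and the
    first code of the next; decoding the result gives a smooth transition."""
    alphas = torch.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_end + a * z_start for a in alphas]

Storing only the per-frame latent vectors instead of full geometry and texture is what yields the reduction in memory footprint mentioned above.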

Split Rendering for Mixed Reality: Interactive Volumetric Video in Action

J. Son, S. Gül, G. Singh Bhullar, G. Hege, W. Morgenstern, A. Hilsmann, T. Ebner, S. Bliedung, P. Eisert, T. Schierl, T. Buchholz, C. Hellge

SIGGRAPH Asia, Demos

Publication year: 2020 

Abstract

This demo presents a mixed reality (MR) application that enables free-viewpoint rendering of interactive high-quality volumetric video (VV) content on Nreal Light MR glasses, web browsers via WebXR and Android devices via ARCore. The application uses a novel technique for animation of VV content of humans and a split rendering framework for real-time streaming of volumetric content over 5G edge-cloud servers. The presented interactive XR experience showcases photorealistic volumetric representations of two humans. As the user moves in the scene, one of the virtual humans follows the user with his head, conveying the impression of a true conversation.
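The split-rendering idea can be summarised as follows: the device sends its current pose, the edge server renders and encodes the volumetric frame for that pose, and the result is streamed back for display. The sketch below illustrates that loop in plain Python over TCP; the callbacks, message format, and transport are hypothetical assumptions, not the actual API of the framework demonstrated here.

# Conceptual sketch of a split-rendering loop (assumption: plain TCP and
# hypothetical render/display callbacks; not the framework's actual API).
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed")
        buf += chunk
    return buf

POSE_BYTES = 7 * 4  # position (x, y, z) + orientation quaternion as float32

def client_step(sock, pose_bytes, display_frame):
    """Device side: send the current 6-DoF head pose, receive the frame
    rendered on the edge server, and hand it to the display callback."""
    sock.sendall(pose_bytes)
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    display_frame(recv_exact(sock, length))

def server_step(sock, render_volumetric_frame):
    """Edge side: render and encode the volumetric content for the received
    pose, then stream it back; the heavy rendering never leaves the server."""
    pose_bytes = recv_exact(sock, POSE_BYTES)
    encoded = render_volumetric_frame(pose_bytes)
    sock.sendall(struct.pack("!I", len(encoded)) + encoded)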

Ernst Grube: A Contemporary Witness and His Memories Preserved with Volumetric Video

M. Worchel, M. Zepp, W. Hu, O. Schreer, I. Feldmann, P. Eisert

Eurographics Workshop on Graphics and Cultural Heritage (GCH 2020)

Publication year: 2020 

Abstract

“Ernst Grube - The Legacy” is an immersive Virtual Reality documentary about the life of Ernst Grube, one of the last German Holocaust survivors. From interviews conducted inside a volumetric capture studio, dynamic full-body reconstructions of both the contemporary witness and his interviewer are recovered. The documentary places them in virtual recreations of historical sites, and viewers experience the interviews with unconstrained motion. As a step towards the documentary's production, prior work presents reconstruction results for one interview. However, the quality is unsatisfactory and does not meet the requirements of the historical context. In this paper, we take the next step and revise the volumetric reconstruction pipeline. We show that our improvements to depth estimation and a new depth map fusion method lead to a more robust reconstruction process, and that our revised pipeline produces high-quality volumetric assets. By integrating one of our assets into a virtual scene, we provide a first impression of the documentary's look and the convincing appearance of the protagonists in the virtual environment.

The Impact of Stylization on Face Recognition

N. Olivier, L. Hoyet, F. Argelaguet, F. Danieau, Q. Avril, A. Lecuyer, P. Guillotel, F. Multon

SAP 2020 – ACM Symposium on Applied Perception

Publication year: 2020 

Abstract

While digital humans are key aspects of the rapidly evolving areas of virtual reality, gaming, and online communications, many applications would benefit from using personalized digital (stylized) representations of users, as these have been shown to greatly increase immersion, presence, and emotional response. In particular, depending on the target application, one may want to look like a dwarf or an elf in a heroic fantasy world, or like an alien on another planet, in accordance with the style of the narrative. While creating such virtual replicas requires stylizing the user's features onto the virtual character, no formal study has been conducted to assess the ability to recognize stylized characters. In this paper, we present a perceptual study investigating the effect of the degree of stylization on the ability to recognize an actor, and the subjective acceptability of stylizations. Results show that recognition rates decrease as the degree of stylization increases, while the acceptability of the stylization increases. These results provide recommendations for achieving good compromises between stylization and recognition, and pave the way for new stylization methods that offer a trade-off between stylization and recognition of the actor.
