The INVICTUS Project
The INVICTUS project (Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling) aims to deliver innovative authoring tools for creating a new generation of high-fidelity avatars (digital representations of real humans) and for integrating these avatars into interactive and non-interactive narratives (movies, games, AR and VR immersive productions).
The INVICTUS project proposes the design of three innovative authoring tools:
1. High-Resolution Volumetric Capture
A tool to perform high-resolution volumetric captures of both the appearance and the motion of characters, enabling their exploitation in high-end offline productions (film quality) as well as real-time rendering productions. This will ease high-fidelity content creation and reduce costs through less manual labour.
2. Edit High-Fidelity Volumetric Appearance
A tool to perform edits on high-fidelity volumetric appearances and motions, such as transferring shapes between characters, stylizing appearance, and adapting and transferring motions. This will reduce manual labour and improve fidelity.
3. Story Authoring Tool
A story authoring tool that will build on VR interactive technologies to immerse storytellers in virtual representations of their stories, letting them edit sets, layouts and animated characters, improving productivity and creativity.
The INVICTUS project will open opportunities in the EU market
By demonstrating and communicating how these technologies can be immediately exploited in both traditional media (films and animation) and novel media (VR and AR) narratives, the INVICTUS project will open opportunities in the EU market for more compelling, immersive and personalized visual experiences, at the crossroads of film and game entertainment.
The first objective is to obtain compelling high-fidelity avatars by relying on recent advances in volumetric motion capture by partners HHI, IDCC and VOL.
The underlying challenges are to provide high-resolution, clean and exploitable representations of varying shapes over time, while also augmenting the representations with skeletal information, in a hybrid approach mixing volumetric and geometric information.
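The hybrid representation described above, volumetric surface data augmented with skeletal information, can be sketched as a simple per-frame data structure. This is a minimal illustration under our own assumptions; the class name, fields and array shapes below are hypothetical and do not describe the project's actual format:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class VolumetricFrame:
    """One frame of a hypothetical hybrid volumetric + skeletal capture."""
    vertices: np.ndarray          # (V, 3) captured surface positions for this frame
    faces: np.ndarray             # (F, 3) triangle indices (topology may vary per frame)
    joint_positions: np.ndarray   # (J, 3) positions of the fitted skeleton's joints
    skinning_weights: np.ndarray  # (V, J) soft assignment of each vertex to joints

    def joint_of_vertex(self, v: int) -> int:
        # Dominant joint controlling vertex v (argmax over skinning weights);
        # this is the kind of link that makes the volumetric data animatable.
        return int(np.argmax(self.skinning_weights[v]))
```

In such a scheme, the raw volumetric mesh carries the appearance while the skinning weights tie it to the skeleton, which is what makes subsequent editing and re-animation possible.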
While computer graphics models of persons, or computer-generated avatars, can be easily animated using appropriate tracking sensors, they still suffer from the uncanny valley effect, and heavy manual input is required to reach convincing appearances.
In contrast, volumetric video, which generates dynamic 3D models of persons with their natural facial expressions and gestures, potentially avoids the uncanny valley effect (although we are not yet quite there) and presents the benefit of capturing appearance and motion simultaneously, hence simplifying the process while improving realism. However, animation and editing of volumetric video is usually restricted to simple viewpoint changes.
As a result, the INVICTUS project will provide a comprehensive approach to capturing user motions and appearances with a high level of fidelity, and will enhance these data with skeleton-based CG models, being the first to combine realistic volumetric measurements of appearance, geometry and motion with capabilities for editing and animation (WP1).
The second objective is to design powerful editing tools for avatars represented with volumetric data, reducing the amount of manual input required to adapt avatars to the application context, i.e. adapting information extracted from the real world, such as user motions and appearances, to the specific context of the XR experience, for example by altering a user's morphology and animation to match the character they are impersonating, as well as altering their appearance.
In turn, this requires the design of (i) facial authoring tools (given their importance in avoiding uncanny appearances) to improve the modelling, animation and transfer of facial expressions, and (ii) motion adaptation tools that will provide the means to adapt recorded volumetric animations to the specificities of the XR experience, such as adapting the motion of one avatar to the morphology of another.
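The kind of morphology adaptation mentioned here, applying one avatar's motion to a skeleton with different bone lengths, can be illustrated with a minimal forward-kinematics sketch. The function, its parameters and the assumption that both skeletons share the same joint hierarchy are ours for illustration, not the project's method:

```python
import numpy as np


def retarget_positions(parents, local_rotations, target_offsets):
    """Naive retargeting sketch: reuse the source's per-joint local rotation
    matrices, but drive the TARGET skeleton's rest offsets (bone vectors)
    through forward kinematics, so the motion follows the target morphology.

    parents:         list of parent indices, -1 for the root
    local_rotations: list of (3, 3) rotation matrices, one per joint
    target_offsets:  (J, 3) bone offsets of the target skeleton at rest
    """
    num_joints = len(parents)
    world_rot = [None] * num_joints
    world_pos = np.zeros((num_joints, 3))
    for j in range(num_joints):
        p = parents[j]
        if p < 0:
            # Root: its local rotation and offset are already in world space.
            world_rot[j] = local_rotations[j]
            world_pos[j] = target_offsets[j]
        else:
            # Accumulate rotation down the chain, then place the joint at the
            # parent's position plus the rotated target bone offset.
            world_rot[j] = world_rot[p] @ local_rotations[j]
            world_pos[j] = world_pos[p] + world_rot[p] @ target_offsets[j]
    return world_pos
```

Because only the target's bone lengths enter the position computation, a motion captured on one body lands on another body's proportions; real systems add constraints (e.g. foot contacts) on top of this basic scheme.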
As a result, the INVICTUS project will provide facial and body editing tools to ensure the proper contextualization of the captured information, by enabling the adaptation, transformation and retargeting of avatar shapes and avatar motions (WP2).
The third objective is to create novel means, based on XR technologies, that will assist collaborative content creation for linear and non-linear narratives, exploiting the full potential of volumetrically captured avatars.
The founding idea is to rely on VR interactive technologies to author these animated 3D contents, building on the immersion offered by VR together with the capacity to manipulate content intuitively, in both space and time, using six-degree-of-freedom (6-DoF) input devices.
We will first focus on the design of collaborative layout manipulators (placing backgrounds and props), staging manipulators (placing static avatars), cinematography manipulators (placing camera devices and camera rigs such as cranes and dollies) and lighting manipulators, working with creatives from partner UFT in film, animation and games. We will then design novel manipulators to address the spatio-temporal control of avatar animations, on both traditional and volumetric representations.
In essence we fill a gap by designing accessible and dedicated content creation tools at the storyboard, previsualisation, and technical stages so that part of the creative power can be placed back in the hands of film creatives, and contents can be co-created and co-reviewed by creatives and 3D artists.
As a result, the INVICTUS project will provide a VR-based virtual production tool that eases content creation through manipulators on scene layouts, lighting, cinematography and volumetric avatar motions (WP3).
The innovation of the INVICTUS project is not just the enhancement of several parallel streams of XR related technical challenges.
In fact, one of the most challenging parts of the INVICTUS XR system is to take the individual tools and demonstrate how they work together to create more compelling, interactive and immersive XR experiences.
Over its lifetime, the INVICTUS project will progressively build up more and more advanced features in the INVICTUS XR environment, until these challenging components are fully integrated in a seamless way that is entirely convincing to users.
In turn, this requires the careful design of multiple use-cases driven by the requirements of our industrial partners (UFT, IDCC and VOL).
Our two use-cases, focussing respectively on 1) an augmented reality scenario in which high-fidelity avatars interact and adapt to users, and 2) a VR authoring tool with volumetric video avatars, will be used as dissemination flagships to demonstrate the strengths of our tools (adaptive, cost-effective, intuitive).
As a result, the INVICTUS project will provide demonstrators of our technologies (WP4) and rely on a range of dissemination channels (specialized events, marketplaces, public demonstrations) to ensure the uptake of the technology (WP5).