Mixer is a Blender add-on for real-time collaboration. It allows multiple Blender users to work on the same scene at the same time, and is developed by the R&D department of Ubisoft Animation Studio (a former partner).
Mixer synchronizes in real time the modifications made to the scene and the objects it contains. During a collaboration session, Mixer displays the positions of the other participants and highlights their selections.
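The collaboration model described above can be sketched as a toy session object. This is a hypothetical illustration, not Mixer's actual implementation: clients submit scene deltas, the session applies them in order and returns the message that would be rebroadcast to peers, and selections are tracked per client so they can be highlighted.

```python
class Session:
    """Toy model of a shared editing session (illustrative only):
    clients submit scene deltas; the session applies each one and
    returns the message that would be rebroadcast to the other peers."""

    def __init__(self):
        self.scene = {}       # object name -> property dict
        self.selections = {}  # client id -> set of selected object names

    def apply_delta(self, client_id, delta):
        # Merge the changed properties into the shared scene state.
        obj = self.scene.setdefault(delta["object"], {})
        obj.update(delta["changes"])
        # The rebroadcast message tells peers who made the change.
        return {"from": client_id, **delta}

    def select(self, client_id, objects):
        # Shared selections let peers highlight what others are editing.
        self.selections[client_id] = set(objects)


session = Session()
broadcast = session.apply_delta(
    "alice", {"object": "Cube", "changes": {"location": (1.0, 0.0, 0.0)}})
session.select("bob", ["Cube"])
print(session.scene["Cube"]["location"])  # (1.0, 0.0, 0.0)
```

In a real system the rebroadcast messages would travel over a network transport and conflicts between concurrent edits would need an ordering policy; the sketch only shows the state-merge-and-rebroadcast shape of the idea.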
A series of videos presenting Mixer's features is available on YouTube, and the software is available on GitHub.
Toolbox for volumetric motion and appearance avatarisation by HHI
The toolbox for volumetric motion and appearance avatarisation consists of several tools for creating realistic representations of humans that can be edited and animated based on volumetric video.
3D Digital Human creation pipeline by IDCC
A photogrammetry pipeline to reconstruct 3D faces, consisting of:
- A flexible capture rig with many cameras, all controlled and calibrated together.
- A software pipeline to reconstruct the 3D geometrical model from the input pictures.
- A reference template character, used to fit the reconstructed geometry to this reference topology.
- Dedicated models for specific parts of the face (ears, eyes, teeth, tongue).
- A selection method to choose a hairstyle for the character and fit the haircut to the scanned scalp.
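The five stages above can be sketched as a single pipeline function. Everything here is a hypothetical placeholder (stage names, the toy data, the hair library value are all made up); the point is only to make the ordering of the pipeline explicit, from calibration through to hair fitting.

```python
# Illustrative sketch of the pipeline ordering; every stage body is a
# placeholder, not IDCC's actual reconstruction code.

def run_pipeline(images, template):
    stages = []

    # 1. Calibrate the multi-camera capture rig.
    calib = {"cameras": len(images)}
    stages.append("calibrate")

    # 2. Reconstruct a raw 3D mesh from the calibrated input pictures.
    raw_mesh = {"vertices": calib["cameras"] * 100}
    stages.append("reconstruct")

    # 3. Fit the raw geometry to the reference template topology.
    fitted = {**raw_mesh, "topology": template}
    stages.append("fit_template")

    # 4. Attach dedicated models for specific face parts.
    fitted["parts"] = ["ears", "eyes", "teeth", "tongue"]
    stages.append("attach_parts")

    # 5. Select a hairstyle and fit it to the scanned scalp.
    fitted["hair"] = "selected_from_library"
    stages.append("fit_hair")

    return fitted, stages


model, stages = run_pipeline(images=["img%d" % i for i in range(8)],
                             template="reference_head")
print(stages)
```

Keeping the stages as separate, ordered steps mirrors how such pipelines are usually built: each stage consumes the previous stage's output, so individual steps can be swapped or re-run without redoing the whole capture.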
Semantic telepresence application by IDCC
An alternative to current 2D video communication applications (Teams, Google Meet…). It consists of a real-time demonstrator based on deep-learning semantic analysis and deep generative reconstruction.
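The core idea of semantic transmission can be shown with a toy stand-in: instead of sending every pixel, the sender extracts a compact representation and the receiver reconstructs a full-size frame from it. Real systems use deep networks for both steps; in this hypothetical sketch, "semantic analysis" is simple subsampling and "generative reconstruction" is nearest-neighbour upsampling, which is enough to show the bandwidth saving.

```python
def encode(frame, step=4):
    """Toy 'semantic analysis': keep every step-th pixel in both axes."""
    return [row[::step] for row in frame[::step]]


def decode(features, height, width, step=4):
    """Toy 'generative reconstruction': nearest-neighbour upsampling
    back to the original resolution."""
    return [[features[min(v // step, len(features) - 1)]
                     [min(u // step, len(features[0]) - 1)]
             for u in range(width)]
            for v in range(height)]


# A 16x16 synthetic greyscale frame.
frame = [[(v + u) % 256 for u in range(16)] for v in range(16)]
features = encode(frame)
recon = decode(features, 16, 16)

# Only 4x4 values are transmitted instead of 16x16: a 16x reduction.
ratio = (16 * 16) / (len(features) * len(features[0]))
print(ratio)  # 16.0
```

A learned encoder would extract far richer features (pose, expression, identity) and a generative decoder would synthesize a photorealistic frame, but the transmit-less-reconstruct-more structure is the same.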
Camera-agnostic self-served volumetric video solution by VOL
With the help of the project, VOL improved parts of its volumetric video pipeline to make it camera-agnostic and independent of the number of cameras used (foreground IP). The cloud technology used to process the volumetric video data was also improved, making the end-to-end process much faster and more cost-efficient for the client.
Volumetric video capture for all by VOL
VOL developed Volu, an app that allows anyone with a mobile device to create volumetric video assets. Most recent mobile devices include both an RGB camera and a depth sensor; the underpinning technology of the app fuses the information from both sensors to feed a deep-learning algorithm that predicts the volume of a person. This technology pushes the boundaries of what is feasible today on a mobile device, allowing everyone to create their digital twin.
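The RGB + depth fusion that feeds such a volume predictor can be illustrated with standard back-projection under the pinhole camera model. This is not Volu's actual algorithm, and the camera intrinsics (fx, fy, cx, cy) here are made-up values; the sketch only shows how a depth map and a colour image combine into a coloured 3D point cloud.

```python
def back_project(depth, rgb, fx=500.0, fy=500.0, cx=0.5, cy=0.5):
    """Back-project a depth map into a coloured 3D point cloud using the
    pinhole model: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.
    Intrinsics are illustrative defaults, not real sensor values."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:          # no depth reading at this pixel
                continue
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append(((x, y, d), rgb[v][u]))
    return points


# A tiny 2x2 example: depth in metres (0 = missing), per-pixel RGB colour.
depth = [[0.0, 1.0],
         [2.0, 0.0]]
rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
cloud = back_project(depth, rgb)
print(len(cloud))  # 2 valid points
```

A learned volume predictor goes further than this, inferring the unseen back side of the person from the partial point cloud, but RGB-D back-projection of this kind is the conventional first step.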
API for volumetric video by VOL
VOL has already created a version of the app called Volu PRO that includes premium features for professionals, and is in the process of building a monetization strategy around it. Various volumetric video capture tools are publicly available on VOL's GitHub repository, which developers can use to modify their volumetric video assets and import them into Unity and/or Unreal. The refined technology behind the multi-camera solution, Volu and Volu PRO will be made accessible through an API that will allow other businesses and clients to build their own products using these technologies.
VRTist by Ubisoft and UR1
A VR design tool dedicated to the creation of 3D environments and animations for rapidly prototyping scenes and shots. It finds many applications in theater rehearsal, film previsualization, the design of XR narratives, etc. VRTist is fully open source and was collaboratively designed with the help of the animation company Dada! Animation, to ensure that the implemented features and interactors fit the needs of production studios.