For video game fanatics, transporting yourself into a video game, body and all, just got easier.
Artificial intelligence has been used to create 3D models of people's bodies for virtual reality avatars, surveillance, fashion visualization, and movies.
However, all this typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.
The system has three stages. First, it analyzes a few seconds of video of someone moving, preferably turning a full 360° to show all sides. Using machine learning techniques, it roughly estimates the person's 3D body shape and the locations of their joints in each frame. In the second stage, it combines these per-frame estimates into a single, more accurate virtual model of the person. Finally, in the third stage, it applies color and texture to the model based on the recorded hair, clothing, and skin.
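The three stages can be sketched in miniature. This is a toy illustration, not the researchers' actual system: the function names are invented, the per-frame "pose estimator" is a stub, and fusing estimates is reduced to simple averaging, which captures only the basic idea that combining many noisy per-frame guesses yields a more accurate model.

```python
# Hypothetical sketch of the three-stage pipeline; names and logic are
# illustrative only, not taken from the actual research system.

def estimate_pose(frame):
    """Stage 1 (stub): return a rough per-frame guess of joint positions.
    A real system would run a learned shape/pose estimator here."""
    return [(x + frame["noise"], y) for x, y in frame["joints"]]

def fuse_estimates(per_frame_joints):
    """Stage 2: combine noisy per-frame estimates into one model
    (here, simply by averaging each joint across frames)."""
    n = len(per_frame_joints)
    fused = []
    for j in range(len(per_frame_joints[0])):
        xs = [frame[j][0] for frame in per_frame_joints]
        ys = [frame[j][1] for frame in per_frame_joints]
        fused.append((sum(xs) / n, sum(ys) / n))
    return fused

def apply_texture(model, frames):
    """Stage 3 (stub): attach recorded color/texture to the fused model."""
    return {"joints": model, "texture": frames[0]["color"]}

# Toy input: three frames of the same two joints with slight per-frame noise.
frames = [
    {"joints": [(0.0, 1.0), (1.0, 2.0)], "noise": d, "color": "blue"}
    for d in (-0.1, 0.0, 0.1)
]
avatar = apply_texture(fuse_estimates([estimate_pose(f) for f in frames]), frames)
print(avatar["joints"])  # per-frame noise averages out in the fused model
```

The point of the sketch is the averaging step: each single frame gives an imperfect estimate, but because the subject turns to expose all sides, errors from different viewpoints partly cancel when combined.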