Forging a voice, or editing people into a video they never appeared in, is practically last century. Researchers at UC Berkeley have unveiled an experimental technology that can make anyone perform any dance. On a computer screen, of course, by instructing a model of that person to carry out arbitrarily complex movements.
The system uses a "stick figure" model to digitize a real person's movements and learn to reproduce them. This is not easy: the AI needs source material in the form of a video recording of the target person at least 20 minutes long, shot at 120 frames per second or more. Next, a video of the desired dance is found and converted into the same stick-figure representation, and that motion script is fed to a neural network, which renders the target person's appearance onto the moving model.
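The core idea of the pipeline above can be sketched in a few lines. This is a simplified illustration, not the Berkeley implementation: it only shows the "retargeting" step, where the source dancer's stick-figure pose (bone directions) is combined with the target person's body proportions (bone lengths). The skeleton, joint names, and `retarget` function here are invented for the example.

```python
import math

# Toy skeleton: ordered parent -> child bones, rooted at the hip.
# A real system would have dozens of joints; two bones suffice here.
BONES = [("hip", "knee"), ("knee", "ankle")]

def retarget(source_pose, target_lengths, root="hip"):
    """Keep the source dancer's bone directions, but substitute the
    target person's bone lengths, walking outward from the root joint.

    source_pose    -- dict: joint name -> (x, y) of the source dancer
    target_lengths -- dict: (parent, child) bone -> target's limb length
    """
    out = {root: source_pose[root]}
    for parent, child in BONES:
        px, py = source_pose[parent]
        cx, cy = source_pose[child]
        dx, dy = cx - px, cy - py                      # source bone vector
        scale = target_lengths[(parent, child)] / math.hypot(dx, dy)
        ox, oy = out[parent]
        out[child] = (ox + dx * scale, oy + dy * scale)
    return out

# A straight leg pointing down, retargeted onto longer limbs:
src = {"hip": (0.0, 0.0), "knee": (0.0, -1.0), "ankle": (0.0, -2.0)}
lengths = {("hip", "knee"): 2.0, ("knee", "ankle"): 2.0}
print(retarget(src, lengths))  # ankle lands at (0.0, -4.0)
```

In the actual system this retargeted stick figure is then handed to a generative network that paints the target person's body and clothing over it, frame by frame.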
While repeating the movements themselves poses no problem, achieving realism required a second neural network, a "censor" that flags glaring artifacts. The stick-figure model cannot capture the motion of loose fabric, so the subject has to dance in tight clothing. And if dance steps occlude one another, the AI may struggle to distinguish them against the dancer's body. Accurate transfer of facial expressions is also out of the question.
Overall, the technology still looks extremely crude, even comical, if we are talking about a genuinely convincing fake dance video. On the other hand, faking such a complex video with older methods takes several working days; here it is one AI and a few hours of work. And while the technology is unlikely to be suitable for forging evidence, it should find applications in animation and film.