The "arms race" among virtual reality controllers is in full swing. Recently, a team from Purdue University joined in with DeepHand, a hand-tracking system. It combines a depth-sensing camera with deep learning to accurately place the user's hand in the virtual world.
The camera captures the position and rotation angles of the hand at various points, and an algorithm rapidly searches a database of more than 2.5 million hand poses, selecting the closest match to project onto the display.
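The lookup step can be pictured as a nearest-neighbor search: a feature vector derived from the depth image is matched against stored pose vectors. The sketch below is an illustrative assumption, not DeepHand's actual code; the function name, feature dimensionality, and plain Euclidean distance are all placeholders for whatever metric and indexing the real system uses.

```python
import numpy as np

def nearest_pose(features: np.ndarray, pose_db: np.ndarray) -> int:
    """Return the index of the stored pose closest (by L2 distance)
    to the feature vector extracted from the depth image."""
    dists = np.linalg.norm(pose_db - features, axis=1)
    return int(np.argmin(dists))

# Toy database of 4 poses standing in for the 2.5 million real ones.
db = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0],
    [0.5, 0.5, 0.5],
])
query = np.array([0.9, 0.1, 0.0])
print(nearest_pose(query, db))  # → 1 (the second stored pose)
```

A brute-force scan like this would be far too slow over millions of entries, which is why a fast search structure over the database is essential in practice.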
To keep the rendered motion smooth, the program also predicts likely upcoming hand states. The algorithm can even infer and display finger and hand positions that are occluded from the camera's view.
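One minimal way to picture the prediction step is a linear motion model: extrapolate each tracked joint coordinate from the last two frames. This is a sketch under that assumption only; the article does not describe DeepHand's actual predictor, and the function name is hypothetical.

```python
def predict_next(prev: list[float], curr: list[float]) -> list[float]:
    """Linearly extrapolate each joint coordinate one frame ahead,
    assuming the velocity between the last two frames holds."""
    return [c + (c - p) for p, c in zip(prev, curr)]

# Two joint coordinates tracked over two frames.
print(predict_next([0.0, 1.0], [0.5, 1.5]))  # → [1.0, 2.0]
```

A learned model could do better than this, of course; the point is only that having a predicted state lets the system render plausible poses even when the camera briefly loses sight of the fingers.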
In building DeepHand, the researchers first had to train the system on gesture recognition, gradually filling the database with hand poses, which required a fairly powerful processor. Once that "rough" work is done, the trained system can run on an ordinary computer.