Doctoral students at Purdue University have developed a new way to represent users' hand movements in virtual reality. They named the project DeepHand. It uses a convolutional neural network which, in loose imitation of the human brain, is trained with deep learning to understand the nearly endless combinations of joint angles the hand can take. It will be presented at CVPR 2016, a computer vision conference running from June 26 to July 1 in Las Vegas.
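To make the idea concrete, here is a minimal sketch of a convolutional network that maps a depth image of a hand to a vector of joint angles. This is not DeepHand's published architecture; the layer sizes and the 63-value output (e.g. 21 joints times 3 angles) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HandPoseNet(nn.Module):
    """Toy CNN that regresses joint angles from a single-channel depth image.
    Layer sizes and the 63-dimensional output (e.g. 21 joints x 3 angles)
    are illustrative assumptions, not DeepHand's actual architecture."""
    def __init__(self, num_angles: int = 63):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 32 -> 16
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.regressor = nn.Linear(128, num_angles)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        x = self.features(depth)
        return self.regressor(x.flatten(1))

# One fake 128x128 depth frame -> 63 predicted joint angles.
net = HandPoseNet()
angles = net(torch.randn(1, 1, 128, 128))
print(angles.shape)  # torch.Size([1, 63])
```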
DeepHand uses a depth-sensing camera to capture hand movements, then interprets them with specialized algorithms. The researchers built a database of 2.5 million hand poses, from which DeepHand selects the one that best matches the image captured by the camera. It identifies the key angles in the hand, and the configuration of those key angles is represented as a set of numbers, a numeric signature for each pose.
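That database lookup can be pictured as a nearest-neighbor search over those numeric signatures. The sketch below is a simplified illustration under assumed details (a random stand-in database, Euclidean distance, 63-number pose vectors); the article does not describe DeepHand's actual matching pipeline:

```python
import numpy as np

def nearest_pose(query: np.ndarray, database: np.ndarray) -> int:
    """Return the index of the database pose closest to the query.
    Each row of `database` is one hand pose encoded as a vector of
    key joint angles; `query` is the pose seen by the camera."""
    distances = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(distances))

# Toy stand-in for the 2.5-million-pose database (10k poses, 63 angles each).
rng = np.random.default_rng(0)
database = rng.uniform(0.0, np.pi, size=(10_000, 63))
query = rng.uniform(0.0, np.pi, size=63)

best = nearest_pose(query, database)
print(f"Best-matching pose index: {best}")
```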
It is quite similar to the Netflix algorithm, which recommends movies to users based on their viewing history. Hand tracking has seen plenty of prior work, with Leap Motion among those taking steps toward hand-gesture recognition, and we hope this project reaches new heights and proves revolutionary for VR.