Smart glove takes hold of touch technology
A “smart glove” that can register how we grip objects has been developed by scientists in the US. It’s an ordinary yellow glove fitted with a pressure-sensitive material, and they hope it will help to build better prosthetics, as well as touch-sensitive arms for robotics. Phil Sansom heard how from inventor Subramanian Sundaram.
Subramanian - It's sort of been a long-standing quest for the field of robotics to understand how humans grasp objects. There's never been a quantitative way to fully understand the tactile signals involved in the grasp until now.
Phil - Why not?
Subramanian - It's because it's been very hard to build a sensor or a sleeve that goes on top of your hand without actually interfering with the way you hold objects.
Phil - What's it made of?
Subramanian - The smart glove is a sleeve that you can wear on top of your hand that records pressure at many different points. Not just where your hand is touching, but also the intensity with which it's touching an object. So the heart of the glove is basically a force-sensitive material that responds to force with a change in its electrical resistance. The first major challenge was to figure out how to route these wires in between the fingers. And the second challenge was how to effectively interface with these electrodes, so a lot of the electrodes in the glove are made using conductive threads.
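The sensing principle Subramanian describes - a material whose electrical resistance drops as it is pressed harder - is the same one used in hobbyist force-sensitive resistors (FSRs). A minimal sketch of turning one such sensor's reading into a force estimate, assuming a standard voltage-divider circuit; the component values and calibration constant here are illustrative, not taken from the actual glove:

```python
# Hypothetical sketch: converting a voltage-divider ADC reading from a
# force-sensitive resistor (FSR) into an approximate force value.
# All constants are illustrative assumptions, not the glove's real values.

V_SUPPLY = 3.3      # supply voltage (volts)
R_FIXED = 10_000    # fixed divider resistor (ohms)
ADC_MAX = 1023      # 10-bit ADC full-scale reading

def fsr_resistance(adc_reading: int) -> float:
    """Resistance of the FSR inferred from the divider output voltage."""
    v_out = V_SUPPLY * adc_reading / ADC_MAX
    if v_out <= 0:
        return float("inf")  # no pressure: resistance effectively infinite
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def approximate_force(adc_reading: int, k: float = 1e6) -> float:
    """Force estimate, assuming conductance grows roughly linearly with force."""
    r = fsr_resistance(adc_reading)
    return 0.0 if r == float("inf") else k / r

# Harder presses pull the divider output higher, so the inferred
# resistance falls and the force estimate rises.
light = approximate_force(200)
hard = approximate_force(800)
```

A grid of such sensors, sampled together, is what produces the per-point pressure map mentioned above.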
Phil - And is it something that I could make at home if I really wanted a smart glove, DIY?
Subramanian - Yeah, absolutely. You can make one set for about 10 American dollars.
Phil - That's bizarre that you had this thing that no one could figure out quite how to manufacture, and you've managed to create it by buying things on eBay?
Subramanian - That's exactly right.
Phil - I know then that you went on to do some pretty cool science with it, what exactly did you do?
Subramanian - We took this glove and we had a user wear the glove, and the user then interacted with about 26 different objects. Holding these objects, grasping them, lifting them. One of the objects was a ball, we had a mug, a pen, a spoon, a stone sculpture of a cat, and these are mostly objects that I just found on my desk before I started the project, actually. In the end we were left with around five hours of continuous interactions with these different objects, so by the end we had over 135,000 tactile frames, or pressure maps, as we tried to move these objects.
Phil - Oh, hang on. Just to clarify, all of those points across the hand create a map, and you had 135,000 of those?
Subramanian - Exactly, that's perfectly right. So we then took this set of tactile frames that we collected and split them into two separate batches. One of them was used as training data and one of them was used for testing. The training data was fed into a machine learning algorithm, so we essentially told the algorithm, this tactile pattern corresponds to a user holding a ball, for instance. So we were then able to train the machine learning algorithm to associate these patterns with particular objects. We then fed in the test data that we had, data that our network had never seen before, and then we asked the network to predict what objects the user was interacting with. And we were able to show that the network was indeed able to identify these objects from these tactile pressure maps.
Phil - Really? It could take a look at just what the hand was touching and figure out what that object was?
Subramanian - Yeah, that's exactly right.
Phil - Does this all mean that's sort of a clue to how our brains figure out what we're holding?
Subramanian - Yeah. And I think in terms of neuroscience research, it has been shown that there are a lot of similarities between the way the brain processes information from this sort of tactile domain and the way the brain looks at visual images.
Phil - Does this have any implications for the way we design maybe robot hands in the future?
Subramanian - Yeah, absolutely. And this is one of the most exciting aspects of the work. I think one of the key results that we showed in the paper was the collaborative nature of the human grasp: different regions of the hand come together to perform a grasp. Say you were a prosthetic designer, you can use the data that we collected to pinpoint, for the few sensors that you have, the most efficient places to put them.