How soundscapes and vibrations are helping blind people see the world
Glasses that translate images of physical objects into soundscapes and a belt that turns images into vibrations are helping blind people build up a real-time 3D picture of the world around them, and the technology could hit the market as soon as next year.
According to the World Health Organization, there are about 285 million visually impaired people around the world, of whom 39 million are blind. Advances in technology and medical science may never restore perfect sight to the millions with very poor vision or no sight at all.
But a wearable gadget resembling glasses uses cameras and a compact processing unit to create 3D images on the fly and feeds the information back to the wearer as intuitive soundscapes.
‘The cameras produce a three-dimensional picture of the surroundings in real time and the system translates this into sound, something like the sound of the sea, which the user learns to interpret to navigate their environment,’ said Antonio Quesada, chief executive of Eyesynth, based in Castellon, Spain.
The headset, developed with the support of EU funding, constructs ‘audio pictures’ of the wearer’s surroundings with the aim of improving everyday interactions and increasing the independence of blind people and those with very poor vision.
Crucially, it was designed with style in mind. Using carefully designed eyewear, discreet cameras, and a processor about the size of a mobile phone, Eyesynth aims to overcome users’ resistance to ungainly or unattractive medical equipment.
‘We aimed to make the system as beautiful as possible, so it can be stylish, and more than just a gadget,’ Quesada said.
Neural scanning shows that even when only abstract sounds are used, the brain engages the visual cortex to build up an audio image.
By learning the audio language, users can make their way around obstacles or identify and grasp nearby items, such as a water bottle on a restaurant table. ‘By learning to understand the subtle variations in the sounds, the user can identify straight lines, or rounder shapes,’ Quesada said.
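The article does not describe Eyesynth's actual algorithm, but the idea of turning a real-time depth image into sound can be sketched in simplified form. In this hypothetical illustration, each column of a depth image becomes a sound whose volume reflects how close the nearest surface is, and whose stereo pan reflects its horizontal position; the function name and scaling are invented for the example.

```python
# Hypothetical sketch only, not Eyesynth's published method: mapping one
# column of a depth image to a (volume, pan) pair for a stereo soundscape.
# Assumed convention: nearer surfaces sound louder, and a column's
# horizontal position sets the left/right stereo pan.

def column_to_sound(depths, col, width, max_range=5.0):
    """Return (volume, pan) for one image column.

    depths    -- depth readings (metres) down the column
    col/width -- column index and image width, used for stereo panning
    max_range -- depths beyond this are treated as silence
    """
    nearest = min(depths)                          # closest surface in the column
    volume = max(0.0, 1.0 - nearest / max_range)   # nearer -> louder
    pan = (col / (width - 1)) * 2.0 - 1.0          # -1.0 = far left, +1.0 = far right
    return volume, pan

# A surface 1 m away at the left edge of the view is loud and panned hard left:
vol, pan = column_to_sound([1.0, 1.2, 4.8], col=0, width=64)
```

Sweeping such a mapping across all columns, frame after frame, would produce the kind of continuous, sea-like sound the user learns to interpret.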
One of the special features of Eyesynth is that the audio signals are not transmitted through the outer ear, but conducted through the bones on the side of the head, leaving the user free to hear what is going on around them. This method also provides benefits for people with poor hearing.
Users learn Eyesynth's audio language in a familiar environment, so they rapidly get used to the sounds associated with known shapes and positions. After a week of training, blind testers are able to distinguish small objects on a table.
‘Since the use of the white cane and guide dog, there hasn’t been a technological mobility solution for the blind and visually impaired,’ Quesada said.
Eyesynth has already been patented in Spain and Quesada expects the product to be on the market next year, with the team in the process of closing distribution agreements with two leading eyewear chains in Spain.
The system is also being further developed to recognise faces, read text and identify colours.
In Iceland, researchers are also using 3D-camera systems to create a picture for blind people, but they’re complementing it with a vibrating belt that uses the sense of touch, also known as haptics, to produce a novel form of visualisation.
The latest prototype belt in the EU-funded Sound of Vision project fits around the user’s mid-section and uses a matrix of motors that vibrate gently against the stomach. This provides an alternative, tactile way to represent the scene picked up by headset cameras.
Project coordinator Runar Unnthorsson, professor of industrial engineering at the University of Iceland, said the belt could make a simple shadow-like representation of the object being viewed.
‘If there is a lamppost in front of you, for example, as you rotate, you would feel the centre column moving along the belt,’ Prof. Unnthorsson said. ‘In a way you could think of it as a low-resolution vibrating image.’
The belt offers wide scope for conveying information and images to the user. It could even produce animations by switching neighbouring motors on and off in sequence to create a sensation of movement.
‘In this way, we can make the user feel like someone is drawing on their stomach,’ Prof. Unnthorsson said.
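The ‘low-resolution vibrating image’ Prof. Unnthorsson describes can be pictured as a simple down-sampling step: the camera scene is averaged down to a small grid, one cell per motor. The sketch below is only an illustration of that idea; the 4x8 motor layout and the function name are assumptions, not details from the Sound of Vision project.

```python
# Illustrative sketch only: reducing a camera silhouette to a small grid
# of vibration-motor intensities -- a "low-resolution vibrating image".
# The 4x8 motor layout is an invented assumption for this example.

def to_motor_grid(image, rows=4, cols=8):
    """Average-pool a 2D image (list of lists, values 0..1) down to a
    rows x cols grid of motor intensities."""
    h, w = len(image), len(image[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # The block of pixels feeding this one motor
            ys = range(r * h // rows, (r + 1) * h // rows)
            xs = range(c * w // cols, (c + 1) * w // cols)
            block = [image[y][x] for y in ys for x in xs]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# A vertical bar (e.g. a lamppost) in the view activates a single column
# of motors down the belt, which shifts sideways as the wearer rotates:
scene = [[1.0 if 24 <= x < 32 else 0.0 for x in range(64)] for _ in range(32)]
grid = to_motor_grid(scene)
```

Driving neighbouring cells of such a grid on and off in sequence would give the animation effect Prof. Unnthorsson describes, like someone drawing on the wearer's stomach.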
The Sound of Vision system is able to scan and read text, helping people to read signs in challenging situations such as airports, and can identify the best path through a series of indoor or outdoor obstacles.
Prof. Unnthorsson said they tested several ways of converting visual information into useful audio via the 3D cameras.
The latest prototype uses sounds simulating a stream of bubbles in water. A large object, for example, is represented by more bubbles, and a high object by lighter bubbles.
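The bubble encoding described above can be sketched as a pair of simple rules: bigger objects get more bubbles, higher objects get lighter (higher-pitched) ones. The specific scaling factors and the function name below are invented for illustration; the project's actual parameters are not given in the article.

```python
# A sketch of the bubble-style audio mapping described above. The exact
# scaling factors are assumptions made up for this example.

def object_to_bubbles(size_m2, height_m, base_pitch_hz=400.0):
    """Map an object's apparent size and height to bubble parameters."""
    n_bubbles = max(1, round(size_m2 * 10))        # bigger object -> more bubbles
    pitch_hz = base_pitch_hz * (1.0 + height_m)    # higher object -> lighter sound
    return n_bubbles, pitch_hz

# A half-square-metre object at 1 m height:
n, pitch = object_to_bubbles(size_m2=0.5, height_m=1.0)
```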
The system also has a danger mode to warn people if there are hazards such as stairs going down or a missing utility cover on the pavement in front of them.
‘The system is highly customisable, so users can switch between different audio modes, or tactile modes and even change parameters, such as the number of objects represented,’ Prof. Unnthorsson said.
While the prototype currently uses a laptop in a backpack for image processing, the ambition is to bring a smaller version to market next year.
By Rex Merrifield