How the brain processes images

How does the brain interpret the light coming into the eye so that we can perceive images?
10 April 2018

Interview with Dr Hugh Matthews, Cambridge University


Once light entering the eye has been converted to electrical signals, how does the brain use this information to perceive the world around us? Georgia Mills spoke to Hugh Matthews from the University of Cambridge, asking first of all how the brain goes about processing the vast quantity of visual information out there.

Hugh - The first step to dealing with the information overload is that we're not actually sampling the visual image with equal clarity in all of its parts. There's a central region of the retina where the photoreceptors are packed very tightly together. These very tightly packed cones in the central region, called the fovea, give you a very detailed picture of just that little region, but the further out towards the periphery of the retina you move, the less densely the visual world is sampled. We're scanning this small high-resolution patch back and forth over objects of interest all the time, so that's the first solution.
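The non-uniform sampling Hugh describes can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not a model of the retina: sample points are dense near a "fixation point" and the spacing doubles with distance from it, so most of the detail comes from a small central patch.

```python
# Toy sketch of fovea-like non-uniform sampling (illustrative only):
# samples are dense near the fixation point and sparse towards the
# periphery, so detail is concentrated in a small central patch.

def sample_positions(centre, extent):
    """Positions sampled along a 1-D 'scene' around a fixation point."""
    positions, step, offset = [centre], 1, 0
    while offset + step <= extent:
        offset += step
        positions += [centre - offset, centre + offset]
        step *= 2          # spacing doubles with eccentricity
    return sorted(p for p in positions if p >= 0)

print(sample_positions(centre=16, extent=16))
```

Moving the fixation point (the `centre` argument) is the code's crude analogue of scanning the high-resolution patch over objects of interest.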

The second is that, even within the retina itself, the visual system retains only those aspects of the visual image that are important to it, especially information about boundaries and about changes in the image. It's throwing away information about regions of the image that don't change, so it only keeps what it needs for further analysis.
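The idea of keeping only changes and discarding uniform regions can be shown with a toy difference filter. This is a simple sketch in the spirit of that encoding, not the retina's actual circuitry:

```python
# Toy sketch (not the retina's actual circuitry): keeping only the
# differences between neighbouring samples discards uniform regions
# and preserves boundaries.

def edge_signal(row):
    """Difference between adjacent 'photoreceptor' samples."""
    return [b - a for a, b in zip(row, row[1:])]

# A scene: a dark region, a boundary, then a bright region.
scene = [1, 1, 1, 1, 8, 8, 8, 8]
print(edge_signal(scene))  # nonzero only at the boundary
```

The uniform stretches encode as zeros, which cost nothing to represent; only the boundary survives for further analysis.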

Georgia - So when I’m looking at a scene, I might not actually be processing all of that information; my brain is filling in some of the gaps?

Hugh - A lot of filling in of gaps. It gives you the illusion that you're perceiving everything with equal detail all the time.

Before the signal is sent along the optic nerve to the brain, it first needs to be converted into a sort of digital form. All of this processing within the retina has happened as analogue, continuous signals, but to travel the long distance along the optic nerve towards the brain, we've got to convert it from a graded, continuous signal into a digital-like signal of individual nerve impulses, or spikes.
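That analogue-to-spike conversion can be caricatured as a threshold-and-reset rule. The sketch below is an idealisation (a bare-bones integrate-and-fire scheme, not a real retinal ganglion cell): a graded input accumulates, and each time the running total crosses a threshold, a spike is emitted and the total resets, so signal strength becomes spike count.

```python
# Minimal integrate-and-fire sketch (an idealisation, not a real
# neuron): graded input accumulates, and each threshold crossing
# emits a spike and resets, converting amplitude into spike rate.

def to_spikes(analogue, threshold=1.0):
    spikes, v = [], 0.0
    for x in analogue:
        v += x
        if v >= threshold:
            spikes.append(1)   # fire a spike
            v = 0.0            # reset the accumulator
        else:
            spikes.append(0)
    return spikes

weak = to_spikes([0.2] * 10)     # a dim stimulus: few spikes
strong = to_spikes([0.6] * 10)   # a bright stimulus: more spikes
print(sum(weak), sum(strong))
```

A stronger graded signal produces more spikes per unit time, which is the form the message takes on its journey down the optic nerve.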

These are then sent onwards, via a sort of way-station on the way to the brain where they're refined a little further, to the first part of the cortex that processes the visual information in detail, the so-called primary visual cortex. There's been a little sorting out of the individual nerve fibres on the way there, because each eye receives information from the whole of the visual scene. What we want to do is sort out the information about one side of the visual world from each eye's view, so that all of the information about that side, from both eyes, goes to the opposite side of the brain. The right-hand side of the brain receives information about the left-hand side of the visual world, and vice versa.

Georgia - So they’re swapping over?

Hugh - A sort of partial unscrambling of the signal, so that all of the information representing one half of visual space ends up on the other side of the brain. That way we can ultimately compare what each eye sees, so as to judge the relative distances of objects from us by comparing the two eyes' views.
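The partial crossover can be sketched as a simple routing rule. This is an abstraction of what the optic chiasm achieves, not anatomy: fibres carrying each eye's view of the left half of visual space are routed to the right hemisphere, and vice versa, so each hemisphere ends up with both eyes' views of the opposite hemifield.

```python
# Abstraction of the partial crossover (a routing rule, not
# anatomy): each eye's view of one hemifield is sent to the
# opposite hemisphere, so each hemisphere gets BOTH eyes' views
# of the same half of visual space, ready for comparison.

def route(fibres):
    hemispheres = {"left": [], "right": []}
    for eye, hemifield in fibres:
        target = "right" if hemifield == "left" else "left"
        hemispheres[target].append((eye, hemifield))
    return hemispheres

fibres = [("left eye", "left"), ("left eye", "right"),
          ("right eye", "left"), ("right eye", "right")]
print(route(fibres))
```

After routing, the right hemisphere holds both eyes' views of the left hemifield, which is exactly what is needed to compare them for depth.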

Georgia - Oh I see, so we don’t have to look at two images all the time as well, we can see just one image in front of us?

Hugh - Exactly. They get unified into one at the level of the primary visual cortex. Any differences between the two eyes' views can also be analysed, so that we can decide whether an object is closer to us or further away than the point at which we're actually looking at that moment.

Georgia - How does all the information that's coming at us, like the colour and the movement, get coded by the brain into something we can understand?

Hugh - Well, an important principle for processing in the brain is parallel processing of different streams of information. Within the primary visual cortex, the spatial structure of the image, the presence of edges, and little segments of lines forming the boundaries between objects and their backgrounds, those are analysed as part of one stream. The colours are analysed by separate cells as part of another stream, and motion is analysed as a third processing stream. Ultimately we’re going to have to bring all of those together so that we can actually perceive what the system has seen.
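The parallel-streams idea can be sketched as independent functions analysing the same input, with their results merged at the end. The three stream functions below are hypothetical stand-ins, not cortical physiology:

```python
# Toy parallel-streams sketch (the stream functions are hypothetical
# stand-ins, not cortical physiology): the same frame is analysed by
# independent specialised processes, and the results are merged.

def spatial_stream(frame):
    """Boundaries between neighbouring elements of the frame."""
    return {"edges": [i for i in range(len(frame) - 1)
                      if frame[i] != frame[i + 1]]}

def colour_stream(frame):
    return {"colours": sorted(set(frame))}

def motion_stream(prev, frame):
    return {"moved": prev != frame}

def perceive(prev, frame):
    # Each stream runs independently; the results are brought together.
    out = {}
    for result in (spatial_stream(frame), colour_stream(frame),
                   motion_stream(prev, frame)):
        out.update(result)
    return out

print(perceive("RRGG", "RRGB"))
```

Each stream could run on its own (in the brain's case, in a different population of cells); only the final merge brings edges, colour, and motion back into a single percept.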

Georgia - Oh right, so different parts of the brain divvy up the work as it were?

Hugh - Very much so. If you move to higher levels, you can find visual areas that are specialists in colour processing, specialists in motion processing, and specialists in the spatial structure of the image, separating out objects from their backgrounds. You can even get some quite elaborate stimulus preferences: there is a little region of the brain that is believed to be principally looking for faces, familiar faces in particular. If you were to damage any of these specific areas, you'd lose the ability to carry out that particular type of visual perception; for example, you can lose, at a cortical level, the ability to perceive colours. You can lose, through damage to another region, the ability to perceive motion. And, most strangely of all, you can lose the ability to perceive faces, both familiar and unfamiliar. You may not even be aware that they are faces.
