Augmenting Reality

31 January 2010
Presented by Ben Valsler, Helen Scales.

The high-tech scanners that can home in on chemicals produced by cancers, how bats and dolphins share genes for echolocation and why barefoot runners have a smoother track record. Also this week, augment your reality: find out how new technologies can add extra information to the way you see the world by making a mobile phone into a virtual tour guide or even a pocket mechanic! Plus, how virtual reality worlds are helping to rehabilitate stroke victims, and, in a theatrical twist, for Kitchen Science, Dave discovers the workings of a baffling stage illusion...

In this episode

22:48 - What is Augmented Reality?

Just what is Augmented Reality? Dr Tom Drummond, from Cambridge University's Machine Intelligence Laboratory, joins us to explain more...

What is Augmented Reality?
with Dr Tom Drummond, Cambridge University

Helen -   Dr. Tom Drummond is a Senior Lecturer at the Machine Intelligence Laboratory at Cambridge University where they're working on some of these technologies, and Tom has very kindly come into the studio today to talk to us about augmented reality.  Hi, Tom.  Thanks for coming.

Tom -   Hello.

Helen -   And I think we need to start off with  - what is augmented reality?  It sounds like something out of a sci-fi movie, but what is it?

Tom -   It does sound very science fiction, doesn't it?  It's about taking computer graphics off the computer screen and making them available over the natural world, over the real world.  Now obviously, the real world doesn't have a computer display capability, so you need to put those graphics there somehow.  The first way we thought of doing this was to use a head mounted display - you look through the head mounted display at the world, and then a computer can display computer graphics over parts of the world too.

Helen -   So you're looking at the world and you're pushing a layer of information of some sort that refers to that world.

Tom -   That tells you something about what you want to do with the world.

Helen -   What you're looking at...

Tom -   So you might want to, in a medical application for example, use it in laparoscopic surgery to be able to see what your instrument is doing inside the patient - where the blood vessels are, or maybe where there's a tumour that you're trying to target, something like that.  So that's one kind of application.  There are obviously entertainment applications - there are games available now that use this technology - and there are educational benefits, and so on.

Helen -   It seems to me as I browse around the internet that quite recently, the entertainment and advertising side is really developing quite quickly.  You can have magazines with augmented reality covers - you wave the magazine in front of a computer, and something pops out on it in three dimensions through your webcam.  And it's in sporting events as well...

Tom -   Sure, American football for example.

Helen -   ...and races and things.

Tom -   The first down line is drawn in by augmented reality.

Helen -   And that counts as a way of putting information, and advertising as well, into sporting events.  But as you said, there are more worthy and useful applications of this technology as well.  You say you started off thinking about a head mounted way of doing this.  What are the alternatives?

Tom -   Well, the thing that we're starting to see now is handheld augmented reality which runs on, for example, a smart phone.  In that version, what you see on the screen of the smart phone is what the camera sees of the world.  It's a bit like having a video camera or a digital camera where you're seeing the preview of the picture.  But what augmented reality does is intercept the image in flight between the camera and the screen: you work out what you're looking at and where it is, and you add the virtual elements to the image at the same time, so that the piece of information you want to add to the world is blended over the top of it graphically.
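
To make this concrete, here is a minimal sketch, in Python with OpenCV, of the "intercept the image in flight" loop Tom describes: grab each camera frame, draw a virtual element into it, and only then show it on screen. A real AR system would also work out where the camera is pointing; here the label text and position are fixed purely for illustration.

```python
# Minimal handheld-AR style loop: capture a frame, augment it, display it.
# The label text and position are illustrative assumptions; real AR would
# place the label using an estimated camera pose, not fixed coordinates.
import cv2

cap = cv2.VideoCapture(0)              # the phone/webcam camera
while True:
    ok, frame = cap.read()             # the image the camera sees
    if not ok:
        break
    # "Augment" the frame before it reaches the screen:
    cv2.putText(frame, "virtual label", (50, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) == 27:           # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```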

Helen -   So it feels like you're holding up a magic spy glass and you're looking through that, and you're learning something else about what you're looking at.  I could hold it up to you and it might tell me something about you, perhaps...

Tom -   Yes.  You could see my name floating above my head or something like that.

Helen -   I believe you've been looking at the pros and cons of these different approaches of a head mounted system versus something we can put in our pockets.  What are the differences between those two approaches?

Tom -   A head mounted display gives you a very immersive feel.  When you're looking at the world, the computer graphics are right there in front of your eye, so there's a very strong connection between the virtual elements and the real elements.  But then there are some negative consequences as well.  It's very difficult to build these systems without latency in them, so when you move your head, the computer graphics might follow a tenth of a second later.  Unfortunately, one of the consequences of this is that it can make people feel motion sickness, and it can be very unpleasant to use a system like this.  Head mounted systems are also very expensive, which could be a barrier to their use, and they're also very cumbersome - you have to put something that gets between you and the world on top of your head.  A phone, in contrast, is a small thing we all carry, and it has all of the computer hardware inside that you need to run some of these applications.  If there's some latency and the picture takes a tenth of a second to catch up as you move it, nobody really minds, because it's not directly affecting what you're seeing and conflicting with what your inner ear is telling you, for example.

Helen -   How are we actually seeing this being used in the real world, outside the laboratory?  One of the possibilities that I thought was rather exciting was the use of these kinds of things for tourists - going to a site, perhaps of a ruin that's fallen down now, holding up your smart phone or perhaps even wearing your tour guide helmet and goggles, and it would recreate what the Acropolis looked like when it was full of people or, you know, when it was still standing.  That seems to me to be quite exciting.  Are we seeing this kind of thing actually being used?

Tom -   Absolutely.  There are applications available now on the iPhone App Store and on other phones, like the Google Android phones, that use GPS to locate the smart phone and a compass to work out which direction it's pointing in, and then they can display computer graphics like "this mountain is..." whatever it is, or "this building is King's College Chapel...".  These systems are appearing now and I think that they're going to become very popular this year.  In some sense, the limiting factor of those is that GPS and a compass aren't that accurate, and one of the problems is that if you want to draw your labels very precisely over what you're seeing, they tend to jitter around.  Often, if you look at videos of these systems in action, you can see that the labels are jittering around a bit, relative to the image.
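
For a sense of how those GPS-and-compass browsers place their labels, here is a rough Python sketch: compute the bearing from the user to the landmark, compare it with the compass heading, and map the angular offset onto the screen. The coordinates, compass heading, field of view and screen width are illustrative assumptions, not any particular app's values.

```python
# A rough sketch of how a GPS-and-compass AR browser might place a label.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def label_x(user_lat, user_lon, poi_lat, poi_lon,
            compass_deg, screen_w=480, fov_deg=60):
    """Horizontal pixel position for a label, or None if it's off-screen."""
    offset = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
              - compass_deg + 180) % 360 - 180    # signed angle, -180..180
    if abs(offset) > fov_deg / 2:
        return None                               # landmark not in view
    return int((offset / fov_deg + 0.5) * screen_w)

# e.g. a user near King's College Chapel, phone facing a little west of north:
print(label_x(52.2033, 0.1181, 52.2043, 0.1167, compass_deg=340))
```

Because the label's position comes straight from the GPS fix and the compass reading, a few metres of positional error or a few degrees of heading error move the label visibly from frame to frame - which is exactly the jitter Tom describes.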

Helen -   So it's not quite pointing it to King's College Chapel.  It's sort of hovering about in the air a bit...

Tom -   Hovering around somewhere nearby.

Helen -   Yes.

Tom -   Now, one of the things that's driven our research into this is using the image that's coming into the smart phone to locate what we're looking at.  If you can work out what every pixel in the image from your camera is looking at, then when you draw the graphics on the screen, you're going to be drawing them roughly to pixel accuracy over the top.  That tends to lead to a much more stable viewing experience and the graphical elements look very stable on the world, and really look like they belong there, which is actually quite important in terms of how the users respond to these extra elements being displayed.
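
As a sketch of that vision-based approach: if the 3D positions of a few recognisable points are known and those points can be found in the camera image, the camera pose can be solved for and virtual content re-projected to roughly pixel accuracy. The example below uses OpenCV's pose solver; the marker geometry, detected pixel coordinates and camera intrinsics are all made-up illustrative values.

```python
# Solve for the camera pose from known 3D-2D correspondences, then
# re-project a virtual point into the image to roughly pixel accuracy.
import numpy as np
import cv2

# 3D corners of a known 10cm square marker (metres, in the world frame)
object_pts = np.array([[0, 0, 0], [0.1, 0, 0],
                       [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
# Where those corners were detected in this frame (pixels; assumed to
# come from a feature detector/matcher)
image_pts = np.array([[320, 240], [420, 242],
                      [418, 338], [322, 336]], dtype=np.float32)
# Intrinsics for a hypothetical 640x480 camera
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)

# Re-project a virtual point (the centre of the marker) into the image:
virtual_pt = np.array([[0.05, 0.05, 0]], dtype=np.float32)
pixel, _ = cv2.projectPoints(virtual_pt, rvec, tvec, K, None)
print(pixel)   # where to draw the graphic so it sits stably on the world
```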

Helen -   I can only imagine, being a humble marine biologist myself, that the technology involved in taking a moving image of the real world and incorporating your position into that image must be extremely challenging.  We won't go into the details now, but I'm just wondering: what are the main problems that you have to overcome to be able to put these images together and use, say, a smart phone to point at something and tell you what it is?

Tom -   Sure, yes, there are a lot of issues.  In particular, smart phones are not the most powerful computers available, so a lot of effort has to go into shrinking the algorithms down so that they can run within the computing capacity of a smart phone.  And when you're talking about the data from a camera, there's actually a huge flow of data coming out of the camera of a smart phone, so it's a serious issue to be able to process that in time, to work out where you are and what you're looking at.

Helen -   And finally, I think one thing that seems to me to be very clever use of this is to communicate expertise, to be able to transfer yourself into another place, and almost get someone else's brain on the case.  Can you tell us about that quickly?

Tom -   Sure, yes.  That's one of the systems we developed and really, it came from an occasion where I was phoned up and asked: when a car is out of water, where do I put the water in for the windscreen wipers?  I'm standing there with my eyes closed, trying to picture the engine bay of the car, thinking - well, at the back on the right, there's a translucent white bottle there somewhere...  And I was thinking, if a person could just take a photo with their phone and send it to me, I could draw an arrow and say, "it's here."  And even better, if when that photo goes back to them and they move their phone, that arrow stays pointing at the image of the water bottle - that would be brilliant!  And in fact, some very clever people in my lab built a system that did exactly that.  What it does is extract information about what it can see, for example the engine bay of your car, and then in real time it builds a 3D model of the things that it can see, and it calculates at the same time where the camera is moving.  All of this information together is then used to help a remote expert place information into the scene that will help the local user solve the problem that they have.
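
The key trick in that remote-expert system is that the arrow is stored as a 3D point in the world, not as a 2D mark on one photo, so it can be re-projected into every new frame as the camera moves. Here is a toy Python sketch of just that re-projection step with a simple pinhole camera model; the intrinsics, the bottle's position and the camera motion are all invented for illustration, and a real system would get its poses from visual tracking.

```python
# Toy re-projection step: a fixed 3D anchor (the washer bottle) is
# projected into the image under each new camera pose, so the arrow
# "stays glued" to the bottle as the phone moves.
import numpy as np

K = np.array([[500, 0, 320],
              [0, 500, 240],
              [0,   0,   1.0]])          # assumed camera intrinsics

def project(point_xyz, R, t):
    """Pinhole projection of a world point under camera pose (R, t)."""
    p = K @ (R @ point_xyz + t)
    return p[:2] / p[2]

# The expert marks the washer bottle once; its 3D position is then fixed:
bottle = np.array([0.2, -0.1, 1.5])      # metres, in the world frame

# As the phone moves, each new pose re-projects the same arrow tip:
for angle in (0.0, 0.05, 0.1):           # small rotations about the y-axis
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    t = np.zeros(3)
    print(project(bottle, R, t))         # the arrow follows the bottle
```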

Helen -   Fantastic!  I know that next time I need to refill the water in my car, I'd love to have a gadget like that on hand.  Thanks ever so much, Tom, for giving us a great introduction to the world of augmented reality and explaining how machines can recognize and track reality.  That was Dr. Tom Drummond from the Machine Intelligence Laboratory at Cambridge University.

32:51 - Rehabilitation in Virtual Reality

Virtual Reality is a computer simulated version of the real world. Meera Senthilingam has been exploring the use of simulated environments for medical treatment and rehabilitation...

Rehabilitation in Virtual Reality
with Dr. Paul Penn, University of East London

Meera -   This week, I'm at the University of East London which is located in Stratford.  I've come along to their psychology department's virtual reality lab and with me is Dr. Paul Penn, lead researcher of the virtual reality research group here.  Now Paul, some interesting work you've been doing here is using things like virtual reality in order to rehabilitate brain injured patients and stroke patients...

Paul -   Yes, that's right.  We're really interested in looking at VR as a means to improve the lot of people who have suffered a brain injury, in terms of rehabilitation and assessment.  A lot of brain injured patients, as part of their recovery, will spend a great deal of their time completely understimulated, when they're not actually in therapy and things like that.  That's really a problem for patients from a neuropsychological perspective, because the way the brain responds to injury is determined by the environment in which you're trying to recover.  In other words, if you have a very stimulating environment, the chances of getting better functional outcomes and better recovery from the injury are that much higher.  So what we're really looking at is using VR as a means to enrich environments post brain injury.

Meera -   What aspects of brain function do you focus on with brain injured patients?

Paul -   It tends to be things like memory.  So we'll look at how well people are going to remember route information for example.  We look at how well people can remember objects they see in an environment.  We also look at the way people can remember to do things at some point in the future, what we call prospective memory.  So that's things like remembering to close the front door after you've opened it, remembering to attend appointments at certain times.  Very simple things that we take for granted, but are actually very pervasive in everyday life.  And these are the kind of problems that, because they're so intertwined with everyday life, you can't really assess properly in the lab, and that's what VR gives you.  It gives you that chance to put a real world scenario into the lab.

Meera -   The key point of the virtual environments that you create here is that they can be used simply on laptops and desktops, which must therefore make them much more accessible?

Paul -   Yes, absolutely.  This is a really important part of our remit.  So really, one of the fundamental criteria that we use when looking at VR is: will this run on just an average, modest-spec PC?  What we tend to do now is what we call a window-on-world system, where the virtual world just appears on a computer screen.  So, it probably looks like a computer game, like you or I would play on a PC or a PlayStation 3.  But the difference is that our environments have a purpose other than entertainment.

Meera -   So now, we've got a laptop set up in front of us and it's got a particular virtual environment on it, which is a virtual bungalow...

Paul -   So what you can see here is a simple, four-room bungalow with a hall.  The patient's task here is to help the person who owns this virtual bungalow move to a larger bungalow.  The person's task is to go through these rooms - first we've got a hall, followed by the lounge - and then allocate furniture to the new rooms.  So they're engaged: they're searching around the existing bungalow and they're looking for items of furniture that, for example, belong in the hall.  While they're doing this, what we actually have is a series of three memory tasks.  You can think of the removal task as a kind of distraction task, essentially, because that's the way memory works in the real world.  It would be very easy if all we ever had to remember was just the thing we have to remember, but the problem is we have all these distractions around us all the time, and it's how well we can filter out those distractions that actually gets to the crux of the matter.  That's really what this environment is assessing, with the removal task as the distraction task.  What we're actually looking for is how well they can remember to do three different things.

Now, you might remember that I told you that remembering to do things at some point in the future is what we call prospective memory in psychology.  Broadly speaking, there are three types of prospective memory.  First of all, we have what we call "event-based prospective memory", which is a kind of memory that's precipitated by seeing something in the environment.  So, in the example of the virtual bungalow we have here, when the participant is strolling around the house, they have to look for glass items, and when they see a glass item, they have to remember to put a fragile notice on it, for the very simple reason that we don't want the removal men manhandling and breaking it.  The second type of prospective memory is what we call "activity-based".  This kind of memory occurs when an action you perform itself serves as the cue.  So for example, if you turn an oven on, that's also your cue to turn the oven off again, because you performed an action which should then prompt another memory to perform the reverse of that action.  The task we have in the virtual bungalow here is that they have to remember to close the kitchen door after they've opened it, simply because we've got a cat - a virtual cat, if you like - in there, and the cat escapes if they don't.  The third type of prospective memory is what we call time-based.  This is the category of memory whereby you have to remember to do something at a certain time.

Meera -   So if somebody's got a meeting or an appointment, they need to remember to do that...

Paul -   Yes, exactly.  Or remembering to tune in to a radio show, for example.  The idea here is that the person has to remember to open the front door every five minutes to let the removal men in.

Meera -   With these three tasks in action, then, when someone's in this environment, what are you specifically looking for?

Paul -   The extent to which people have actually recalled or remembered to perform the tasks.  So: how many times out of the possible three have they remembered to open the front door for the removal men?  How many of the - I think it's about eight - fragile items do they remember to put fragile notices on?  By looking at that kind of data, we get an idea of their memory profile, and from that you can extrapolate what kind of problems they might have in everyday functioning.
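
As a toy illustration of the scoring Paul describes, a memory profile can be built from how many opportunities of each task type the participant actually acted on. The task names, counts and this "participant's" scores below are invented; this is not UEL's actual software.

```python
# Toy prospective-memory scoring: (task type, opportunities, remembered).
results = [
    ("event",    8, 5),   # fragile notices on roughly eight glass items
    ("activity", 4, 3),   # closing the kitchen door after opening it
    ("time",     3, 1),   # opening the front door every five minutes
]

profile = {task: remembered / opportunities
           for task, opportunities, remembered in results}

print(profile)
# A low 'time' score points to problems with self-initiated retrieval,
# suggesting time-based prompts (e.g. from a personal organizer) in rehab.
```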

Computer generated voice -   "...Look for items and furniture to be moved into the hall..."

Meera -   I'm just having a go on this now and I'm walking through the hall; it's reasonably easy to move around.  I'm just using the cursor - the arrow keys on the keyboard - to move around, and then the mouse buttons to open doors and pick up items.  So, I guess that's quite crucial, making it quite easy to use.

Paul -   Yes, the interface is really important obviously because what we're looking at here is potentially using this environment with people who may not just have memory problems.  They might also have problems with their physical mobility or their dexterity.

Meera -   Having actually tried this environment out on various stroke patients or brain injured patients, what have you found?

Paul -   What we tend to find with this task - we've used it on stroke patients, for example - is, as you would expect really, that they're impaired in all three types of memory, but they're particularly impaired on the time-based task.  The time-based task tends to involve what we call self-initiated retrieval: there are no prompts in the environment, so you have to remember to provide your own prompt, which is to look at the clock, and we find that people who have had a stroke often struggle with this.  It's very, very difficult for them to self-initiate.

Meera -   What can you then do with this information to improve their condition or improve their memory?

Paul -   What this kind of information can do is allow the rehabilitation professional to orientate the rehabilitation very precisely, to address the problems that person has.  So for example, if they just have a problem with time-based retrieval, there are technologies you can use, like personal organizers or maybe iPods, to provide prompts at certain points in the day for critical activities.  What you can do, having had someone interact with this environment, is get an indication of the kind of prompts that they will need to offset their memory problems.

Ben -   That was Dr. Paul Penn from the University of East London, taking Meera Senthilingam on a virtual experience, to show how simulated versions of real environments can be used to monitor and rehabilitate patients that have suffered strokes or brain injuries.

40:26 - Augmented Reality in Space

Augmented reality headsets may find a perfect home miles above the surface of Earth, helping astronauts to repair and maintain the International Space Station...

Augmented Reality in Space
with Luis Arguello, European Space Agency

Ben -   Now, when something needs changing or repairing on the International Space Station, astronauts need to open a printed manual or look at a laptop to find out what it is that they need to do.  But that's not necessarily easy to do when everything is floating about or when you're working in a confined space.  So what's the solution?  Well, the European Space Agency are developing a system called WEAR that's short for WEarable Augmented Reality.  It's a headset that superimposes instructions and information onto the thing you're looking at.  Luis Arguello is one of the principal investigators behind the new system, and he joins us on the line now.  Hi, Luis.

Luis -   Hello.  Good evening.

Ben -   So, what's the main point behind the WEAR system?

Luis -   Well, this is just a tool to help the astronauts perform their activities onboard with more accuracy and in less time.  The thing is, as you mentioned before, the astronauts are floating in space.  They have to perform different tasks - experiments, maintenance - and for procedures that have been in place for a long time, they use paper instructions; for one-off tasks, they use the laptop to see what they have to do.  And I don't know if you've seen people working in space, but when they move around, they are floating and they have to pull themselves around with their hands, and restrain themselves with their feet against the floor.  So, it's a bit difficult to look at the laptop or hold the papers.

So the idea is that you have this system that you put on your head, which is connected to the computer, and then you talk to it while you perform your task, and it gives you the instructions for what you are supposed to do.  What we were trying to do is help the astronauts identify small elements within a very complex structure, so they don't lose too much time trying to find the little valve they have to open or close, or the thing they have to replace.  That way, they can do things more precisely while also saving some time.
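
In spirit, the hands-free part of WEAR is a voice-driven procedure viewer. Here is a toy Python sketch of that idea - a list of steps advanced by spoken commands. The steps and commands are invented, and real speech recognition is replaced by a hard-coded list; this is not ESA's actual software.

```python
# Toy voice-driven procedure viewer in the spirit of WEAR. A real system
# would use actual speech recognition and would also highlight the
# relevant hardware in the head-mounted overlay.
steps = [
    "Locate the maintenance panel on the rack",
    "Open the access cover",
    "Close the highlighted valve",
    "Replace the filter cartridge",
]

current = 0
print(f"Step 1: {steps[0]}")
for command in ["next", "next", "repeat", "back", "next"]:  # stand-in for voice input
    if command == "next" and current < len(steps) - 1:
        current += 1
    elif command == "back" and current > 0:
        current -= 1
    # "repeat" just re-reads the current step
    print(f"Step {current + 1}: {steps[current]}")
```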

Ben -   So, in order to build this, did you need to build special equipment for it or can you actually use kit off the shelf?

Luis -   We did use off the shelf elements, but it was mainly by accident.  Frank De Winne, the commander of the space station, who spent six months onboard, has been working with us, with our section in the Space Agency, for many years, and I invited him to review the requirements for the first WEAR tool we were developing.  Well, he told us, "I'm going onboard and I'd like to take one with me."  So, usually you develop custom equipment to fly onboard, according to the safety standards and to make it more operable onboard.  In this case, we didn't have the time, so we built this prototype - and it is only a prototype - to show how helpful it would be to have this kind of system onboard, and also to assess its usability.  We had very little time, but we had to do the whole development, the safety assessment with NASA, and all of the integration testing process.  So, it was a very challenging project.

Ben -   So, once they actually put it on and presumably calibrated it for each person, did they find it really helped?

Luis -   The main problem, as you mentioned, is the calibration.  Because the system was off the shelf, we couldn't do much about the adjustment to make it very simple, so it takes a bit of time to get the overlaid images lined up with what you're looking at.  But after the first calibration is done, they find it very convenient and easy to use.  Well, you need some voice training, because the system is guided by voice.  We made an assessment - we gave them a questionnaire, and usability was one of the points - and he was happy.

Ben -   So, it sounds like there's quite a long way to go.  This was just a prototype with off the shelf kit, but still very promising.  What's your next step?

Luis -   Well, our friend Frank, the commander of the ISS, says he's happy with it and he would like to have a second version.  So, we have more people, more colleagues, working on human interfaces onboard.  We work together with NASA because on the ISS we have many partners - Americans, Russians, Europeans, Japanese - so if you want to put something onboard, it has to be agreed with all of the people working on the ISS.  So now, if we want to use it for good, in the next phase we need to improve the size, the calibration and the usability, and to try to integrate it more with the other systems onboard the ISS - the inventory management system and the user interfaces we have onboard.

Ben -   So, there is definitely a lot of work to do.  Well thank you ever so much for joining us and for sort of filling us in on what could be a very nice new way to stop bits of paper and Haynes manuals essentially floating around the air.  So, thank you very much, Luis.

Luis -   Okay.  Thank you for the invitation.

Ben -   That was Luis Arguello from the European Space Agency, explaining how wearable augmented reality systems could greatly improve efficiency on the International Space Station, eventually doing away with the need to have paper manuals floating around.

What’s the point of keeping a nerve cell alive without an axon?

We put this question to Dr Michael Coleman:

Michael - That's a very good question. We cut axons in a culture dish because that's a very well defined beginning of the degeneration period, and it gives us good control over when that degeneration starts. But there's very good evidence now that a similar mechanism of degeneration takes place in several neurodegenerative disorders: motor neuron disease; glaucoma, where pressure in the eye actually causes the axons to degenerate; and probably in Alzheimer's disease too. So, we're using the cutting model as a model for what's happening in these neurodegenerative disorders. A good analogy, going back again to the traffic holdup, would be the difference between actually closing a motorway, so you totally block it - that would be the cut - and restricting the traffic, for example to one lane or with speed limits. The type of holdup is quite different, but the particular traffic that's affected is actually going to be quite similar.

Can augmented reality help with forensic reconstruction?

We put this question to Dr Tom Drummond:

Tom - Well, this business of taking the real world and building virtual models of it is something we also work on. In the context of computer vision, that's called reconstruction. And indeed, one of the things that we do in the lab, for the purposes of producing content for augmented reality, is to have a system whereby you can put an item in front of a webcam, rotate it slowly in front of the camera, and the computer will automatically build an accurate 3D model of what it's looking at. Now, that's at the small scale, but indeed, people do work on building larger systems for forensic purposes as well.

Helen - So there you go. It could actually be real. Excellent. Thanks very much.

Ben - Well, that's quite nice to know. It does look absolutely incredible whenever they do it on TV. A bit like their incredible ability to reconstruct things from a reflection in a raindrop on a window somewhere, from which of course they can read car number plates or get very accurate pictures!

56:37 - How cold can it be before evaporation stops?

When does it make sense to hang washing out on the line? Will it still dry even in low temperatures?

How cold can it be before evaporation stops?

We put this question to John King, from the British Antarctic Survey in Cambridge:

John - Even when it's very cold, washing will still dry, but it may dry so slowly that it really just isn't worth it. The reason washing dries is because water evaporates from it. If a wet surface is in contact with the air, some molecules of water will leave the surface and go into the air, but at the same time, molecules of water vapour from the air will be coming into the surface. Eventually, it will reach some kind of equilibrium where the amount of water leaving the surface is the same as the amount coming in. We then say that the air is saturated with water, and once the air is saturated, no more [net] evaporation can take place. Now, if we look at the basic physics underlying this, we find that the amount of water that air can hold when it's saturated depends very strongly on temperature, and the warmer the air is, the more water it can hold. So, evaporation tends to proceed much more quickly when it's warmer than when it's cold. But even when it's quite cold, as long as the air isn't saturated, your washing will dry, but it may dry very, very slowly, and it may rain before it gets dry! In general, we don't hang washing out to dry in the Antarctic because it is so cold that things would take such a long time to dry. Maybe on a really nice sunny day in the middle of summer, you might get the tea towels dry, or something like that.

Diana - Evaporation does require energy, and the warmer the air, the more energy there is to remove dampness from your washing. But as our forum-goer Eric Taylor said, it has more to do with the relative humidity than the temperature. So, if you live in a dry but cold area, you might be better off hanging out your washing than if you were in a hot but humid country. Something similar happens in a region of the Antarctic called the Dry Valleys, where there is no ice or snow on the ground because what does land there sublimates directly into vapour.
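
To put a rough number on John's and Diana's point, the saturation vapour pressure of air rises steeply with temperature (the Magnus approximation below), and the drying "driving force" is the gap between that and the actual vapour pressure set by the relative humidity. The formula is a standard approximation; the example temperatures and humidities are just illustrative.

```python
# Drying potential as a vapour-pressure deficit, using the Magnus
# approximation for saturation vapour pressure over water.
import math

def saturation_vapour_pressure(temp_c):
    """Magnus formula; result in hPa (approximate)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def drying_potential(temp_c, relative_humidity):
    """Vapour-pressure deficit in hPa: bigger means faster drying."""
    e_sat = saturation_vapour_pressure(temp_c)
    return e_sat * (1.0 - relative_humidity)

print(drying_potential(25, 0.50))  # warm, moderately humid: ~15.8 hPa
print(drying_potential(2, 0.50))   # cold, same humidity: ~3.5 hPa, much slower
print(drying_potential(2, 0.95))   # cold and nearly saturated: barely dries
```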
