How will AI evolve?
Interview with Mike Pound, University of Nottingham
Will more data and more sophisticated models mean generative AI continues to improve? Or will we see diminishing returns from this burgeoning field? Mike Pound is an associate professor in computer vision at the University of Nottingham.
Mike - I think it is getting more powerful. Continued development in the research community means that some of these methods are now much more performant than they were before. They produce really, really impressive output. But I also think that the way we use these tools has become easier. Chatbots, for instance: there are websites where you can go and talk to them, and that gives people the impression that AI is suddenly on the rise, even though it's been developing behind the scenes for quite a few years.
Chris - In your view, then, is it going to continue on its present trajectory? Or is it going to be a bit like Moore's law, where we said that, with the power of computer chips and processors, we were going to reach a point where they couldn't get any better because something begins to hold things back?
Mike - It's something that not every academic agrees on. I think there is a tendency to assume that, because of this rapid rise of, let's say, chatbots, they're going to only get better and better from now on. The view that many people subscribe to, and in fact the one I subscribe to at the moment, is that we haven't really seen evidence yet that they will continue to improve at quite this rate. We've seen new chatbots emerge, but since then we've seen iterations of these chatbots that don't double the power, they don't triple the power, they just get a bit better each time. I think for a while, unless something drastic changes, we might expect to see iterations and evolutions of these things rather than something that comes along and just blows everything out of the water again and impresses with a whole new set of abilities.
Chris - And when one considers these engines, are they really generalists? As in, as they get more powerful, are they just going to get better and better at doing everything, like a Swiss army knife that does a range of different jobs and does them all well? Or are they very, very specific, very good at doing one thing really, really well, with a few spinoffs they do half-heartedly while we're fooled into thinking they're doing a good job?
Mike - This is the really interesting question: to what extent can we make AI a general purpose tool and to what extent do we have to have a specific AI for a specific role? I think at the moment the very best models are the ones that are specific to a task. In my research I often have a very specific thing that I'm trying to do. For example, I'm trying to diagnose something in a medical image, or I'm trying to identify something in a plant image, and in those times a very specific AI aimed at that task is the thing that will get the best results.
Chris - When I train an AI, if I'm doing a really good job and I want to train it to recognise cancerous moles, I train it to recognise a mole, then I train it to distinguish cancerous moles from healthy moles. But it's focusing on moles. I'm not distracting it with pictures of mathematical formulae or daisies and tomatoes.
Mike - Yes, that's exactly right. If you ask a chatbot to diagnose a cancerous mole, it will say something, but that thing is unlikely to be correct, because we've never actually explained that problem or the biology behind these cancerous moles or anything like this. I think that in the tech sector, among the big companies training these giant models, there's a tendency to believe that if we just double or triple the amount of data, we will get better and better models, and they'll become so performant that they can do any task: they can do medical imaging, they can do plant imaging, they can analyse our general home photos as well. And actually that doesn't seem to be the case. If you want the very best performance, it's better to have a smaller dataset on a specific problem.
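A minimal sketch of the task-specific approach Mike describes, assuming PyTorch and torchvision (0.13+); the dataset, labels, and hyperparameters are illustrative placeholders. Rather than asking a general chatbot, you fine-tune a small image classifier on the one narrow problem: healthy versus cancerous moles.

```python
# Sketch: fine-tune a small classifier on one narrow task
# (healthy vs. cancerous moles) instead of a general model.
# All data and settings below are illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic backbone. In practice you would load
# pretrained weights, e.g. weights=models.ResNet18_Weights.DEFAULT;
# weights=None keeps this sketch runnable offline.
model = models.resnet18(weights=None)

# Freeze the general-purpose features...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer with a two-class head:
# healthy mole vs. cancerous mole.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy tensors standing in
# for a curated, task-specific dataset of mole photographs.
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images
labels = torch.randint(0, 2, (8,))     # 0 = healthy, 1 = cancerous

logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Freezing the backbone and training only the new head is one common way to get strong results from a small, focused dataset, which is exactly the trade-off Mike is pointing at.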
Mike - It might be that we do see an uptick in the ability of these models the more data we add. But the resources it takes to train them on internet-scale data are now becoming very, very high. There will be a point, I think, where we have to start making decisions as to whether it's efficient, or indeed whether we can afford to do it at all. It costs a lot of money to train these models, and if they only get a small performance boost each time you double the size of the data, at some point you're going to decide that's not worth doing.
Chris - Niels Bohr famously said, or is alleged to have said, that prediction is very hard, especially when it concerns the future. I'm going to ask you to predict the future. What's going to be the thing we have to work on or solve next? Where is the next emerging thing? What can it do at the moment, but not that well, or what's the gap we haven't closed? What dots are we going to join next, do you think, to really move this on?
Mike - So I think, separately, we're going to see people continue to develop bigger versions of these models, and we will see what happens there, and we're going to see people continue to develop smaller models for specific tasks. But I think the thing we are really missing at the moment, especially with the largest of these models, is having them actually interact with us on a day-to-day basis. People often ask me what the difference is between a smartphone assistant like Siri and a chatbot like ChatGPT, and the answer is that, actually, Siri is much more constrained because it has to do real things. If you ask Siri to put something in your calendar, it actually has to go and do that, and it has to know how to talk to your calendar app. If you ask it to go on the web and search for some data, it has to be able to do that.
Mike - Most chatbots don't need to do this. They can just write text that looks nice but hasn't actually had to source any data; it's just what's in the training set. I think it becomes quite a lot more difficult when you have to control systems based on the output of your AI, because the risks are much higher. Having a chat with your chatbot, if it gives you a bad poem or writes a bad paragraph, that's not a serious problem. But if it's controlling your self-driving car, that becomes a much bigger problem. And so I think that over the next few years we're going to have to really start to work on how we integrate these systems safely with the things we actually do day to day.
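A minimal sketch of the gap Mike points to, in plain Python with no real model behind it: a chatbot only emits text, whereas an assistant must turn its output into a structured action that the surrounding system validates before executing. The tool name and the add_calendar_event function are hypothetical stand-ins, not any real API.

```python
# Sketch: dispatching a (hypothetical) model output either as plain
# text (chatbot case) or as a structured tool call that must be
# validated before anything real happens (assistant case).
import json
from datetime import datetime

def add_calendar_event(title: str, start: str) -> str:
    # Stand-in for talking to a real calendar app; validate first,
    # because a malformed action is riskier than a bad paragraph.
    datetime.fromisoformat(start)  # raises ValueError if invalid
    return f"Created event '{title}' at {start}"

# Registry of actions the system actually permits.
TOOLS = {"add_calendar_event": add_calendar_event}

def handle_model_output(raw: str) -> str:
    """Execute a structured tool call, or fall back to plain text."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # free text: the classic chatbot case, nothing to do
    fn = TOOLS.get(call.get("tool"))
    if fn is None:
        return f"Refusing unknown tool: {call.get('tool')!r}"
    return fn(**call.get("arguments", {}))

# Simulated model output asking the system to do a real thing:
output = ('{"tool": "add_calendar_event", '
          '"arguments": {"title": "Dentist", "start": "2025-03-14T09:00"}}')
print(handle_model_output(output))
```

The design point is that the model never touches the calendar directly: every action passes through an allow-list and a validation step, which is where the safety work Mike describes actually lives.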
Chris - Some of us are long enough in the tooth to remember the dot-com bubble that rapidly turned into the dot-bomb. Do you think we're at that sort of precipice again, that this is all overinflated and overhyped and it's going to implode? Or do you think that actually it's in good shape and here to stay?
Mike - In an annoying way, I actually think it's both things at the same time. I think we've seen incredible abilities emerge from AI in the last few years that are going to transform areas like drug discovery and have a profound impact. But I also think there is a lot of hype around these chatbots, with people assuming they're doing something that perhaps they're not. When we actually start trying to use these chatbots to do things, we're going to find they need to be much more accurate than they are, all the time. That's when it becomes really, really difficult. There will be a bursting of the bubble in the sense that people will get used to what these tools can and can't do and will use them with that in mind; they perhaps won't be quite so hyped as they are today. But I do think they are incredibly useful tools, and they are going to become more and more prevalent in our day-to-day lives.