Computer coding with chatbots
James - Mike Wooldridge is still with us, and I wonder what he thinks of what he's heard today. Catherine mentioned that AI detection software was able to detect the phoney science papers. But is this foolproof?
Michael - It's not foolproof, it's far from foolproof. There's an awful lot of work still to do. One of the interesting ideas being worked on now is that OpenAI could insert a digital watermark into the text that ChatGPT generates: something that allows you to analyse a piece of text and tell that it was actually produced by a system. We don't have that yet, but I think it's a very interesting direction. At the moment, though, we educators have got a real headache identifying this material, and for researchers looking at abstracts and research papers it's going to be a challenge in the years ahead. The big worry for me is that peer review, the process we use to evaluate scientific contributions, is already under strain, and systems like this might be used to swamp and overwhelm it with an awful lot of very plausible-looking reports and papers. So there's a lot of concern around those issues right now.
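The watermarking idea Michael describes can be sketched in a few lines. The sketch below assumes a simplified "green-list" scheme, in which a generator is biased towards tokens whose hash (seeded by the previous token) falls in a designated set; a detector then just measures how often that happened. The function name, the hashing scheme, and the 0.5 ratio are all illustrative assumptions, not OpenAI's actual method, which has not been published.

```python
import hashlib

def green_fraction(tokens, green_ratio=0.5):
    """Fraction of tokens whose predecessor-seeded hash lands in the
    hypothetical 'green list'. A watermarked generator would be biased
    towards green tokens, so a fraction well above green_ratio hints at
    machine generation; ordinary text should sit near green_ratio."""
    green = 0
    for prev, tok in zip(tokens, tokens[1:]):
        # Seed the test with the previous token so the split is
        # deterministic but effectively pseudo-random per position.
        h = hashlib.sha256((prev + "|" + tok).encode()).digest()
        if h[0] / 255 < green_ratio:
            green += 1
    return green / max(len(tokens) - 1, 1)
```

The appeal of a scheme like this is that detection needs no access to the model itself, only to the shared hashing rule, which is what would make third-party verification of provenance possible.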
James - We haven't had a chance to even talk about the potential for ChatGPT to produce computer code. What are the possibilities there?
Michael - As I already mentioned, the way these programmes are trained is essentially that you download the entire worldwide web and train on that, and in amongst all of that there's a huge amount of computer code. The site we like to use to upload our code, to prove how clever we are, is called GitHub, and on GitHub there are probably millions of computer programmes that you can analyse. Computer programming languages like Python are much simpler than human languages like English: they're very well defined and actually incredibly simple to analyse. So it's no surprise that systems like ChatGPT should be quite good at analysing and producing computer code. Where the technology is up to right now is producing relatively short programmes, tens of lines of code, which are very often the useful little tools and utilities we use in our day-to-day programming. I don't envisage them producing Microsoft Windows or Microsoft Excel anytime soon. But there are some really fascinating applications. One of the most interesting is that ChatGPT can't do arithmetic and can't do mathematics, because that's not what it was designed for, but it can write computer programmes that do mathematics and arithmetic. In other words, there's a problem it can't solve itself, but it can write a computer programme to solve that problem. At this point I just wish Alan Turing were alive to see this technology. He would love it; I think it would really tickle his fancy. It's absolutely fascinating from the point of view of computer science.
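To make Michael's point concrete, here is a hypothetical example of the sort of tens-of-lines utility he describes: a chatbot cannot reliably multiply a hundred numbers together in its head, but it can emit a short Python program that does so exactly, using Python's arbitrary-precision integers. The function name is mine, chosen for illustration.

```python
def exact_factorial(n):
    """Compute n! exactly. A language model predicting text token by
    token gets large arithmetic like this wrong, but the program it
    writes does not: Python integers have unlimited precision."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# 100! is a 158-digit number -- far beyond what a chatbot can
# compute reliably in plain text, trivial for the code it writes.
print(len(str(exact_factorial(100))))  # prints 158
```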
James - Unbelievable, isn't it? One other thing, Mike, while we've got you, that I wanted to ask: another artificial intelligence technology that seems to be getting better with each passing day is deepfakes, and the mind boggles at the possibilities when the sophistication of ChatGPT is integrated with the deepfake software out there as well. Are we at a stage where we almost need to question anything we see online now?
Michael - I think absolutely. We're certainly at the point now where you can't trust text that you find on social media and so on; there's just no reason to trust it. That's why it's incredibly important to have provenance: to know with confidence where a piece of text came from. And computer-generated images and videos are not far behind. This is now very much within sight.