Fighting cyber crime with AI

17 October 2017

Interview with 

Dave Palmer, Darktrace


Globally, crime committed online currently costs economies and individuals over $3 trillion per year, and that’s predicted to more than double within the next 5 years. So can Artificial Intelligence help to combat the threat, or will it help to fuel the fire? Georgia Mills spoke to Dave Palmer, who works with the Cambridge-based cybersecurity company Darktrace...

Dave - Darktrace’s interest is: can we use advanced mathematics and AI to really replicate the idea of an immune system, where we know the normal self of everyone and everything inside of a business and how they all relate to each other? If someone or something's behaving really strangely then we can start to deal with that problem before it gets to the point where millions of credit card details or medical records are lost, or a manufacturing plant gets shut down.

Georgia - How would that work in practice?

Dave - We’re very predictable in how we behave using our smartphones and our laptops and all the different technology that exists, particularly within businesses. So by understanding what it means to be me, what it means to be Dave, and how I use my email and all the pieces of technology inside of my business, then we can tell if perhaps my laptop's been infected and is starting to hoard data, or communicate with the outside world in a way that suggests it might be under someone else's control, and then we can start to do something about it. Now that could either be telling a human being: hey, here’s a problem and you should go and check it out. Or, increasingly, the cybersecurity industry's going to be moving into autonomous response: having the machines, on our behalf, start to deal with problems and slow them down, or even potentially clean them up in the longer term.

Georgia - What’s the machine learning aspect of this technology?

Dave - Imagine a modern business, or even look around wherever you are now, you’ll start to see technology everywhere whether it’s digital phones, smart TVs, video conferencing units and, of course, all the things we take for granted like laptops, and smartphones and data centres and the cloud. There’s an enormous amount of complexity there. It’s not unusual to find in an organisation of 10,000 people that there are probably at least 50,000 pieces of technology as a rule of thumb.

So using the AI techniques to be able to learn what’s normal and really truly understand the relationships between all those technologies and all those people, instead of asking the humans to do it is really very useful indeed. Then the humans can just be told about the things that are interesting instead of having to try and wade through all of that complexity and guess everything that might go wrong.
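Darktrace's actual models are proprietary, but the general idea Dave describes — learn a per-device baseline of "normal" behaviour, then flag deviations instead of asking humans to wade through the data — can be sketched very simply. The example below uses a basic z-score test on a hypothetical metric (a laptop's daily outbound data volume); a real product would use far richer models, and all the names and numbers here are illustrative assumptions:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a baseline (mean, std dev) from past daily upload volumes in MB."""
    return mean(history), stdev(history)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above normal."""
    mu, sigma = baseline
    return (observed - mu) / sigma > threshold

# Hypothetical: one laptop's typical daily outbound traffic, in MB.
history = [40, 55, 48, 62, 50, 45, 58, 52, 47, 60]
baseline = build_baseline(history)

print(is_anomalous(51, baseline))    # an ordinary day -> False
print(is_anomalous(900, baseline))   # a sudden data-hoarding spike -> True
```

The point of the sketch is the division of labour: the machine learns "normal" for each of the 50,000 devices, and a human only hears about the handful that cross the threshold.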

Georgia - You mentioned Darktrace spots unusual activity straight away if something’s not quite right, but is there a way to sort of block the holes before they’re exploited in the first place? Is it possible to use machine learning to make a hack-proof system?

Dave - I’m very cautious about saying yes to that, given where we are as a society under considerable digital attack at the moment. I think the thing that’s really hard about cyber security is there isn’t a perfect answer on what secure looks like. Every part of our digital life is based on millions, if not billions, of lines of code written by different people from all over the world, and different companies, and supply chains that are very deep indeed. So the idea of using machines to go through and evaluate the riskiness of every single line of code and piece of software that we kind of take for granted in the interactions that we have on a daily basis — I think it’s quite far away. I think we need to have made an awful lot of progress on AI before it’s smart enough to do that, getting much closer to artificial general intelligence than the artificial narrow intelligences we have today.

But that said, I think AI will start changing everything in the cyber security sector. I think there will be replacements for the antivirus that we all run on our laptops with something that’s AI-enhanced and better at stopping bad stuff.

Georgia - Does the technology make it easier or more difficult for people like you to protect our data?

Dave - We can definitely expect AI to start making spam much more effective than it’s been in the past. Here’s a really quick example: imagine my laptop got hacked and a piece of AI software on my laptop was able to train itself on all of my emails, my calendar, my iMessages, my WhatsApp. It would then be able to individually communicate with all the people in my life, replicating my communication style to spread itself. So you and I have a shared diary appointment to talk today; perhaps it sends a little note to you saying: oh, I have some questions, could you have a look at this attachment and let me know what you think? I think you’d probably open that email because it’s going to sound like it's from me and it’s going to be contextually relevant.
