Ukraine using Clearview Facial Recognition AI

AI facial recognition is now a tool in the hands of the Ukrainian government...
21 March 2022

Interview with 

Michael Wooldridge, University of Oxford & Ryan Mac, The New York Times


Last month, we delved into the murky waters of facial recognition technology, asking how comfortable we feel trading our anonymity for our alleged safety. Little did we know then how quickly that question would become as pertinent as it is now for the people of Ukraine. Reuters reported last week that the American company Clearview AI, which describes itself as the world's largest facial network, had told them it was offering its service free of charge to the Ukrainian government. In what capacity they intend to use the technology is, so far, not entirely clear, but Ukrainian government officials have hinted at using it to identify people of interest at military checkpoints and to identify deceased combatants. Clearview, however, is not without its critics. The response to this development has centred as much on the company's furthering of its own self-interest as it has on the potentially game-changing tool now in the hands of the Ukrainian resistance. James Tytko has this report...

James - Clearview AI CEO, Hoan Ton-That, claims that his company's database contains over 10 billion images of faces, and that its AI is even better at identifying people than you or I are. I spoke with Michael Wooldridge, professor of computer science at the University of Oxford, to understand how this technology works.

Michael - The classic approach that people focused on in the early days of trying to get software that could do facial recognition was to reduce your face to some kind of series of signature markers. For example, you would measure the distance between your eyes, the triangle that the centres of your eyes make with the tip of your nose, and so on. The idea is that if you can find the right set of signature markers, you get a unique signature for your face, so when the software sees a picture of your face, assuming it has seen it before, it will be able to recognise it. The second approach is to use neural networks, one of the dominant AI technologies of the day. What neural networks are very good at is recognising patterns and recognising images, so you can apply neural networks directly to the problem, and that technique is very, very popular and very successful these days. But to make it work, the software has to have seen your picture before. In the case of neural networks, it typically has to have seen lots of examples in order to be able to recognise you in the future. Now, of course, we spend our lives providing social media with pictures of ourselves, very carefully labelled with my name, the names of my children, my wife's name, my friends' names, and so on. What we are doing is feeding social media algorithms, and potentially other people that we're not quite as comfortable having access to these pictures, with the training data to recognise our faces.
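The signature-marker idea Michael describes can be sketched in a few lines of Python. This is purely an illustrative toy, not any real system's method: the landmark names, the choice of markers, and the matching tolerance are all assumptions made for the example. The signature here is the two eye-to-nose distances normalised by the distance between the eyes, which makes it insensitive to how large the face appears in the photo.

```python
import math

def signature(landmarks):
    """Reduce a face to a small tuple of geometric 'signature markers'.

    Uses the triangle formed by the two eyes and the nose tip: the two
    eye-to-nose distances, each divided by the inter-eye distance so the
    signature does not change when the photo is scaled up or down.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    return (
        dist(landmarks["left_eye"], landmarks["nose_tip"]) / eye_gap,
        dist(landmarks["right_eye"], landmarks["nose_tip"]) / eye_gap,
    )

def same_face(sig_a, sig_b, tolerance=0.05):
    """Crude match: signatures agree if every marker is within tolerance."""
    return all(abs(a - b) < tolerance for a, b in zip(sig_a, sig_b))

# The same face photographed at twice the scale yields the same signature.
face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 70)}
scaled = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}
print(same_face(signature(face), signature(scaled)))  # True
```

Real systems of that era used many more markers, and, as Michael notes, the field has since largely moved to neural networks that learn their own features from labelled example images rather than relying on hand-picked measurements.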

James - Anyone who's used facial recognition technology to unlock their phone, for example, knows it doesn't work every time, right? How accurate is this technology? It can't be immune to making mistakes.

Michael - By no means is it immune to making mistakes. I think it's probably fair to say that, in tests, current facial recognition technology is on average better than people are. Once you've reached that point, I think you've reached a tipping point in terms of the quality of the technology. But, as you say, it makes mistakes, and it makes mistakes, like all AI technologies, in very unpredictable and weird ways. This is one of the reasons why incautious use of this technology is something that we should all be concerned about.

James - In light of my discussion with Michael and his warning to err on the side of caution in the use of AI facial recognition, I was keen to learn more about what kind of company Clearview is. Ryan Mac is a tech reporter at The New York Times who's been following the company for a while.

Ryan - The company claims, and I should stress that they're claiming here, that they have more than 10 billion photos in their database now.

James - And why do you say that? Why do you say it's a claim rather than taking it at face value?

Ryan - I'd say that because, in the course of my reporting, myself and others have found plenty of statements made by the company and its CEO, Hoan Ton-That, that don't end up panning out or are in some ways exaggerated. Perhaps the easiest one to point to: in their marketing materials to police early in the company's history, they claimed 100% accuracy for their facial recognition technology. If you speak to any expert in the field, no one would ever guarantee 100% accuracy; it's just a bad standard to hold yourself to. There are plenty of other things the company has claimed that haven't panned out. In any case, they're claiming they have 10 billion photos in their database. They've also claimed that 2 billion of those come from VKontakte, Russia's equivalent of Facebook and the most popular social network in the country.

James - Ukrainian vice prime minister Mykhailo Fedorov has spoken about the scenarios in which Clearview might be used, including identifying deceased Russian soldiers and prisoners of war, and looking for missing persons. I wondered what might be in it for Clearview.

Ryan - If you look at the history of this company and some of our reporting, their MO is to get this technology into as many hands as possible. Through these free trials, they hope to prove themselves and then get these police departments or government organisations hooked into paying subscription fees every month or year. But it's also a positive story for them that their tool is being used in a war. You can see how the company would take that as a PR win. They're already using it in litigation: they're being sued in the US for violating certain privacy laws, and they've inserted this into their case, saying, 'Hey, actually, we're helping in Ukraine. We are doing good here.'

James - Aside from Clearview's less-than-squeaky-clean record, the use of AI facial recognition in wartime is itself a big point of concern.

Michael - If this software were used at checkpoints by people with guns to try to identify where someone comes from and so on, that isn't a way that I would like this technology to be used. Of course, all of us desperately feel for everybody in Ukraine, but this doesn't feel like a direction that we, or any responsible military, should be going in.
