Q&A of the Day – Uber’s rating system & Facial recognition accuracy
Each day I’ll feature a listener question that’s been submitted by one of these methods.
Email: brianmudd@iheartmedia.com
Twitter: @brianmuddradio
Facebook: Brian Mudd https://www.facebook.com/brian.mudd1
Today’s entry...
Two questions/observations. On Uber, what's to stop drivers from coding discriminatory practices into passenger ratings? On facial recognition, is the 10% error rate an incorrect ID or an inability to ID? Big difference in implications.
Bottom Line: These are two stories I teed up yesterday, and both are good questions. Starting with Uber. Uber's new policy of banning passengers who receive especially low ratings from drivers does open itself up to potential pitfalls. First, let's look at why they're doing it: driver safety. Uber's done a lot to improve the safety of passengers, but until now hadn't done much to protect its drivers. I'm not saying this idea is the magic bean, but what they're trying to do makes some sense. We often talk, after the fact, about the warning signs violent people showed before harming others. It's rare that someone goes from model citizen to violent attacker overnight. This might help Uber catch a potential problem passenger before they harm a driver. I'm willing to give them the benefit of the doubt for now, and if I were the one driving, I'd want some type of effort made on my behalf. But to the crux of your question: could biased drivers lead to unfair passenger ratings? Certainly possible. It's not implausible to think a racist driver could rate all passengers of a race they're prejudiced against poorly, as an example.
The upshot is this: similar risks exist in many aspects of our society, and there are two motivating factors for Uber to not let this get out of hand. The first is the need for passengers. Uber's never turned a profit. They're a long way from turning one and need all the passengers and revenue they can get. They're not positioned to err on the side of banning passengers; I'd expect them to aggressively monitor drivers' low ratings of customers, especially at the point where someone would be banned from the service. The second is the technology at their fingertips. It'd be easy for them to pick up on rating trends from individual drivers if those trends existed; a rough sketch of what that kind of monitoring could look like follows below. The world's not perfect and this won't be either, but let's see how it goes.
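To illustrate, here's a minimal sketch in Python of the kind of outlier check a company like Uber could run on its own data. Everything here is an assumption on my part: the function name, the thresholds, and the shape of the data are hypothetical, and nothing about Uber's actual systems is known to me. The idea is simply to flag drivers who hand out low passenger ratings far more often than the fleet as a whole.

```python
from collections import defaultdict
from statistics import mean

def flag_outlier_raters(ratings, min_trips=50, low_star=2, multiple=3.0):
    """Flag drivers who give low passenger ratings far more often than
    the fleet-wide norm. `ratings` is a list of (driver_id, stars)
    tuples; all thresholds here are hypothetical."""
    by_driver = defaultdict(list)
    for driver_id, stars in ratings:
        by_driver[driver_id].append(stars)

    # Fleet-wide share of ratings at or below the low-star cutoff.
    all_stars = [stars for _, stars in ratings]
    fleet_low_rate = sum(s <= low_star for s in all_stars) / len(all_stars)

    flagged = []
    for driver_id, stars_given in by_driver.items():
        if len(stars_given) < min_trips:  # too little data to judge
            continue
        low_rate = sum(s <= low_star for s in stars_given) / len(stars_given)
        if low_rate > multiple * fleet_low_rate:
            flagged.append((driver_id, round(low_rate, 3),
                            round(mean(stars_given), 2)))
    return flagged

# Example: ten typical drivers plus one ("d_bad") who rates passengers
# low half the time. Only d_bad should be flagged.
sample = []
for i in range(10):
    sample += [(f"d{i}", 5)] * 60 + [(f"d{i}", 1)] * 2
sample += [("d_bad", 1)] * 30 + [("d_bad", 5)] * 30
print(flag_outlier_raters(sample))  # -> [('d_bad', 0.5, 3.0)]
```

A real system could cut the data much finer (by time, region, trip type), but even a crude check like this would surface a driver who downrates passengers wholesale. As for the facial recognition question...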
To fill you in quickly on the facial recognition question if you missed it: official government research has found that high-end AI is accurate more than 90% of the time in nationwide searches. Now to your question: what about the 10% of the time it's not accurate? The answer is generally a false positive, and that's problematic for the obvious reasons. I think that's why it's important to rely on more than the AI alone. But do I think law enforcement should be able to use it as a tool? Without a doubt. First, if it helps you catch your suspect nine times out of ten, that's a huge win, period. Second, provided investigators are responsible about requiring supporting evidence for those the system identifies, the ten percent should be manageable without innocent people being detained. That's key. This is simply high-tech profiling, which has always been an effective law enforcement tactic. It just needs to be handled responsibly. If we can't trust law enforcement to be responsible generally, we've got much bigger problems than this conversation.
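To put rough numbers on that point, here's a toy calculation in Python. The only input taken from the research above is the roughly 90% accuracy rate; the number of searches is invented for illustration. It shows why corroborating evidence matters: even a small error rate adds up when the tool is used at scale.

```python
# Toy arithmetic with illustrative numbers. The ~90% figure is the
# accuracy rate cited above; the number of searches is invented.
searches_per_year = 1_000   # hypothetical searches run by an agency
top_match_accuracy = 0.90   # system returns the right person ~90% of the time

correct_ids = searches_per_year * top_match_accuracy
false_positives = searches_per_year * (1 - top_match_accuracy)

print(f"Correct identifications: {correct_ids:.0f}")      # 900
print(f"False positives:         {false_positives:.0f}")  # 100
```

A hundred innocent people surfaced as top matches in a single year is exactly why a match should start an investigation, not end one.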