Apple, Facebook and Google are in a race to dominate Artificial Intelligence (AI). We think of them today as search engines, computer makers and social media platforms but tomorrow they want to own the tech that powers and makes decisions for the whole world.
But all of the key players have terrible diversity problems. Only 30% of Google’s tech staff are women and only 2% are black. Facebook is worse, with the same gender split and only 1% of staff identifying as black. Apple is the best of a bad bunch with the same gender split and 7% of tech staff identifying as black. To put that in context, 14.6% of people identified as African American in the 2016 USA census.
This has been an embarrassment for these companies for years, but it’s only really the companies themselves that suffer. The lack of diversity makes it harder for them to develop products that resonate fully with a diverse population and leaves them prone to making stupid mistakes or missing opportunities.
McKinsey research shows that companies in the top quartile for racial and ethnic diversity are 35 percent more likely to have financial returns above their industry median. They also found a linear relationship between racial and ethnic diversity and financial performance – for every 10 percent increase in racial and ethnic diversity on the senior-executive team, earnings before interest and taxes (EBIT) rises 0.8 percent.
The business case for diversity was proven some time ago. If only someone could convince the guys working in tech.
But the huge progress being made in AI means the impact of this lack of tech diversity could shift from being a problem for a company and its shareholders to a real crisis for society.
This is something that you need to start caring about.
Artificial Intelligence is progressing at a rapid rate. Whether you are aware of it or not, your day is likely already being affected and influenced by AI.
Millions of us interact with Siri and Alexa every day – both of them true AI systems giving us answers to our questions and results to our searches. Our social media feeds are organised and prioritised by AI already – neural networks determine which posts and pictures you see and don’t see.
AI also dictates how you travel. Both Google and Apple use artificial intelligence to interpret thousands of data points to give you real-time traffic data. Uber uses AI to decide on pricing and even which car is allocated to you.
When you look at photos on your phone and see faces matched to people, or unlock your phone with Apple’s Face ID, there is a neural network and AI making that work.
This very article you are reading was written by me (still human) but edited by Grammarly – an AI system that was trained on thousands of documents and attempts to fix the worst of my grammar and punctuation errors.
AI is already everywhere and we’re handing over more power to it every day. Law enforcement bodies in the UK and USA are enthusiastic about using AI and already have trials and programmes in place.
AI systems are unemotional but don’t for a second think they are unbiased.
Many people think that AI systems make unemotional decisions based on cold hard facts – that could not be further from the truth. The algorithms are designed by people, with goals and parameters set by people and they are trained on datasets collected and compiled by people. Whilst it’s true that AI systems find patterns in data that humans did not know about, the goals they are looking for and things that they value are very much set by the people who designed them.
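To make that concrete, here is a minimal sketch of how a system "learns" bias. The dataset below is entirely hypothetical – the activity labels and counts are invented for illustration – but the mechanism is real: a model trained on skewed examples faithfully reproduces the skew.

```python
from collections import Counter

# Hypothetical training set: activity labels paired with the gender of the
# person pictured. The skew is invented, but mirrors the kind of imbalance
# researchers have found in real image datasets.
training_data = (
    [("cooking", "woman")] * 80 + [("cooking", "man")] * 20 +
    [("repairing", "man")] * 75 + [("repairing", "woman")] * 25
)

def learn_association(data):
    """Learn the most common gender seen alongside each activity."""
    counts = {}
    for activity, gender in data:
        counts.setdefault(activity, Counter())[gender] += 1
    return {activity: c.most_common(1)[0][0] for activity, c in counts.items()}

model = learn_association(training_data)
print(model["cooking"])    # the model now associates cooking with "woman"
print(model["repairing"])  # and repairing with "man"
```

Nothing in the code is malicious; the bias arrives entirely through the data the designers chose to train on.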
The growing problem we have is that gender and racial diversity in machine learning and AI teams is even worse than the problem we have in tech generally.
I’m not suggesting for a second that teams of mostly white, male engineers are trying to do evil or are even standing by consciously while it happens. But non-diverse teams fail to spot and prioritise issues that diverse teams would catch – and in this case, it’s black faces that they are missing.
WIRED magazine found that whilst 21 per cent of technical roles at Google are filled by women, only 10 per cent of the 641 people working on machine intelligence were female. 22 per cent of Facebook’s engineers are women, but only 15 per cent of its AI research group are female.
When WIRED looked at who had contributed and spoken at three leading machine learning conferences in 2017, they found only 12 per cent were women whilst 88 per cent were men.
AI systems are being given more real power over us every day
Siri and Alexa are fun. But don’t think AI is only being used for innocent stuff like playing music or checking the weather. AI is being used already by governments and law-enforcement bodies.
Facial identification is a key area where AI-based solutions are used every day. But with the teams responsible for many of them being non-diverse themselves, they’re building biased and skewed solutions.
In 2016, researchers at the universities of Virginia and Washington found that two large image datasets used by many researchers for AI training contained a skewed view of gender. The collections of images, including one backed by Facebook and Microsoft, taught AI that activities such as shopping and laundry were most commonly associated with women.
These are the very datasets being used to train hundreds of independent AI systems. But with too many non-diverse, white male-centric teams behind them no one notices or does anything about the bias. The teams behind AI don’t need to be evil or act with bad intent, they just need to be blind and ignorant.
Facial recognition systems already perform less well for non-whites leading to more cases of mistaken identity.
Earlier this year, a study by researchers at Stanford and MIT found that three commercially available facial recognition systems from major technology companies showed huge skin and gender biases.
In experiments, the error rates of the three systems in determining the gender of light-skinned men were never worse than 0.8 percent. But for darker-skinned women the error rates soared to more than 20 percent in one case and more than 34 percent in the other two.
The findings highlight the huge dangers to society in how our machine intelligence is currently being trained and built. AI systems get their intelligence by processing huge datasets of past information, looking for patterns and then using those patterns to predict future outcomes. When we train the systems on biased data, they learn the bias.
The major technology company behind one of the systems claimed an accuracy rate of more than 97 percent. But the dataset used to assess its performance was more than 77 percent male and more than 83 percent white. Very simply, this makes it better at differentiating white, male faces than non-white, female faces. And the people who trained it never noticed.
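The arithmetic behind this is worth seeing. The group shares and error rates below are hypothetical, chosen only to illustrate the mechanism, not the actual benchmark composition: when the test set is dominated by one group, a headline accuracy figure can look excellent while one group fails a third of the time.

```python
# Hypothetical per-group error rates on a skewed test set.
groups = {
    # group: (share of test set, error rate)
    "lighter-skinned men":   (0.55, 0.008),
    "lighter-skinned women": (0.28, 0.02),
    "darker-skinned men":    (0.12, 0.06),
    "darker-skinned women":  (0.05, 0.34),
}

# Overall error is the share-weighted average of the group errors, so the
# dominant group drowns out the failing one.
overall_error = sum(share * err for share, err in groups.values())
overall_accuracy = 1 - overall_error

print(f"Overall accuracy: {overall_accuracy:.1%}")        # looks impressive
print(f"Darker-skinned women error rate: {groups['darker-skinned women'][1]:.0%}")
```

An aggregate accuracy near 97 percent and a 34 percent failure rate for one group can coexist comfortably – which is exactly why nobody testing only the headline number would notice.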
Whilst guessing gender incorrectly from a face pic might not sound like a disaster, it’s the same underlying technology and pattern matching that is being used to identify suspected terrorists in crowds, criminals in street footage and undesirables in nightclub lines. A study by the FBI in 2012 found that commercially available algorithms consistently had lower matching accuracies on women and black people.
Cases of mistaken identity caused by faulty facial recognition AI are already well known. They range from the annoying – like Natick resident John H. Gass, who had to endure 10 days of bureaucratic irritation getting his driving licence reinstated after he was banned because a computer incorrectly matched his face – to the life-changing. American law enforcement departments are adopting facial recognition systems at a rapid rate despite the negative effects on people of colour.
These systems literally have the power to lock people up
Mistaken identity isn’t the only problem. AI is being used today to predict the likelihood of someone committing a crime. Yes, you read that right – if you’ve watched Tom Cruise in Minority Report, it’s chilling: we’re already getting into that territory.
Risk assessments are a key part of the criminal justice system and they influence everything from sentencing to whether someone will get parole. These algorithms, increasingly powered by complex AI systems, have the power to remove someone’s freedom. You don’t get much more powerful than that.
In 2016, ProPublica assessed one of the commercially available risk prediction tools made by Northpointe, Inc. to learn about the underlying accuracy of their algorithm and to test whether it was biased against certain groups.
After studying over 10,000 criminal defendants they found that black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk.
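The kind of check ProPublica ran can be sketched in a few lines. The counts below are hypothetical round numbers, not ProPublica’s actual figures – they simply show how comparing false positive and false negative rates across groups exposes this pattern of bias.

```python
# Hypothetical outcome counts for a risk-scoring tool, per group:
# (flagged high risk but did NOT reoffend, total who did not reoffend,
#  flagged low risk but DID reoffend, total who did reoffend)
defendants = {
    "black": (45, 100, 28, 100),
    "white": (23, 100, 48, 100),
}

rates = {}
for group, (fp, negatives, fn, positives) in defendants.items():
    rates[group] = {
        "fpr": fp / negatives,  # wrongly labelled high risk
        "fnr": fn / positives,  # wrongly labelled low risk
    }
    print(f"{group}: FPR {rates[group]['fpr']:.0%}, FNR {rates[group]['fnr']:.0%}")
```

With numbers shaped like these, black defendants are far more likely to be wrongly flagged as high risk, while white defendants are more likely to be wrongly flagged as low risk – the same disparity ProPublica reported, invisible if you only look at overall accuracy.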
Artificial Intelligence has the potential to improve our lives in so many ways through better healthcare, better public services and better products. But it’s just a tool and whether it is used for good or bad depends exactly on who is holding it, who is setting the goals and who is training it.
We’re already giving AI power over our everyday lives. If we do not urgently fix the diversity problems in our organisations and teams we’re going to see travesties of social justice on an unprecedented scale. The time for talking about diversity and being happy enough with some new initiatives is over.
Now it’s time to work together and make real change with real, effective action.
Joy Buolamwini is a researcher at the MIT Media Lab. In this TEDx talk, she discusses several moments throughout her career when facial recognition software didn’t notice she was even there.
“The demo worked on everybody until it got to me, and you can probably guess it. It couldn’t detect my face,” she said.
Joy thinks that facial recognition software has problems recognising black faces because the code is usually written by white engineers who use pre-existing code libraries, typically written by other white engineers.
As the coder constructs the algorithms, they focus on facial features that may be more visible in one race, but not another. These considerations can stem from previous research on facial recognition techniques and practices, which may have its own biases, or the engineer’s own experiences and understanding. The code that results is geared to focus on white faces, and mostly tested on white subjects.
- Inside Facebook’s fight to beat Google and dominate in AI – Sam Shead in WIRED, November 2018
- When the Robot Doesn’t See Dark Skin – Joy Buolamwini in the New York Times, June 2018
- How white engineers built racist code – and why it’s dangerous for black people – Ali Breland in The Guardian, December 2017
- Why diverse teams are smarter – David Rock & Heidi Grant in Harvard Business Review, November 2016
- Diverse teams feel less comfortable and that’s why they perform better – David Rock, Heidi Grant and Jacqui Grey in Harvard Business Review, September 2016
- Machine Bias – Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner in ProPublica, May 2016