Creative Innovation 2019 (Ci2019)

Now is the time to act to stop bias in AI

Sunday, 8 April 2018

 

Will Byrne
Fast Company

As decisions made by algorithms come to control more and more aspects of modern life, we need to act swiftly to make sure those decisions are actually fair. As of right now, they’re often not.

The conventional wisdom, often peddled by Silicon Valley, is that when it comes to bias in decision-making, artificial intelligence is the great equalizer. On its face it makes sense: if we delegate complex decisions to AI, it becomes all about the math, cold calculations uncolored by the bias or prejudices we may hold as people.

As we’ve entered the infancy of the AI age, the fallacy in this thinking has revealed itself in some spectacular ways. Google’s first generation of visual AI identified images of people of African descent as gorillas. Voice command software in cars struggled to understand women while working just fine for men, putting women’s safety in jeopardy. During the 2016 presidential election, Facebook’s algorithms spread fear-stoking lies to its most vulnerable users, allowing a foreign power to meaningfully swing the election of the most powerful office in the world. Even efforts to make AI palatable for consumers have revealed bias. As technologist Kriti Sharma has pointed out, the first wave of virtual assistants reinforced sexist gender roles: the assistants that execute basic tasks (Apple’s Siri, Amazon’s Alexa) have female voices, while more sophisticated problem-solving bots (IBM’s Watson, Salesforce’s Einstein) have male ones.

As with any new technology, artificial intelligence reflects the bias of its creators. Societal bias–attributing distinct traits to individuals or groups without any data to back it up–is a stubborn problem that has stymied humans since the dawn of civilization. Our introduction of synthetic intelligence may be making it worse.

THE AGE OF AI IS ALREADY HERE
Still, in a time of economic, political, and ecological upheaval, the whims of a nascent technology may seem an esoteric concern best left to the technologists. So why should we care? First, AI isn’t just powering voice assistants or recommending binge-worthy Netflix shows–it’s deciding people’s livelihoods. Second, the technology doesn’t just fail to fix bias; in some cases it compounds it, all while wrapped in the comforting veneer of “accuracy.” Third, the technology is moving into a new level of sophistication that makes rooting out bias even harder.

Many applications of AI, including those with the most consequential impact on people, are barely on the public’s radar. What’s more, even for the people affected by these use cases, the footprint of machine intelligence on critical decisions is often invisible, humming quietly beneath the surface. Artificial intelligence is already driving decision-making across a long list of everyday domains: loan-worthiness, emergency response, medical diagnosis, job candidate selection, parole determination, criminal punishment, and educator performance.

With this list in hand, understanding algorithmic bias and solving for it becomes a more pressing issue. If AI is going to be the interface between people and the critical services they need, how is it going to be fair and inclusive? How is it going to engage and support the marginalized and most vulnerable people in our society?

WHAT AI DOESN’T KNOW
One of the most popular applications of machine learning is natural language processing, or NLP. This is what allows human language to be “understood” by computers – it powers Siri, but it’s also increasingly how many companies, nonprofits and governments assess people’s needs and deliver services to meet them.
When people speak in dialects, major problems emerge. A recent study out of the University of Massachusetts measured how some of the latest NLP algorithms register different dialects and ways of speaking; the tools’ effectiveness dropped precipitously. In the case of African-American Vernacular English, spoken by many millions of Americans, the tools most often failed to identify the language as English at all, identifying it more often as Norwegian.

With emergency response, loan worthiness, and job selection operating on NLP systems, this type of gap means that whole groups of people can be excluded completely from critical services.
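
Auditing for this kind of gap is conceptually simple: slice a labeled evaluation set by dialect and compare accuracy across the slices. A rough sketch in Python, where detect_language() is a stand-in placeholder rather than the study’s actual tool, and the two samples are invented:

    from collections import defaultdict

    def detect_language(text):
        # Placeholder model; a real audit would call the NLP system under test here.
        return "en" if "the" in text.lower() else "unknown"

    samples = [
        # (dialect, text, true_language) -- invented examples for illustration
        ("standard_american_english", "The meeting starts at noon.", "en"),
        ("african_american_vernacular_english", "He been working late most nights.", "en"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for dialect, text, true_lang in samples:
        total[dialect] += 1
        if detect_language(text) == true_lang:
            correct[dialect] += 1

    for dialect in total:
        print(dialect, correct[dialect] / total[dialect])

If accuracy on one dialect slice falls far below the others, the system is effectively unavailable to the people who speak that way.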

Imagine: an emergency management system fails to alert human responders to a crisis faced by an African-American community because the AI underpinning the system can’t process the language of those in need the way a human could. To avoid such cases, NLP systems will need to be trained on far more inclusive data.

The criminal justice system is a realm where no imagination is necessary: AI is making and breaking lives, now. In a stunning exposé, ProPublica recently reported that courts across the United States are using AI to predict the likelihood of future crimes during sentencing, and that it is biased against African-Americans. The tool was shown to falsely predict future criminality among black people at twice the rate it did for white people, while doing the reverse for white people, underestimating their future crimes. The tool–which uses 137 questions, like “was one of your parents ever sent to prison?”–is among the most widely used in risk assessments on future crime. The company that created the software, Northpointe, has refused to reveal the mechanics of the algorithm, citing its proprietary business value.
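
ProPublica’s central finding is a claim about error rates differing by group, specifically the false positive rate: the share of people who did not reoffend but were still flagged as high risk. A minimal sketch of that calculation, using invented records rather than the data ProPublica actually analyzed:

    # Illustrative only: what "twice the false positive rate" means in practice.
    records = [
        # (group, predicted_high_risk, reoffended) -- invented rows
        ("black", True, False),
        ("black", True, True),
        ("black", True, False),
        ("black", False, False),
        ("white", True, False),
        ("white", False, False),
        ("white", False, True),
        ("white", False, False),
    ]

    def false_positive_rate(rows):
        # Among people who did NOT reoffend, what share were flagged high risk?
        non_reoffenders = [r for r in rows if not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

    for group in ("black", "white"):
        group_rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(group_rows), 2))

In these made-up rows the rate for one group is double the other, which is the shape of the disparity the reporting describes.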

Another issue is AI’s ability to deepen and even validate bias we already hold. Machine learning is designed to anticipate the result human users expect and then deliver it. This isn’t always good for us.

Filter bubbles–the state of intellectual isolation created by algorithms that present only the perspectives and content they believe we will like–offer a good example. Most visible on social media, they have strained the social fabric and polarized the political realm. The fallout has been so intense that Mark Zuckerberg recently went against the wishes of Facebook’s investors, changing its algorithms to facilitate “deeper more meaningful exchange.” He even apologized “for the ways my work was used to divide people rather than bring us together.”
But this reinforcement effect isn’t limited to social media, and it gets more destructive when the process plays out in higher-stakes scenarios, like when an employer is seeking job candidates. Danny Guillory, the head of Global Diversity and Inclusion at Autodesk, came upon the issue while working in the recruiting industry. He offers this example: if you run a search on a professional social network for software engineers, you are most likely to see a first page of results consisting exclusively of Caucasian men. As you engage with the profiles of these candidates and request more, the AI will deliver candidates with attributes similar to the first wave, very likely resulting in more white men. The system will never deliver results that don’t conform to what it believes the user expects. Through this process, whole groups can be systematically eliminated.
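
The dynamic Guillory describes is a feedback loop in similarity-based ranking: each “show me more” request is scored against the profiles the user has already engaged with, so the pool narrows toward the first wave. A toy sketch of that loop, with invented profiles and a deliberately crude similarity measure:

    # Toy model of "show me more candidates like these" (all data invented).
    candidates = [
        {"name": "A", "school": "X", "gender": "m"},
        {"name": "B", "school": "X", "gender": "m"},
        {"name": "C", "school": "Y", "gender": "f"},
        {"name": "D", "school": "Z", "gender": "f"},
    ]

    def similarity(a, b):
        # Crude similarity: count shared attributes.
        return sum(a[k] == b[k] for k in ("school", "gender"))

    def more_like(engaged, pool, k=2):
        # Rank the remaining pool by similarity to profiles already clicked on,
        # so each request narrows results toward the first wave.
        ranked = sorted(pool, key=lambda c: max(similarity(c, e) for e in engaged), reverse=True)
        return ranked[:k]

    engaged = [candidates[0]]                  # the user clicks one profile on page one
    print(more_like(engaged, candidates[1:]))  # the most similar profiles come back first

Nothing in the loop ever surfaces candidates unlike the ones already clicked, which is how whole groups quietly disappear from view.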

Worse, the machine-driven experience creates a false sense that the search process is holistic and fair. As Cathy O’Neil, a leading voice on algorithmic equality, put it to MIT Technology Review, “[Algorithms] replace human processes, but they’re not held to the same standards. People trust them too much.”

OPENING UP THE BLACK BOX
The idea that AI could function unaffected by bias reflects a misunderstanding of how the technology works. All machine intelligence is built upon training data that was – at some point – created by people. Recently, Microsoft introduced “Tay.ai,” a conversational chatbot that would use live interactions on Twitter to get ‘smarter’ in real time. While Tay proved an impressively advanced conversationalist, after 24 hours on Twitter it was also horribly racist and misogynist.

AI is only as effective as the data it is trained on. In Tay’s case, the machine intelligence accurately reflected the prejudices of the people it drew its training from. Machines place no value judgment on the data correlations they are given. It’s just math to them.

Flaws in most AI systems aren’t easy to fix, in large part because they are black boxes: the data goes in and the answer comes out without any explanation for the decision. Compounding the issue is that the most advanced systems are jealously guarded by the firms that create them. This not only poses a challenge in determining where bias creeps in, it makes it impossible for the person denied parole or the teacher labeled a low performer to appeal, because they have no way of understanding how a decision was reached.

What’s more, the window for attacking algorithmic bias may be closing. Recent advances in the technology–deep learning, reinforcement learning, and artificial neural networks–are such that even its designers struggle to trace the logic of how AI knows what it knows. This wave of AI is no longer just taking in data, consulting its training, and producing an output; it is creating its own new correlations, much like the human brain, in order to make a decision. As this generation of AI takes flight, human bias rooted way back at the start of the training process may be too deeply embedded to fix. This only increases the urgency of the moment.
So what do we do?

A first step is opening up the black box – creating transparency standards, open-sourcing code, and making AI less inscrutable. Some pioneers are already hard at work. AI Now, a nonprofit advocating for algorithmic fairness, has proposed a simple principle: when it comes to services for people, if designers can’t explain an algorithm’s decision, it shouldn’t be used. Some in the public sector clearly agree. In December, New York’s city council and mayor passed a bill calling for transparency from all AI used across their vast array of city services, legislation prompted by the ProPublica report on racial bias in criminal sentencing.

New rules are on the way at the international level too, with the EU poised to release and enforce new transparency standards under its “General Data Protection Regulation” this spring.

Others are attacking the problem from within the technology and data science community. OpenAI is a nonprofit creating leading-edge AI systems and open-sourcing the code to the world. A new field called explainable AI has taken root, focused on creating the AI systems of the future that can explain the reasoning behind their decisions to human users. It’s inevitable that AI will become increasingly inscrutable (it’s already mimicking neural processes of the human brain, after all). But if we lose our ability to parse how machines are arriving at decisions, then we aren’t really in the driver’s seat.
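
What counts as an “explanation” can be quite modest. Even a simple linear score becomes more accountable if it reports how much each factor contributed to the decision, which is roughly the spirit of explainable AI. A minimal sketch, with invented weights and factors:

    # Minimal sketch of an "explainable" decision: a linear score that reports
    # each factor's contribution alongside the answer (weights and factors invented).
    weights = {"prior_convictions": 0.6, "age": -0.02, "employed": -0.4}

    def score_with_explanation(person):
        contributions = {k: weights[k] * person[k] for k in weights}
        return sum(contributions.values()), contributions

    total, reasons = score_with_explanation({"prior_convictions": 2, "age": 30, "employed": 1})
    print("risk score:", round(total, 2))
    for factor, value in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {factor}: {value:+.2f}")

A person denied parole by a system like this could at least see which factors drove the score, something today’s black-box tools do not offer.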

Most important will be achieving diversity of backgrounds in teams designing and architecting AI systems, across race, gender, culture, and socioeconomic background. Anyone paying attention knows about the diversity challenges of the tech sector. Given the clear bias problem and AI’s trajectory to touch all parts of our lives, there’s no more critical place in tech to attack the diversity problem than in AI.

COMPLEX PROBLEMS REQUIRE COMPLEX TEAMS
Currently, AI is a rarefied field, exclusive to Ph.D. technologists and mathematicians. Teams with more diverse backgrounds will by nature raise questions, illuminate blind spots, and check assumptions to ensure such powerful tools are built upon a spectrum of perspectives.
Variety in expertise will also be key in creating machine learning that effectively serves everyone. Engineers and mathematicians aren’t monsters; they’re simply unequipped to build systems that solve problems with cultural or social dimensions. Sociologists, ethicists, psychologists, and humanities experts will need to join the ranks to build systems that can effectively solve for problems of such complexity.

Diverse teams will be better equipped for another crucial step: removing bias from the training data that feeds machine intelligence. This means intentional screening of data to remove biased or limiting correlations (man = office, woman = kitchen, for instance). Teams will need to ensure fair and equal representation in training data across all dimensions of diversity: racial, cultural, gender, linguistic, and more.
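
One concrete form this screening takes is checking the word associations a model has learned before it ships. A minimal sketch using cosine similarity over toy vectors; the numbers are invented, and a real audit would use trained embeddings such as word2vec or GloVe:

    import math

    # Toy word vectors (values invented for illustration).
    vectors = {
        "man":     [0.9, 0.1, 0.3],
        "woman":   [0.1, 0.9, 0.3],
        "office":  [0.8, 0.2, 0.4],
        "kitchen": [0.2, 0.8, 0.4],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    # A positive gap means the word sits closer to "man" than to "woman";
    # a large gap on occupation words is the stereotype described above.
    for word in ("office", "kitchen"):
        gap = cosine(vectors[word], vectors["man"]) - cosine(vectors[word], vectors["woman"])
        print(word, round(gap, 3))

Flagging and correcting skews like this before training data feeds a hiring or lending system is exactly the kind of screening the paragraph above calls for.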

Some have even advocated the creation of separate algorithms for use with different groups. A look back at the problem of job candidate software is instructive. As a result of societal bias and lack of equal opportunity, the predictors of successful female engineers and those of successful male engineers are simply not the same. Creating different algorithms with unique training corpora for different groups, the argument goes, leaves a minority group less disadvantaged. It’s a controversial idea–akin to digital affirmative action–but it’s an area where AI systems may actually be able to correct for structural bias that can be all but invisible.
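
In code, the idea reduces to fitting and routing group-specific models rather than one global one. A toy sketch, with invented groups, features, and weights:

    # Toy illustration of group-specific scoring: each group gets a model fit on
    # its own historical data, so one group's patterns don't define "success"
    # for another. Groups, features, and weights are invented for illustration.
    models = {
        "group_a": lambda c: 0.7 * c["open_source_commits"] + 0.3 * c["referrals"],
        "group_b": lambda c: 0.4 * c["open_source_commits"] + 0.6 * c["referrals"],
    }

    def score(candidate):
        # Route each candidate to the model trained for their group.
        return models[candidate["group"]](candidate)

    print(score({"group": "group_a", "open_source_commits": 12, "referrals": 2}))
    print(score({"group": "group_b", "open_source_commits": 12, "referrals": 2}))

The same résumé can score differently depending on which group’s historical patterns it is judged against, which is both the appeal and the controversy of the approach.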

With the U.S. government inactive on the issue, much of the onus for action will fall to business. Tech firms are more likely to act than one might think, if not out of ethical concern, then out of focus on the bottom line. The recent record-breaking box-office opening of Black Panther, an all-black superhero film, drives the point home: when you exclude or fail to serve under-represented groups, you leave massive profits on the table. The same is true in AI.

Take the financial services sector, one of AI’s first fields of adoption. According to Credit Suisse, women now hold 39% of the United States’ investable assets, controlling over $11 trillion. In fact, women have been shown in longitudinal studies to be more successful investors than men. Despite all this, a recent survey by the Boston Consulting Group found that, of all industries, financial services leaves women the most dissatisfied. The report concluded that women tend to view finance through a different lens than men–rather than accumulating wealth for its own sake, they view assets as a way to care for their family and ensure security.

Ensuring that AI systems in fintech work better for women than those in-car voice command systems did is a clear business imperative. One new app, called Joy, uses psychological and personality assessments to offer AI-driven financial coaching. It has been so popular among women that the company pivoted to focus exclusively on women as its clients.

SEIZING THE OPPORTUNITY
In a time when nuclear war is a real possibility and democratic institutions are under assault around the world, bias in AI can seem like a luxury concern. But AI is already affecting lives in profound ways, bias has appeared across just about every AI use case, and our opportunity to fix it may be fleeting.

The start of the AI age has had an unexpected effect, casting the prejudices in our society in sharp relief. It has acted as a sort of unsparing digital mirror, revealing difficult truths about us, while also endowing us with new abilities to address them.

And so we’re at a crossroads: will we seize this opportunity posed by AI to advance progress or will we embrace the fallacy of “accuracy,” only to see AI perpetuate and compound our worst instincts as a society? Machine intelligence will soon seep into all corners of our lives, becoming less visible in the process, and AI’s bias bug will get harder to beat. Our time to act is now.

Read the full article

"Without a doubt the best conference in Australia with a delightful mix of creativity and big thinking bringing together an eclectic group of industry, community and education leaders. Once again, I’ve come away enthused by new ideas, more aware of future trends, and heartened by the kindness of humanity. We all have a role to play and it should be a collective one." Kerry Anderson, Founder of Operation Next Gen

"Conferences are boring. Creative Innovation is more like a rodeo for ideas, and you are guaranteed to leave wanting to make the world better for everyone." Andrew Despi, A.kin

"Kudos to Tania for organising the the 2019 CI Conference in Melbourne - it is the best I have attended so far. All the speakers and sessions were very well-planned. I have attended many conferences and this is the first I did not have the 'luxury' to skip any sessions. Your effort in this space has truly made Melbourne a global innovation and entrepreneurship hub." Danny Ong, Deakin University

"I wanted to pass along how incredible I thought Creative Innovation was and how much your passion for life and for humanity was inspiring to all of us. It was a privilege to be included and one that I am very grateful for." John Pickering, Chief Behavioural Scientist

"Creative Innovation is a transformational platform built by changemakers for changemakers. It is an educational adventure showcasing the forefront of ideas internationally." Melissa Warner, Ci2019 Scholarship Winner & Education Officer, Mind Medicine Australia

"Creative Innovation 2019 was one of the most well-organised events I’ve ever taken part in. From beginning to the end, every detail had been planned through and taken care of. Secondly, are the production values. As a result, it was possible to attend all the sessions without suffering conference burn-out, happy in the knowledge that the next piece of content was guaranteed to delight and challenge the senses." Richard Claydon (Singapore)

"Well done on another successful Ci conference. Your energy, passion and sheer determination to ensure the conference delegates get a fantastic experience is admirable." VP People & Global Capability | P&GC, Woodside Energy

Ci2019 PARTNERSHIP OPPORTUNITIES

There are a host of incredible opportunities to partner with us on this world-class innovation event. If you are interested in becoming a partner or creative collaborator for the upcoming conference, please contact:

Tania de Jong AM // Founder and Executive Producer
Tel: +61 (0)3 8679 6000
Email: Tania@creativeuniverse.com.au

Alrick Pagnon
Tel: +61 (0)3 8679 6000
Email: Alrick@creativeuniverse.com.au

WATCH MORE THAN 300 VIDEOS AT CiTV ►
“Remarkable, mind blowing, brilliantly choreographed – easily the best conference ever and I’ve been to plenty. Truly wonderful and amazing.”

Jim Grant, Partner, Dattner Grant

“Thank you for a challenging and rewarding couple of days. The quality of the team you had at the conference was extraordinary! These events will change people and thereby the world.”

Craig Carolan, Director Private Wealth, ANZ

“I came to learn – I came away inspired! Best conference ever.”

Paul Duldig, University of Melbourne