December 4, 2019

AI IRL Podcast Episode 41: 3 Questions to Avoid Bias in 2020


Subscribe via iTunes, Spotify and more.

It’s that time of year.

Not for holiday decorations and good cheer — although that’s allowed, too — but I’m talking about getting ready for 2020.

By checking your AI for biases.

We think we’re getting better insights out of AI, but in reality biases are lurking behind the scenes.

I got to talk with Vasco Pedro, CEO and Co-Founder at Unbabel, about what organizations can do to recognize biases in AI.

“Right now, there’s this expectation that in customer service, 95% of all customer interactions are going to be driven by AI by 2025,” Vasco said.

At that point, you’ll be pretty much unable to differentiate between bots and humans.

“Right now, only 22% of AI professionals on a global scale are female.” —Vasco Pedro

Understanding the origin of bias

Unbabel is a seamless multilingual AI translation platform, and if any field calls for thinking about unconscious bias, translation is it.

In Google Translate, doctors are male and nurses are female, obviously. (Not!)

“It assumes that by default,” Vasco explained. “This example shows how our bias reflects the so-called common sense perception of certain roles in society.”

Systems like these end up perpetuating that bias, instead of driving us toward a more equal society.

Well, the truth is that bots learn from humans.

Remember Tay, the Microsoft bot launched on Twitter? Microsoft had to take it down after a couple of days because it was becoming racist based on human input.

Meanwhile, essentially the same bot in Japan was getting love letters and being nurtured, purely based on how humans treated it and how it reacted in turn.

“Same technology, but just set up in different situations that led to very different outcomes,” Vasco said.

Or remember how an Amazon recruiting tool learned to rank women’s resumes lower than men’s?

The point is that it’s humans, not bots, who have the biases.

So, what now?

“Our bias reflects the so-called common sense perception of certain roles in society.” —Vasco Pedro

Mitigating the risk of bias

We’re already seeing the negative outcomes of not taking a more purposeful or disciplined approach in the way we leverage AI.

  • AI models are derived from data.
  • Data reflects the current world.
  • The current world is biased.

How do we mitigate what AI learns from existing, biased data, let alone move toward more equitable data for it to learn from?

Vasco suggested 3 ways to address the challenge.

#1 Increase the diversity in the team building AI models.

“That tends to increase sensitivity to the issue,” Vasco said. “Right now, only 22% of AI professionals on a global scale are female.”

Build your team specifically for diversity: gender diversity, race diversity, cognitive diversity.

“If there’s not diversity in that team, there’s just going to be more likelihood for that team to not be aware of the bias they’re dealing with,” he said.

#2 Create test systems for AIs.

There are QA systems for software, for databases, for entire companies, right?

We need the same for different kinds of biases. “You can download a plug-in or a library and run it as part of your test,” Vasco suggested.

You’d immediately get feedback on the likelihood of bias in the model.
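
To make that concrete, here’s a minimal sketch of what such a check might look like inside an automated test suite, using the open-source Fairlearn library. The model outputs, the sensitive attribute, and the 0.2 tolerance are illustrative assumptions on our part, not anything Vasco named.

```python
# A hypothetical bias check run as part of a test suite: fail the build
# if the model's selection rate differs too much across groups.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

def test_model_demographic_parity():
    # Stand-in labels, model predictions, and a sensitive attribute.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # model outputs
    gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

    # Gap between the highest and lowest group selection rates;
    # 0.0 means perfect demographic parity.
    gap = demographic_parity_difference(
        y_true, y_pred, sensitive_features=gender
    )

    # Fail immediately if the gap exceeds our chosen tolerance.
    assert gap <= 0.2, f"possible bias: selection-rate gap {gap:.2f}"
```

Run under pytest, a check like this gives exactly the kind of immediate feedback Vasco describes.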

“Building these AI frameworks could significantly help in our ability to reduce the likelihood of bias within the different models,” he said.

But it isn’t happening in the industry yet.

#3 Inspire industry-wide change.

So far, there’s not much realization or acceptance that this bias, in the long term, is bad for everything AI is trying to accomplish.

We need cures from a systemic perspective. “Once you have something like that, it’s encouraging companies to download it and use it,” Vasco said.

“In customer service, 95% of all customer interactions are going to be driven by AI by 2025.” —Vasco Pedro

Questions for 2020

AI models are constantly relearning from human-generated data, so keeping humans in the loop means higher quality in production than pure AI models can deliver.

“Humans have a lot more context to be aware of, and they have that drive towards, ‘Hey, we don’t want to create a world where biases are perpetuated,’” Vasco said.

Three questions to ask as you head into 2020:

  • Do I have the right team in place? You need a diverse team, not just a technical team.
  • Am I putting in the right checks and balances? Being the latest and greatest isn’t as important as being equitable.
  • Can I really trust this data? A good benchmark is testing against a human sample set, like the sketch below.
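
On that last question, here’s a quick sketch of what benchmarking against a human sample set could look like, using scikit-learn’s chance-corrected agreement score. The labels and threshold are made-up assumptions for illustration.

```python
# A hypothetical "trust the data" check: compare model output against
# a small, trusted set of human judgments on the same items.
from sklearn.metrics import cohen_kappa_score

human_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # trusted human annotations
model_labels = [1, 0, 1, 0, 0, 0, 1, 1]  # model output on the same items

# Cohen's kappa corrects raw agreement for chance; 1.0 is perfect
# agreement, 0.0 is no better than chance.
kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Human-model agreement (kappa): {kappa:.2f}")
```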

So, if bots and human workers really will be acting pretty much the same by 2025, let’s have AI augment the capacity of humans, with humans adding value in the situations that aren’t clear-cut.

For example, right now you tell a bot, “My flight just got canceled and I have to get to my best friend’s wedding.”

The bot will say, “Okay, let me see what I can do.”

But a human will say, “Oh, no, I’m so sorry, let me see what I can do.”

“It’s the understanding and the nuance of what is the current situation the person is going through — and how you empathize to create a solution that makes me feel confident,” Vasco said.

We’re not just aiming to scrub out bias; we’re aiming for models that capture intent and context in a much more subtle way.

The standard is high — and the deadline is 2025.

This discussion with Vasco Pedro was taken from our AI: In Real Life podcast. If you want to hear more AI episodes like this one, check us out on Apple Podcasts.

