A different kind of Muppet caper: How AI can embed – and spread – human bias

If your social media feed is anything like mine, the last few weeks have seen a flurry of funny or lighthearted AI-generated images: from real-world photos transformed into playful Japanese anime landscapes in the style of Studio Ghibli, to Canadian Prime Ministers as '80s rockstars. I’ve enjoyed them as much as the next person. Then, this week, one set of AI-created images made me feel a little queasy, and it made me step back and realize that I should be more careful and intentional in my thinking around AI.

The culprit? Muppets? (Muppets, really??) Strangely, yes. My sister sent me a link to a project by a Canadian historian who used AI to generate images of all the Canadian provinces and territories as Muppets. As I scrolled through the images, I’ll admit I had a good chuckle, but as I neared the provinces and territories I’m closest to, I could feel my body getting a little tense: what would algorithms have taught the image generator a Newfoundlander or Labradorian looked like? Would the Muppet be sitting on an overturned dory and wearing a sou’wester? Or, worse still, would the Nunavut Muppet be a problematic take on an Inuk? Thankfully, neither was true, but it got me thinking about AI: how it relies on the frequency of patterns in its data, and how, given that we live in a bias-filled world, unchecked and unfiltered use of AI can perpetuate harmful stereotypes.

So, I did a little digging.

According to SAS, “AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.” Bias can occur in AI because it is humans who: 1) choose what data the algorithms draw from; 2) decide which questions to ask of the data; 3) decide which AI results are worth listening to; and 4) decide how those results are used. As a result, without the proper checks and balances, it is easy for us humans to embed our unconscious biases in this process.

This has the potential for real harm. For example, research has shown that facial recognition software does a much poorer job of correctly identifying the faces of racialized people and women (which has led to wrongful arrests). In addition, mortgage algorithms in the United States have consistently led to higher interest rates for African American and Latinx borrowers; those “homebuyers pay up to half a billion dollars more in interest every year than white borrowers with comparable credit scores.” Bias perpetuated through AI can also have massive implications for organizations, potentially leading to substantial fines, legal action, and/or severe damage to their reputations.
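To make the mechanics a little more concrete, here is a tiny, entirely hypothetical sketch of how bias gets baked in: a system that simply learns from the frequency of patterns in historical data will faithfully reproduce whatever skew that data contains. The data, group labels, and naive “model” below are invented for illustration; they are not from any real system.

```python
# Hypothetical sketch: a frequency-based "model" inherits the skew
# in its training data. All data and group labels are invented.

# Hypothetical historical loan decisions: (group, was_approved)
historical_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learned_approval_rate(group):
    """The naive 'model': the approval frequency observed for a group."""
    outcomes = [approved for g, approved in historical_data if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(f"{group}: learned approval rate = {learned_approval_rate(group):.0%}")
# group_a: learned approval rate = 75%
# group_b: learned approval rate = 25%
# If the historical decisions were biased, so are the "predictions."
```

And if the people who collected the data, framed the questions, and chose which results to act on all shared the same blind spot, nothing in that pipeline would ever flag it.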

Luckily, there are a number of ways that organizations can help mitigate possible bias in their (our) use of AI:

  • Being careful and thoughtful in the ways we put AI models into use (and educating the employees who work with the data about what responsible AI is)

  • Hiring more diversely and creating – and maintaining – an atmosphere where employees feel welcome to speak up when they see bias

  • Establishing a grievance process for when employees – or clients – feel they have been treated unfairly

  • Embracing “humble AI.” That is, using AI models that “demonstrate humility when making predictions, so they don’t drift into the biased territory.”

  • Testing AI programs to identify potential bias (a small sketch of one such check follows this list)
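To give a sense of what that last bullet can look like in practice, here is a small, hypothetical sketch of one common check: comparing a model’s rate of positive outcomes across groups (often called demographic parity). The predictions, group names, and the 20% warning threshold are illustrative assumptions, not a prescription; real audits use richer metrics and dedicated fairness tooling.

```python
# Hypothetical bias test: compare positive-outcome rates across groups.
# Predictions, group names, and the threshold are illustrative assumptions.

def positive_rate(predictions):
    """Fraction of predictions that are positive (e.g., 'approve')."""
    return sum(predictions) / len(predictions)

# Hypothetical model outputs for applicants from two groups
predictions_by_group = {
    "group_a": [True, True, True, False, True],     # 80% positive
    "group_b": [True, False, False, False, False],  # 20% positive
}

rates = {group: positive_rate(preds) for group, preds in predictions_by_group.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(f"positive rates by group: {rates}")
print(f"demographic parity gap: {parity_gap:.0%}")

# Flag large disparities for human review (the right threshold is context-dependent)
if parity_gap > 0.20:
    print("Warning: large gap between groups -- investigate before deployment.")
```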

So, as much as AI may save us time, responsible use of it also requires us to slow down, think and act carefully, and ensure that our use of AI isn’t leading us to unintentionally uphold harmful beliefs and systems that we actually want to change.

(Geesh, how did I get here from Muppets? :)

By Willow J. Anderson
