Alex Liss, Head of Data Science and Analytics at Wunderman Thompson, joined the Data Science for All Community on Thursday, February 4th, to share how we can use AI to combat bias.


Wunderman Thompson is a creative marketing and technology transformation agency, helping brands grow at the speed of culture by uniting creativity, data, and technology. In his role, Alex leads initiatives in digital platform enablement, customer journey and customer experience orchestration, and data science diversity and inclusion efforts for Wunderman Thompson’s Midwest region.

In addition to his role at Wunderman Thompson, Alex is one of the 185 Data Science for All Empowerment Mentors who have volunteered over 3,300 hours helping DS4A Fellows apply the skills they have learned in the classroom to real-world data science problems. As the first iteration of DS4A/Empowerment approaches its Grand Finale on March 12th, the DS4A Community sat down with Alex to learn how AI can combat bias and how data science and analytics skills can be applied to a career in the media and advertising industry.

Read on for a recap of our conversation, including how Wunderman Thompson is leveraging cutting-edge AI language generation models to create more equitable and inclusive media. You can also watch the full recording below to learn more about Alex’s career journey and his experience as a DS4A Mentor.

In the past few years, we’ve been reading a lot about how AI can amplify or reproduce bias.  In your job, you're actually using AI to detect and correct bias. Without going into any proprietary details, can you explain AI-driven bias detection? 

Since GPT-3 (Generative Pre-trained Transformer 3) was released last year, the most advanced language generation model to date has become widely available. That made it a good opportunity for Wunderman Thompson to evaluate media with language generation models and see what they can do to help us understand bias.

Our approach uses GPT-3 to consume advertising and language from many different industries and many different brands, one ad or one message at a time. GPT-3 then tells us what kind of person the advertisement seems to be meant for and suggests alternative language that can be more inclusive of everyone in the target market.
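To make that workflow concrete, here is a minimal sketch of what one such audit could look like. This is not Wunderman Thompson’s proprietary pipeline: the prompt wording, the davinci engine choice, and the audit_ad_copy helper are all illustrative assumptions, built on the OpenAI Python library that exposed GPT-3 at the time.

```python
# A minimal, illustrative sketch. Not Wunderman Thompson's proprietary
# pipeline; assumes the (pre-1.0) openai package and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the reader

def audit_ad_copy(ad_text: str) -> str:
    """Ask GPT-3 who an ad seems written for, plus a more inclusive
    rewrite. The prompt wording is a guess, not a production prompt."""
    prompt = (
        "Advertisement:\n"
        f"{ad_text}\n\n"
        "1. Describe the kind of person this advertisement seems to be "
        "written for.\n"
        "2. Suggest alternative wording that would be more inclusive of "
        "the full target market.\n"
    )
    response = openai.Completion.create(
        engine="davinci",   # illustrative engine choice
        prompt=prompt,
        max_tokens=200,
        temperature=0.3,    # keep the audit relatively deterministic
    )
    return response.choices[0].text.strip()

print(audit_ad_copy("Engineered for the man who demands performance."))
```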

So far, we’ve been able to identify some interesting patterns of bias that speak to underlying contexts. For example, GPT-3 tends to flag a lot of advertisements as being directed at males. For advertisers, this is an important dimension to consider: many products are designed for a more female audience, and this male bias in language may have an adverse impact on the success of an advertising campaign.
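As an illustration of how per-ad flags roll up into that kind of pattern, a simple aggregation over hypothetical labels (the brands and labels below are made up) can surface a corpus-level male skew:

```python
# Illustrative only: once each ad carries an inferred-audience label
# (e.g. from an audit like the one sketched above), simple aggregation
# reveals skew across the corpus.
import pandas as pd

# Hypothetical per-ad labels; real labels would come from the model.
ads = pd.DataFrame({
    "brand": ["A", "A", "B", "B", "C", "C"],
    "inferred_audience": ["male", "male", "male",
                          "female", "male", "neutral"],
})

# Share of ads per inferred audience; a male skew like this one is
# the kind of pattern worth raising with an advertiser.
print(ads["inferred_audience"].value_counts(normalize=True))
```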

Following these analyses, we work with a human creative partner to offer solutions that make the content more inclusive, or better suited to the advertiser’s audience.

As data scientists, why is it essential to understand bias, and how can we learn more about the problem?

There's been a lot of scholarship within the AI field around, for example, the risks of facial recognition technology misidentifying or misrepresenting different groups. What we’ve found is that AI, and language generation models in particular, hold up a kind of funhouse mirror to our own culture and force us to recognize how prevalent and pervasive bias is within our society.

I recommend reading the books Algorithms of Oppression by Safiya Noble, Invisible Women by Caroline Criado-Perez, and Weapons of Math Destruction by Cathy O’Neil. I also recommend the film Coded Bias, which exposes the discrimination within facial recognition algorithms that are now prevalent across all spheres of daily life. At Wunderman Thompson, Director of Data Science Ilinca Barson recently published a report highlighting gender-based disparities in Google’s Cloud Vision tool, finding that the algorithm made biased assumptions about mask wearers depending on the wearer’s gender. All of these draw attention to the risks within AI and, more generally, to the misuse of data to magnify gender and racial inequality.

As you mentioned, there is a lot of scholarship outlining the imperative to create more ethical AI: AI that is more transparent, less biased, and more inclusive of all people. Can you help us connect the dots between academia and industry? How can ethical or unbiased AI be good for business?

It's a tough question because the status quo we live in is not fair or balanced.  It’s difficult to point to examples where entire companies are suffering due to bias because companies have been built to thrive in a world with built-in biases. 

Over the past few years, we’ve made progress as a collective society in recognizing and correcting some of the institutional biases our society is built upon. As society becomes more equitable, companies will serve themselves well by reflecting this change, and AI is a powerful tool for identifying areas for improvement.

Wunderman Thompson has a history of showing the positive impact that good AI can make. A few years ago, we created a program called #WECOUNTERHATE, a custom language model designed to identify hate speech. Hate speech often exists in coded forms, and if you’re not familiar with the codes, you may unknowingly say or retweet racist or hateful things. #WECOUNTERHATE alerts users that their content contains coded hate speech before they share or post it. Since its launch, #WECOUNTERHATE has eliminated over 20 million impressions of hate speech.
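The #WECOUNTERHATE model itself is custom and not public, but the alert-before-posting flow Alex describes can be sketched generically. In this hypothetical sketch, toy_classifier is a trivial deny-list stand-in for the real model:

```python
# Illustrative stand-in only: #WECOUNTERHATE uses a custom language
# model that is not public. This sketch shows just the alert flow.
from typing import Callable

def pre_post_check(text: str, is_coded_hate: Callable[[str], bool]) -> bool:
    """Return True if posting may proceed; warn the user otherwise."""
    if is_coded_hate(text):
        print("Warning: this content appears to contain coded hate "
              "speech. Please review it before sharing.")
        return False
    return True

# Toy classifier standing in for the real model: a deny-list lookup
# over hypothetical placeholder terms.
CODED_TERMS = {"example_coded_term"}

def toy_classifier(text: str) -> bool:
    return any(term in text.lower() for term in CODED_TERMS)

pre_post_check("Retweeting an example_coded_term post", toy_classifier)
```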

How can individual contributors, especially individual data scientists, spot bias and contribute to a more ethical and equitable future of AI?

If you want to get more hands-on than just reading a book, there are a lot of organizations out there, like the Algorithmic Justice League, which was heavily involved in the science and scholarship behind the film Coded Bias. I recommend volunteering with or donating to organizations like these. If you are a more experienced data scientist, I also recommend becoming a code mentor. In this capacity, experienced data scientists can lead projects that draw attention to the potential risks of AI magnifying bias and inequality, teaching more junior data professionals new skills and the importance of ethical AI along the way.

Watch below to learn more about Alex’s day-to-day role and responsibilities, his experience as a DS4A Mentor, and his answers to questions from the audience.


About Alex:

Alex’s vision is to use data science to make a positive impact on the human condition. He holds a bachelor’s degree in Japanese Language and Literature and an MBA in Business Analytics from NYU Stern. In his spare time, he enjoys mentoring and giving back, sharing his skills in evaluation and communication with the broader community.


If you are a data scientist or data-driven leader, please consider applying to be a mentor for DS4A Empowerment #2, which kicks off on April 17, 2021.  You can submit your application here.  

Publish date: March 2, 2021