The Most Important Trend You Couldn’t See at CES


Kieley Taylor, Global Head of Social at GroupM


AI’s unconscious bias is hard to identify, but important for us to discuss before "smart" devices become dumb or harmful for some of us. 


“Smart” innovations were everywhere at CES in 2020. If you journeyed through Eureka Park, you may have caught a glimpse of a space-age home security system from Sunflower Labs, complete with computer vision, threat detection, and autonomous video streaming via a drone. At another stop, we saw a next-generation health tracker from Aura that assesses biometric data to report hydration levels and body mass composition in real time. We also stopped by an instantaneous translation tool called TranslateLive that took home a show prize for accessibility.

All these innovations, and many more products across the show floors, rely on algorithms and machine learning. What wasn’t on display, but is of critical importance, is the need for broad representation within the training sets from which the machines learn, and for diversity in the perspectives of the engineers and programmers. Without considering the potential for unconscious bias, a “smart” health tracker could be calibrated to male body composition, producing over- or under-estimated hydration levels for women; the “smart” translator could miss the regional nuance of a Québécois speaker; and the “smart” security device could use biased data to exacerbate racial profiling.

While it may be a challenge to keep the invisible front of mind, I encourage you to interrogate the inputs to your models. Test to see if the machine has learned something that is unexpected or outdated. For example, it is cost effective to teach natural language processing using historical texts that are in the public domain; however, gendered roles have evolved significantly since these authors’ lifetimes. Think twice before bringing the biases of the past into the foundation for our future. 
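One way to make that interrogation concrete is a simple association audit of the training text itself. The sketch below is purely illustrative: the toy corpus and the `gendered_cooccurrence` helper are hypothetical stand-ins for a real training set and a real auditing pipeline, but they show the idea of checking whether a historical corpus pairs occupations with gendered pronouns in outdated ways before that pattern is baked into a model.

```python
from collections import Counter

# Hypothetical toy corpus standing in for a public-domain historical text;
# a real audit would run over the actual training data.
corpus = (
    "the nurse said she would help . "
    "the doctor said he would operate . "
    "she was a nurse and he was a doctor . "
    "the engineer said he was busy ."
).split()

def gendered_cooccurrence(tokens, target, window=3):
    """Count how often gendered pronouns appear within `window`
    tokens of `target` -- a crude probe for learned associations."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            context = tokens[max(0, i - window): i + window + 1]
            counts["she"] += context.count("she")
            counts["he"] += context.count("he")
    return counts

print(gendered_cooccurrence(corpus, "nurse"))   # skewed toward "she"
print(gendered_cooccurrence(corpus, "doctor"))  # skewed toward "he"
```

If the counts skew the way the historical authors wrote rather than the way the world works today, that is a signal to rebalance or supplement the corpus before training.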

Once the bias is recognized, we can work to course correct. A great example of this came from a partnership between Snapchat, WPP’s Ogilvy, and S.C. Johnson’s Glade in Saudi Arabia. Though a leader in augmented reality, Snapchat had built its facial recognition tools on inputs of eyes, a nose, and a mouth. In Saudi Arabia, this meant leaving niqab-wearing women on the sidelines. These companies have since partnered to recognize eye characteristics in addition to a full face, allowing more users to engage in the fun.

Supplementary Materials:

https://www.ogilvy.com/feed/ogilvy-wins-more-golds-lions-on-day-four-of-the-2019-cannes-lions-international-festival-of-creativity/

https://www.forbes.com/sites/grrlscientist/2019/10/22/invisible-women-exposing-data-bias-in-a-world-designed-for-men/#3e7a4b9c3989

http://saudigazette.com.sa/article/560926

https://www.sunflower-labs.com/ 

https://auradevices.io/

https://www.connectwithila.com/ 
