Wed, 21 March 2018
Today on the Salesforce Admins Podcast we’re talking to Kathy Baxter, User Research Architect at Salesforce and previous Salesforce Admins Podcast guest. With Einstein being all the rage, we’re discussing ethics and AI to understand more about what that means for your business.
Join us as we talk about building an ethical culture, creating transparency, and taking action to remove exclusion.
You should subscribe for the full episode, but here are a few takeaways from our conversation with Kathy Baxter.
The biases living inside of our data.
As a Research Architect, Kathy looks at issues that go across clouds. Each cloud has Einstein doing data science work, “but going across all of those different clouds remains the issue of ethics and AI,” Kathy says. “What stands out when I read articles of AI gone wrong is that the creators don’t intend to harm someone,” she says, but we have to realize that algorithms aren’t free from the biases of their creators.
“Unfortunately, we have biases that live in our data,” Kathy explains, “and if we don’t acknowledge that and if we don’t take specific actions to address it then we’re just going to continue to perpetuate them or even make them worse.” To help, she’s done a lot of research to create a set of guidelines that help provide actionable recommendations with Salesforce to implement Einstein while also keeping those biases in check.
The three steps you can take to build ethics into AI.
“The three big categories are first, creating an ethical culture; then being transparent; and then finally taking the action of removing exclusion, whether that’s in your data sets or your algorithms,” Kathy says. For creating an ethical culture, we want to build diverse teams. There’s tons of research out there on why they perform better, “not the least of which is that we avoid product gaps because a segment of the population isn’t represented.” “Diversity is no longer just about accessibility in the terms that we have always thought about it, whether it’s wheelchair ramps or screen readers; now it’s about how we think about inclusion in terms of our AI algorithms,” Kathy says. “Ethics is a mindset, not a checklist.”
For transparency, you need to allow your customers to have control over their data. The GDPR guidelines for the EU that we went over in our episode with Ian Gotts are a pretty good reason to get moving on this, but it’s more important than just compliance. “Customers need to be able to come in and delete their data, or correct it if we have it wrong,” Kathy says. Once you’ve adjusted your mindset and addressed transparency, you’re ready to take action.
Why Salesforce needs to think about our customers’ customers.
“We at Salesforce can’t just think about Salesforce and we can’t just think about our customers, we have to think about our customers’ customers and all of the individuals that get impacted by a system,” Kathy explains, “and only by thinking about them throughout the entire process can we continually look at and see what kind of impact we’re making, and whether it’s the impact we want to make.” Salesforce’s customers own their data, but if that data is biased or skewed, it’s not going to respond accurately when someone comes and asks a question.
“There are some segments of our population that have traditionally been underserved, and if you don’t understand the cultural context then you may think that the AI is making decisions that are fair,” Kathy says, “but instead you just end up perpetuating social injustice without even being aware of it.” We need to be willing to bring in the larger community to have hard conversations if we want to see progress happen.
We want to remind you that if you love what you hear, or even if you don’t, head on over to iTunes and give us a review. It’s super easy to do, and it really helps more Admins find the podcast. Plus, we would really appreciate it.
Love our podcasts?
Subscribe today or review us on iTunes!
Direct download: Interview__Build_Ethics_into_AI_with_Kathy_Baxter.mp3
Category:general -- posted at: 7:22pm PDT