Salesforce Admins Podcast

Today on the Salesforce Admins Podcast, it’s another deep dive with Josh Birk as he talks to Bobby Brill, Senior Director of Product for Einstein Discovery.

Join us as we chat about how you can use Model Builder to harness the power of AI with clicks, not code.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Bobby Brill.

What is Model Builder?

Bobby started his career at Salesforce in Customer Success before working on Wave Analytics. These days, he’s the Senior Director of Product for Einstein Discovery, and he’s here to talk about what Model Builder can do for your business.

If you have Data Cloud, then you already have access to Model Builder via the Einstein Studio Tab. With it, you can create predictive models with clicks, not code, using AI to look through your data and generate actionable insights. As Bobby says, the AI isn’t really the interesting part—it’s how you can use it as a tool to solve your business problems.

BYOM - Build Your Own Model

In traditional machine learning, models are trained on your data to learn the trends behind successful and unsuccessful outcomes, which is what makes accurate predictions possible. For example, if you want to create an opportunity scoring model, you need to point it at your historical data on which opportunities were won and which were lost.

Model Builder lets you do just that, building your own model based on the data in your org. What’s more, it fits seamlessly into the structures admins already understand. We can put a lead scoring model into a flow to sort high-scoring leads into a priority queue. And we can do all of this with clicks, not code.

Building a predictive model that’s good enough

Einstein’s LLM capabilities offer even more possibilities when it comes to using your data with Model Builder. You can process unstructured text, like chats or emails, to measure whether a customer is becoming unhappy, and then plug that signal into a flow that does something to fix it.

One thing that Bobby points out is that building a model is an iterative process. If you have 100% accuracy, you haven’t really created a predictive model so much as a business rule. As long as the impact of a wrong decision is manageable, it’s OK to build something that’s good enough and know that it will improve over time.

There’s a lot more great stuff from Bobby about how to build predictive models and what’s coming next, so be sure to listen to the full episode. And don’t forget to subscribe to hear more from the Salesforce Admins Podcast.


Full show transcript

Josh:
Hello, everybody. Your guest host Josh Birk here. Today we are going to talk to Bobby Brill about Model Builder, which is going to allow you to create your own predictive and generative models to use within Salesforce. So without further delay, let's head on over to Bobby.
All right, today on the show we welcome Bobby Brill to talk about Model Builder. Do you prefer Robert, Bob, Bobby? What do you like to go by?

Bobby:
It's an excellent question. So I'm a junior. My dad is Robert Howard Brill Sr. I have the same first, middle, and last name. He goes by Robert, Rob, or Bob, so I've always been Bobby my whole life.

Josh:
Yeah, I feel you. My brother is Peter. My father was a Carl Peter and my grandfather was a Carl Peter.

Bobby:
Wow.

Josh:
Got very confusing sometimes. Yeah, yeah. So introduce yourself a little bit to the crowd. What do you do at Salesforce?

Bobby:
That's a great question. I've been at Salesforce almost 13 years. I was a customer of Salesforce for about three and a half years prior to joining, so I've been in the ecosystem for quite some time.

Josh:
Got it.

Bobby:
I started off in the Customer Success group, actually it was called Customers for Life. So I worked with customers getting onboarded onto Salesforce. I joined the product team back in 2015 in analytics, so we had this thing called Wave Analytics. So even well before AI I've been working with data. The last year I've actually been part of the Data Cloud team, so I do AI for Data Cloud, and it's called Model Builder.

Josh:
Got it. Got it. Were you interested in AI before it blew up, before it got all big?

Bobby:
Am I interested in AI? I think it's interesting. I think it's really cool technology, but what I really like is how the technology can help our customers solve their business problems. I was a customer, I understood what it was like to just have this tool available and put my data in and what can I do with that data. What I like is showing customers how AI can help them achieve their business goals. I really focus on how the AI helps business goals versus really caring about all the new technology and all the new models that are out. I've got other people that do that. I focus in on how are these models going to be used.

Josh:
Chasing solutions and not trends.

Bobby:
Correct.

Josh:
Like it. Now, before we get into the product, one other question, I just like to ask people this because in technology I find the answers are so varied, was software something you always wanted to get into?

Bobby:
Yes. I actually had a computer science degree, so I was writing software. What I realized is, while writing software is fun, and I actually still really like to debug software, what I really enjoy is coming up with the ideas of what software should do or how it can help solve problems. Product management has really been the thing for me. When I started at Salesforce, I just wanted to get into the company any way I could, so I didn't try for a product manager position-

Josh:
Got it.

Bobby:
... but the second I got in, I had to figure out how to get to this position.

Josh:
I like it. From a very high level, what is your elevator pitch for Model Builder?

Bobby:
Okay, elevator pitch for Model Builder is build predictive models with clicks, not code. It started with actually predictive models. Now that GenAI is available, it's utilize custom, predictive, or generative models with clicks, not code.

Josh:
Okay. Now, when we say model, how do you describe that within the input and output of how we interact with an AI?

Bobby:
That's a great question, I don't think anyone's really asked me this specifically. But I think the way I would best describe it is a model is just a function. You first want to know what you want that function to do. You have to understand what that function is capable of doing. AI is only as good as what the model is capable of doing. So in traditional machine learning, you would have a model that perhaps could tell you what is the likelihood of this lead to convert. And how did it understand that? Well, it had to get some examples of what conversion looked like: give me some leads that successfully converted, give me some leads that didn't, so the model can understand what are the trends for a successful outcome or a non-successful outcome.
That was traditional machine learning. You'd have to train your model. Now, large language models are really good at putting sentences together. It understood text, it's read so much text, it's trained on that, and it knows, when it sees certain words, what's likely to come next. It can predict the next word and the next set of words to come out. And so if you think of a model as just a function, you're going to give it some input and it's going to give you an output. What that function can do is totally dependent on what it was trained for, and there are so many different use cases. But that's I think how I would best describe a model, it's a function.
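
To make that "a model is just a function" idea concrete, here's a minimal Python sketch. The feature names and weights are invented for illustration; a real model learns its weights from your historical data instead of having them hard-coded.

```python
# A model, viewed as a function: lead features in, a conversion probability out.
# The weights below are made up; training replaces guesses like these with
# values learned from historical wins and losses.
def lead_conversion_probability(lead: dict) -> float:
    weights = {
        "opened_email": 0.2,
        "requested_demo": 0.5,
        "referred_by_customer": 0.3,
    }
    score = sum(weight for feature, weight in weights.items() if lead.get(feature))
    return round(min(score, 1.0), 2)

print(lead_conversion_probability({"opened_email": True, "requested_demo": True}))  # 0.7
```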

Josh:
Gotcha. Now let's talk a little bit about building models with clicks, not code. I'm trying to think of the right way to ask this. Let's start with what's your basic user scenario of something that they're going to try to build?

Bobby:
So thankfully when we're talking about models, it's all around business data. We are a company that sells to businesses. They put their data in our systems, and while a model can do lots of things, we try to focus on what are the things that our customers are likely doing. The easiest one, Salesforce has had Sales Cloud the longest, so you would build an opportunity scoring model. And that is nothing more than a predictive model that understands what are the traits that go towards an opportunity that's going to close, or win I guess, versus an opportunity that's going to lose. That's probably the simplest thing, and this is what machine learning has really done over the past probably 20 years. People have been solving this problem forever. But every single customer wants this, and they want to make sure that it's trained on their data. They use Salesforce because they can fully customize how they want that data to be stored, what object. They're going to have relationships across other objects. It's not going to be everything in an opportunity object. It's going to be across multiple things.
And they want to make sure it's their data. The reason they don't want to use an out-of-the-box model is they don't know what goes into it. Some people like that, but our large enterprises like to understand what goes into it. So by giving our customers control and just saying, "Tell us where this data is," we will then go train that model, and we can predict the likelihood of an opportunity closing. Or take Service Cloud: predict the likelihood of a case escalating. Business processes are really important too: predict the time it's going to take for an opportunity to close or go from stage one to stage two, or for a service case to go from the time it was created to when we think it will be resolved. These are all things that I think are bread and butter to Salesforce and things that they can predict. And then again, that's your traditional machine learning, that's where you're going to need to use your data to train that model.

Josh:
I think it's very interesting because as you say, this isn't a brand new problem, these are questions people have had and have tried to answer. Right now I'm imagining the world's worst formula field that's trying to connect 17 different data points and make a guess about the probability of an opportunity closing.

Bobby:
Exactly, yes.

Josh:
How would you describe the level of precision that you're seeing from Model Builder these days?

Bobby:
The level of precision depends on the data. Some models can be really accurate, but if you have a predictive model that's 100% accurate, then it's not a predictive model, it's some business rule. You've basically told the model, "Look at this field. When you see this value, 100% of the time it's going to be a converted opportunity or... " Sorry, I guess a closed won opportunity. "And when you see this variable, it's always going to be a loss." A lot of times this is data leakage. This is very common in machine learning, where you introduce something that the model just looks at and it's like, "I know what I'm going to do." So you never want it to be perfectly accurate. And then there's other levels of accuracy. You could ask, "60% accurate, is that good enough?" Well, it's better than a coin flip, so you are already getting some uplift.
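
Here's a minimal sketch of the data leakage trap Bobby describes, using synthetic data and scikit-learn (the field names and numbers are made up): a "leaky" column that's only filled in on lost deals lets the model score perfectly without predicting anything.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 1000
amount = rng.normal(50_000, 15_000, n)              # a legitimate predictor
won = (amount + rng.normal(0, 20_000, n)) > 55_000  # true outcome, with noise
loss_reason_filled = (~won).astype(float)           # leaky field: only filled on lost deals

X = np.column_stack([amount, loss_reason_filled])
X_train, X_test, y_train, y_test = train_test_split(X, won, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))  # 1.0 -- a business rule, not a prediction

# Drop the leaky column and the "real" accuracy appears (well under 100%).
model2 = DecisionTreeClassifier(random_state=0).fit(X_train[:, :1], y_train)
print(accuracy_score(y_test, model2.predict(X_test[:, :1])))
```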

Josh:
Right.

Bobby:
So then really it's up to the business to figure out what is the impact of a wrong prediction. And a lot of times the businesses, they know the impact of that wrong prediction. If it helps you prioritize the things faster, great. Then start with something that's, let's say, 60% accurate and then work towards something that is a little bit more accurate. It's an iterative process, so try to not be afraid of doing those iterations because you can get some uplift.

Josh:
Yeah. I'm going to ask you the world's most leading question, but it's something that we keep trying to get people to think about, because when it comes to the data that the model is going to leverage, there's size, but there's also quality. How important is the concept of clean data to getting that prediction model?

Bobby:
Clean data is very important to getting a good model. However, I don't think there's any company that thinks they have clean data. They all think their data's terrible. I think if you were to look at Salesforce, I mean, we know the data really well here, and I wouldn't say that it's clean. But I think you could argue that you have enough clean data to train these models. So it really depends on the use case.

Josh:
Got it.

Bobby:
If we're talking about sales data, you probably have a lot. Service data, that's probably the cleanest data out there. Service processes are very much: you've got to get the data in, you work on SLAs, there's very much these touch points. That data is really good. So if you ever want to try something like, "Where is my data the cleanest?" I guarantee you it's service. Sales is, people don't enter things in right.

Josh:
Okay, so I really love that messaging because it's not that cleanliness isn't important, but you don't need perfection to start using these tools.

Bobby:
Right. And then I will say that with generative AI and the ability to process a lot of, I'm going to call it, unstructured text, let's say chats or email, and getting information out of that is perfect for actually cleaning up your data or even putting it into a predictive model. Then the next thing is layer these two things together. There's going to be a data cleanup. You're going to be training a model, but then when you're actually delivering the predictions, you don't want to have to worry about cleaning up that data. That's where the LLMs can be used. Something comes in, you get a signal that says, "Hey, this customer, let's say, their sentiment is dropping." Well, how do you know their sentiment is dropping? Because an LLM is figuring it out and saying, "The customer's not happy," and the models are really good at understanding that, which is pretty cool.
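
As a rough illustration of that layering, here's a minimal sketch. The keyword-based scorer is just a runnable stand-in for the LLM call Bobby describes, and the threshold and queue names are invented.

```python
# Turn unstructured text into a structured signal that a flow (or a predictive
# model) can act on. The keyword scorer below is only a stand-in for an LLM.
def classify_sentiment(text: str) -> float:
    """Returns a rough sentiment from -1.0 (unhappy) to 0.0 (neutral)."""
    negative_words = ("unhappy", "frustrated", "cancel", "refund", "disappointed")
    hits = sum(word in text.lower() for word in negative_words)
    return -min(hits, 3) / 3

def route_case(email_body: str) -> str:
    # Hypothetical decision: dropping sentiment goes to an escalation queue.
    return "escalation_queue" if classify_sentiment(email_body) <= -0.5 else "standard_queue"

print(route_case("I'm frustrated and thinking about cancelling my subscription."))
```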

Josh:
That's a really interesting point. I've actually not tried to really consider that before, because to an LLM, let's take one of the most common data quality problems, if you have, say, redundant fields or you have duplicate data, the LLM isn't actually as worried about that as, say, a standard reporting tool would be.

Bobby:
That's correct. Yep.

Josh:
So I think we've been touching on it as we've been talking about this, but where do you see the role of an admin being when it comes to constructing and maintaining these models?

Bobby:
The best part about Model Builder, which I haven't even talked about, is how it integrates back with Salesforce. What we've tried to do is we give you this tool where you can build this really, really good function. It's got an input, it's got an output. As long as admins understand that there's going to be some inputs needing to go into this and it's going to have an output, you can actually put the models wherever you want. The same way that you're building, let's say, a flow, and within the flow there's a decision tree, admins know how that works really well, or even the admins that can write a little bit of Apex code. So as long as you know how to do that, as long as you know how these different Salesforce functions are coming together, models are going to be just another input to that.
So take a flow with a decision. Perhaps it's a case escalation... Or no, let's not even take case escalation, let's talk about leads and lead prioritization. You built a flow, you want to put leads in the right queue. Well, what if before you even put a lead into a queue you can predict the likelihood that this lead is going to be converted? You can say, "Hey, everything that has a score between, let's say, 80 and 100, 100 being the highest score, maybe you want to route that to a special queue." You understand queues, you understand decision trees in a flow. So now all you need to know is, hey, I have this score. How do I get the score? Maybe there's another team that figured that out, or maybe you were comfortable enough, because you know the data, to actually train that model. Now you can just use this as a decision. You don't have to actually show the prediction to anyone. Who cares if that prediction is written anywhere? You don't need it for anything, you just need it at that point.
So admins should start thinking about this as just another function. I think flow is a great way to look at it because it's a process. Something goes through step one, you do this, step two, you do that, and so on. And that model might be just part of that process.

Josh:
When you're saying that's interacting with flow, are you saying that it's like I have a custom object, I have a custom field, I can make a decision tree based on that? Is it that same level of implementation?

Bobby:
It could be. You don't have to write that prediction out anywhere. We can actually generate it live within that flow. So let's say a lead comes in, you kick off a flow, so you have a lead form, the lead comes in, it goes through a flow. You're not sure where that lead is going to go. You technically I guess created the record, and then you want to figure out where does this lead go.
Well, you don't need to score it and write that score back to the lead. You can actually within the flow call our model endpoint so we can get an on-demand prediction. We're going to give you that on-demand prediction and we can route it somewhere. What's really cool within a flow, you can also call LLM models. So perhaps the lead comes in, you have some unstructured text, maybe you care about sentiment, maybe you want to understand what's the intent from some texts, an LLM theoretically can go do that. And then you get the output of that LLM and you pass that into the model. Now we know more about this lead or this person and then make a prediction, then file the lead away in a queue. That prediction becomes sort of, I'm going to call it, throwaway. You don't need to use it anywhere.
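
Here's a minimal sketch of that flow decision. The scorer stands in for the on-demand call to the model endpoint; the 80-point cutoff comes from the conversation, while the features and queue names are invented.

```python
# A flow-style decision: get an on-demand score, route the lead, discard the score.
def score_lead(lead: dict) -> int:
    # Stand-in for the on-demand model endpoint call (returns 0-100).
    points = 40 if lead.get("industry") in {"SaaS", "Biotech"} else 10
    points += 40 if lead.get("employees", 0) > 500 else 20
    points += 20 if lead.get("asked_about_pricing") else 0
    return min(points, 100)

def route_lead(lead: dict) -> str:
    score = score_lead(lead)  # used only at this decision point, never stored
    return "priority_queue" if score >= 80 else "standard_queue"

print(route_lead({"industry": "SaaS", "employees": 900, "asked_about_pricing": True}))
```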

Josh:
Got it. It's a fun rule [inaudible 00:15:42]. You get the data-

Bobby:
Correct.

Josh:
... on demand and then... Yes, gotcha. Now I think we just touched on sort of two different forms of Model Builder. We have the clicks, declarative generate your own model, and then we have bringing in an LLM, and I think this is what we keep referring to as bring your own model. What does bring your own model look like and what kind of models are we supporting there?

Bobby:
That's a good question. When I talked about what's the value of Model Builder, my elevator pitch, it was all about building stuff with clicks. And that's because we're really allowing all of our customers to have access to this stuff. But the reality is there's only so much you can do with clicks and then the rest you're going to do with code. We have this idea of bring your own model, whether it's a predictive model or an LLM. You're just connecting these models that live in your infrastructure, customer's infrastructure, whether it's SageMaker, Vertex, Databricks, or maybe it's your Azure OpenAI model, or maybe it's your Google Gemini model. We're giving you the ability to just connect these models directly to Salesforce so you can operationalize them the same way that you would as if it was built on the platform. So you'd have full platform functionality, but the models themselves, they're not hosted in Salesforce.
So there's all kinds of things you can do with that. Your data science team can make sure that they have full control. Let's say they fine tune the LLM so it talks specifically in your brand language, for instance. That's a use case.

Josh:
Gotcha.

Bobby:
We want to give customers the ability to do this on the platform as well. So the same clicks, not code, we want to bring that to LLMs. That's a future thing. We want to give that capability.

Josh:
I'm going to make a comparison here, and I'm going to be a little controversial to my artist friends who I've had these arguments with, but I know artists who have actually built their own LLM model based on their own art, and then they're treating these models as their little AI buddy to try different things very quickly and then kind of motivates them in a very specific direction. Is that a quality comparison to what you're seeing people are doing when building their own models?

Bobby:
It's a good question. I don't know that that's... Well, I don't know. I think what we are seeing, so brand voice for sure is something that people want an LLM to do. They put sentences together really well, but if you are distributing anything to your customers, you want to make sure that the sentences that are generated are on point to how you would speak as if it was a regular person. So fine-tuning that with specific words and phrases, that's what we're starting to see some customers do with their own LLM. But we're also seeing that there's other techniques, retrieval augmented generation, or RAG. People call it RAG, which I feel like is a... I can't just say RAG without saying retrieval augmented generation to customers because I don't want to be looked at like I'm crazy. But then also-

Josh:
It is a sort of unfortunate acronym. You're correct, yeah.

Bobby:
It is. But I guess it's getting common, so I'm correcting it... Or not correcting, I'm not saying the full thing as often. But we're finding that that is another approach to not having to train those models. I think research is still out on which is the most effective mechanism because you can say at the time that you want that LLM to process something, say, here's some examples. So you don't have to train because training an LLM is pretty expensive right now.

Josh:
Yeah. Both from a quantity and processing point of view, right?

Bobby:
That's correct.

Josh:
Yeah. Take that one step further for me. How does RAG change the game a little bit?

Bobby:
With the ability to quickly find some examples of things that you're looking for... Okay, let's say you are replying back to a customer for customer service, and you want to automate it. So the customer asks a question, and the LLM obviously can't really answer the question unless you provide it some information. So you could give it some knowledge right away. So first, find some similar cases, find the resolutions of those cases, and summarize that and go back to the customer. So simply by searching for certain resolutions and responding back or summarizing those resolutions, you already have brand voice, because those resolutions themselves, we're assuming, were all typed in by someone who understands how you're supposed to respond back. And then, let's say the LLM responds back, it's already similar, and that gets recorded as the resolution. Now the next time you're responding back, it already sort of knew how to respond, and the next time, if you're searching for similar things, you'll probably get the same kind of response back. Did that make sense?
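
Here's a minimal sketch of that retrieve-then-ground pattern, with hypothetical case data, TF-IDF similarity standing in for whatever retrieval you'd actually use, and a placeholder where the LLM call would go.

```python
# Retrieval augmented generation in miniature: find similar past resolutions,
# then build a prompt that grounds the reply in them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_resolutions = [
    "Password reset failing: cleared the SSO cache and re-sent the invitation.",
    "Report timing out: added an indexed date filter and narrowed the range.",
    "Login loop on mobile: updated the connected app policy and re-authenticated.",
]

def retrieve_similar(question, k=2):
    vectorizer = TfidfVectorizer().fit(past_resolutions + [question])
    similarities = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(past_resolutions)
    )[0]
    return [past_resolutions[i] for i in similarities.argsort()[::-1][:k]]

def build_prompt(question):
    examples = "\n".join(f"- {r}" for r in retrieve_similar(question))
    return (
        "Reply to the customer in the same tone as these past resolutions, "
        f"and reuse their fixes where they apply:\n{examples}\n\nCustomer question: {question}"
    )

prompt = build_prompt("I can't log in on my phone, it keeps looping back.")
print(prompt)
# reply = call_llm(prompt)   # placeholder: swap in whichever LLM you've connected
```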

Josh:
It did. It did actually. Yeah. It's like, as a fishing analogy, you're fishing in the sea that you already have. You're bringing in examples that have already been contextualized within your data and you're just like, "Go ahead and just start there." Does that sound accurate?

Bobby:
Exactly. Yeah, that's exactly it. And then the other thing is, as you're responding back you could... Because when you're talking to an LLM, you have to generate this prompt. I know this isn't part of the subject here, but Prompt Builder is another great tool that's on top of Model Builder, where you basically tell the LLM what you want it to do. You program that LLM, and you can insert the retrieval augmented generation wherever you want. It's like you're building an email template and you're just saying, "Hey, here's some similar cases." And then around that, within Prompt Builder, you can say, "Here are some examples, summarized like this."

Josh:
Got it.

Bobby:
So you're using this LLM as if it's an assistant that can go do something for you and you give it a bunch of instructions and you put that all in one place. It's pretty cool.

Josh:
Yeah, no, it's okay. I still get another nickel if we say Prompt Builder, so it's a good advertisement.

Bobby:
Perfect.

Josh:
And on that note, so I'm thinking of the Prompt Builder interface and where you build out the prompt and then over on the right we've got like, "Here are the models you can use." So are we going to make that portion transparent to Model Builder and be like, "Oh, hey, my data science team created this specialized model based on our marketing, our brand, our voice. Please use that instead of, say, OpenAPI... or OpenAI 3.0 or something like that."

Bobby:
Yep. So that's actually in there today. So if you're using Prompt Builder, when you look at the models, there's a drop-down. The default is going to be the standard model. There's a drop-down there, you can change that to custom models. Once you change that to custom models, it's any other thing that shows up in Model Builder.
So this could just be, let's say, OpenAI 3.5 turbo and you've configured it slightly, you've changed the temperature, one of those parameters. We have a model playground that allows you to do that. Or it's an LLM that you brought in. So whether it's the ones that we have GA today or the ones that are coming, it's a model that's your own and you have full control. So then that just shows up in Prompt Builder and you build the prompt. In the future, we're looking at how to, I don't know, give you more controls over which LLMs should show up in Prompt Builder versus the ones that you don't want to have show up. So while today it's everything, we know that our customers want that finer granularity, so we're thinking about that.

Josh:
Got it. Well, let's touch on those two points. What is availability for Model Builder looking like today?

Bobby:
Model Builder, it's actually packaged with Data Cloud. So if you have Data Cloud... I didn't say buy Data Cloud because Data Cloud is now being packaged with many different things. It's a consumption-based pricing model, so this is new to a lot of our customers. But what's cool about doing a consumption-based model for pricing is this tool can just be there. We want Data Cloud to be an extension of the platform, just like you're building custom objects and things like that, we want Data Cloud to be as easy as that. It's just there for everyone. It's a tab called Einstein Studio within Data Cloud. That name may or may not change in the future, so just bear with me if it does. I know we're talking about Model Builder and we have a tab called Einstein Studio, and we like to say Einstein 1 Copilot Studio.
I love marketing at Salesforce. It's fun because it changes and I'm like, "I got to just go with it." So Einstein Studio, it's packaged with Data Cloud. So you get Data Cloud, you go into the Data Cloud app, you find Einstein Studio. But it's just a tab. So just like you can put the Reports tab in any app that you want, you can put Einstein Studio in whatever app you want. So if you're an admin, it's just a tab, you'll find it. It's only there if you have Data Cloud turned on in your org, but that is currently how it's packaged. Whether that changes in the future, who knows?

Josh:
Who knows? I do feel like if there's one thing our audience has learned if they've been in the Salesforce ecosystem for even half a second, it's that all things might change. They might change their name, they might change their location, they might change their pricing. So if you're listening to this and you're interested, please check out the help documentation or talk to your account executive. Speaking of things that might change, anything on the roadmap you want to give a shout-out to?

Bobby:
Model Builder itself, I mean, there's lots of things we're doing with Model Builder just in this release. Actually here, this is really important, for all you admins out there, we are working as fast as we can to get features out. We are no longer on the Salesforce three-release cycle. We are going to be coming out with stuff on some monthly cycle. You're going to see that across all AI. You're going to see that across Data Cloud. We're coming out with things just on a different cycle, so please bear with us. I know how difficult it is even to keep up with our three releases, so just bear with us.
We, in fact, have a release coming up very soon with Model Builder for some of the predictive AI stuff. We're making it easier so that you can build models with clicks even easier than you could before. I would say there's nothing earth-shattering there, but we're making it easier. You're going to see a lot more LLMs that you can bring. You're also going to see a lot more default LLMs, ones that are just shipped. We have a handful of models today from OpenAI and Azure OpenAI. You're going to start to see ones from other vendors as well. So they're just going to show up, everyone just has access to it.

Josh:
Got it.

Bobby:
And configuring those models within flows and prompts and all these things, it's just going to get a lot easier. So please bear with us. Keep up with the release notes because release notes are only three times a year. We're just updating release notes mid-release, which is weird.

Josh:
Got it.

Bobby:
Trust me, I know this is weird because I've been around a long time and I keep asking myself, "Should we be doing this?" And you know what? We're doing it, so here we are.

Josh:
Not to panic anybody, but it feels like a fundamental change that Salesforce might be evolving to in the long run. So everybody obviously can keep your eyes on admin.salesforce.com, and we will try to keep you in the loop as those changes are made. And Bobby, do we have Trailhead content on this?

Bobby:
Yes. In fact, we just came out with a Trailhead for Model Builder, just the predictive model piece. I think there's some coming for LLMs in the future, but just the predictive model piece that just shipped, so take a look.

Josh:
Sounds great. Bobby, thank you so much for the great conversation and information. That was a lot of fun.

Bobby:
Absolutely. Thanks for having me.

Josh:
Once again, I want to thank Bobby for joining us and telling us all the great things about Model Builder. Now, if you want to learn more about Model Builder and of course Salesforce in general, head on over to admin.salesforce.com, where you can hear more of this show, and also, of course, our friend Trailhead for learning about the Salesforce platform. Once again, everybody, thank you for listening, and we'll talk to you next week.



Direct download: What_Are_the_Key_Features_of_Salesforces_Model_Builder_.mp3
Category:general -- posted at: 1:00am PDT

Today on the Salesforce Admins Podcast, we talk to Mehmet Orun, GM and Data Strategist at PeerNova.

Join us as we chat about why data health is easier than you think and what you can do to get started.

You should subscribe for the full episode, but here are a few takeaways from our conversation with Mehmet Orun.

Healthy data drives business outcomes

We talk a lot about getting your data ready for AI, but there’s a simpler question you need to ask yourself: is your data driving business outcomes? After all, AI insights are only as good as the data they’re based on.

That’s why I’ve been looking forward to this episode with Mehmet Orun. He recently gave a presentation about all this and more, entitled “Harnessing AI: Strategic Planning & Data Best Practices for Salesforce Success,” and I was able to grab him for a quick conversation about how you can improve data health in your org.

Questions for a foundational data health check

If you’re cooking, you want to make sure that you have the basic ingredients and enough space on your countertop. And the same is true with your org. You need to have your data health squared away before you can cook up something tasty.

For Mehmet, a foundational data health check starts with asking three questions:

  1. Do you have any objects that are close to or past their limits?

  2. Are you retaining too much data in your CRM that you don’t use?

  3. Do you have unintentional duplicates in your solution and do you know where they come from?

You want to zero in on which data matters for which specific business need. You don’t need it to be perfect, you just need a solution that is good enough to do what you want it to do.

How to get started with data cleanup

Every org is going to have some duplicates, and Mehmet recommends thinking through a few things about how data works in your business before you merge everything. Is there a business reason to have duplicate records? Do you have other information in objects or fields that can help you decide whether to match or merge?

Above all, Mehmet wants you to know that obtaining good data health in your org isn’t as difficult or time consuming as it sounds. There are free data profiling tools on AppExchange that can help you get most of the way there. So what are you waiting for?

There’s a lot more great stuff from Mehmet about what to look for when you’re doing a data health checkup, so be sure to listen to the full episode. And don’t forget to subscribe to hear more from the Salesforce Admins Podcast.


Full show transcript

Mike:
We talk a lot about data readiness and getting ready for AI, but let's take a step back. Is your data really driving business outcomes? So that's what we're going to talk about today on the podcast, and I am bringing in Mehmet Orun, who is the GM and Data Strategist at PeerNova. I mean, just looking through his LinkedIn profile, he has a ton of publications and a ton of patents. I actually don't think I've ever had anybody on the podcast that has had patents. And I should have asked him about that. So spoiler, I don't ask him about patents. But we're going to talk about getting your data ready to drive business outcomes. You know what? Even if you're not ready to use AI, this is still a good podcast for it. So with that, let's get Mehmet on the podcast. So Mehmet, welcome to the podcast.

Mehmet:
Thank you, Mike. It's a true pleasure to be here.

Mike:
Yeah, well, you ran into a colleague of mine at World Tour London. And well, I mean everybody's talking about AI, and you're talking about AI and data. But before we get into that, why don't you give me a little bit of a brief history of how you got into the Salesforce ecosystem?

Mehmet:
So before I was a partner, I was a Salesforce employee. Before I was an employee, I was a customer. I worked for Genentech, which is a biotech company, for a period of time. And what was interesting about Genentech was our CEO was a scientist. We looked at problems like they were clinical trials. You formed a hypothesis. In a safe way, you chose to assess if that hypothesis was going to be true or not. And then we would look at how can we solve it at greater scale. What that meant was when we were getting ready to launch a new set of products, and the enterprise architecture was going to be shifting from 150 or so disconnected applications, this is 20 years ago by the way, and the story today may sound much the same for many customers and companies, we wanted to bet on a new CRM solution, rather than the homegrown or the older technology ones.
And Genentech became the first life sciences company to choose Salesforce. Because the idea of not needing to spend time just working on an upgrade, rather than solving business problems, made a lot of sense to us. There were a few challenges, like a contact model didn't really work for life sciences, because we are really engaging with a doctor or a prescriber who may teach at a university hospital, they may see patients at a different facility, they may have their own practice. By the way, this is why person account was born. If you're curious about the trivia, happy to dive into the details.

Mike:
You need one of those shirts. "I'm the reason person accounts exist."

Mehmet:
Yeah, I'm not sure how popular it may be, but maybe I'll submit to shirt force.

Mike:
Yeah, you never know. Might try. So I've had Liz Helengo on the podcast, we've talked about data quality. And you have a great presentation out there, Best Practices for AI Ready Salesforce Data. Do you think people's Salesforce data is AI ready?

Mehmet:
From what I have seen, and I do engage with many organizations still, neither the data nor the metadata is AI ready, the vast majority of the time. Now the question of readiness is interesting because it depends on how far you want to go. What is it that you're trying to solve or accomplish? If you just want to see if you can get recommendations, it's a proof of technology, great. You can definitely use it. If you're trying to get consistent answers based on reliable data, and make sure it is behind the trust layer, at a minimum, organizations need to do an assessment of the current state of their data and metadata, and make sure that their architecture is going to meet their needs, not just today, but on an ongoing basis.

Mike:
One of the questions that you ask, and I think this is pretty paramount, because anytime we talk about data and data cleanliness, it's, oh, I've got to look at everything. And there could be some objects that have two or 300 fields on them. Lord knows why, but there's a lot of fields, right? Because we're capturing everything. One of the things that you point out is, how do I know if it's good enough data to drive business outcomes? And I think that second part, that clarifying part, is really important. Because when we're looking at data, yes, we need to look at everything. But what is the data that we really need to have perfected to drive a business outcome? So what should admins be looking at?

Mehmet:
Before diving into data that matters to business outcomes, one of the things I suggest is what is the foundational data health of your org in general? And I use cooking or dance analogies. Usually I'm [inaudible 00:05:27].

Mike:
I use cooking analogies too.

Mehmet:
Great. So if I'm getting ready to cook a big meal, I want to make sure I have the right ingredients, and the ingredients I have are also fresh. They haven't passed their expiration date. I want to make sure that I have enough space on my countertop. Not everything has to be cleaned, not everything has to be put away. I don't need to have every single ingredient up there, but I need to have just enough. So when I say a foundational data health check, we should always know, do we have any objects that are close to or past their limits. You mentioned two or 300 fields. I have seen 900 custom fields, which is the upper limit.

Mike:
I was trying to be nice.

Mehmet:
The Salesforce platform is incredibly flexible. We can add packages from AppExchange, which we install [inaudible 00:06:21] custom fields at times. And then after a while processes change, people change, new people come in, we stop using fields that we used to. Or perhaps fields were added, but we weren't quite sure what they were going to be. User adoption had gaps. I think you can find many parts of this, but if your org is more than five years old, your foundational objects, account, contact, case, opportunity, probably have 25% or more custom fields that haven't been used in the last one or two years. So one aspect of foundational data health is understanding if any of your objects are nearing or at their limits. Number two is, are you retaining too much data in your CRM org, because that is going to be part of what data you want to act on.
If you have rubbish data or if you have data that has outlived its usefulness, archiving solutions are great. And the third piece to be mindful of is, do you have unintentional versus intentional duplicates in your solution. Just looking at those three areas is going to give you a sense of data consistency, data completeness, and data relevance risks. Once we look at that, then it is a matter of looking at what are the fields that matter, what is the data that matters to a specific business need at a point in time. I'm happy to dive into more details, but do you have any questions on the foundational data health outline I just gave?

Mike:
Well, I think you mentioned duplicates. So I'm an admin, I'm looking at my data, and I find duplicates. Where should I start, to understand: are these intentional and good, are these intentional and bad, do I need to deduplicate? What are the types of questions, who are the stakeholders that I should be looking at to understand if we should have duplicates in our system, let alone not even talking about looking at other systems?

Mehmet:
Yeah. When I talk to people, let's say that you're an admin for... You can make up a scenario.

Mike:
Sure.

Mehmet:
So how did you find out about the duplicate problem, and can you describe to me what is the problem these records are causing on your end users? The reason I start with that question is I am listening for the answer that is telling me whether stakeholder impact is well understood, and what is the nature of that impact that can really help drive the type of solution we could put in place. Time to value is something that's going to be quite important, as well as seeking to avoid nonreversible fixes. Because many solutions are not going to be 100% right. Especially when it comes to match [inaudible 00:09:23] type scenarios.
A common challenge is, let's say that it's a call center operation and we have a lot of contacts, but the data is distributed, which means it may be out of state, information may be incomplete. I would often ask the question, "So what if, regardless of how many duplicates you have, every single record you click on shows the exact same transactional history? Would that solve your business need?" Or if it's a marketing challenge and they are concerned about consent and compliance, and they are unsure about which of these values should we pick, I would ask a question, "That's great. Do you have the policies in place on how you would approach these different related records?" And the question that I get incomplete answers to most often is, "Do you know why you have these duplicates, and if you are supposed to have some of these duplicates by design?"

Mike:
I can only imagine the look on people's faces when you ask that.

Mehmet:
Well, their examples help. I've written a few articles on that, and I sent people pictures and asked them how this could relate to their line of business. One of the things I love about how Salesforce talks about solutions is they put a person in the middle surrounded by the icons of the era. Every industry can use that mindset and think about their interaction with an individual or with an organization. The reality is, whether you're a nonprofit, whether you're a consumer company, or you are a B2B company, you are likely to encounter the same individual or same organization in more than one business context.

Mike:
Yeah, Very true.

Mehmet:
There's a high risk in being overzealous in approaching duplicates. I worked for Salesforce in the past, I worked for Genentech in the past, I work for PeerNova now, I'm involved in the Trailblazer community. I, Mehmet, as a single human being, have at least four different business contexts in my engagement and relationship. So if you try to combine and merge all four of those records into one, first off, which email address do you take or keep, given the nature of the CRM data model? But what if some of the interactions were contact specific, account specific? Are we going to introduce more risks, or would we be better off recognizing all of these records are associated with one person, and then use the contact record when it makes sense, use the individual record when that makes sense? Is the example helpful?

Mike:
No, it is, because I think that's what a lot of people run into, is you run reports and you look at the data very, I don't want to say abstractly, but you try to look at it very black and white, and say, "Well, there are four Mehmets, so we should merge them. There shouldn't be four." But you bring up a very important point, which is that the associated, let's say, account for this person really brings context to what you were discussing with that person at the time. Which is lost when you merge it all. Because to your point, all of those activities would just merge together, and it wouldn't make sense. It's like, so we talked to them one minute about partnering and the next minute about this, and it's like, wait, why was this happening? And you're losing the context of where this individual contact was employed. So I think that's important. Those are the questions that people have to ask: yes, that is one person four times, but the context of what was our relationship with them is very important.

Mehmet:
One of the other aspects is when we're looking at the records. I think people jump to, "Oh, I know they are duplicates because they have the same name and email, or they have the same name and address, or they have the same name and LinkedIn profile. Whatever it may be." It is incredibly important to look at the object as a whole, to look at the fields as a whole, for three reasons. There may be fields, record types, or some other custom field for classification that actually indicate this person or this organization is playing a different role. That may be the basis of what else to include in a match rule. So if the context is different, you may want to match them, but you may not want to merge them. Or you still want to match them, but you want to create a unified profile in Data Cloud.
Number two, there may be other fields that you can use, that increases matchability of that particular record. When I talk about account matching, I often say account matching is not a string matching problem. You are not trying to match Salesforce to salesforce.com or Salesforce Inc. What you are trying to do is understand Salesforce in San Francisco at 1 Market Street, which is the old address, is the same location as the new headquarters. Salesforce in Bishopsgate, London is part of Salesforce corporate hierarchy, but it's a distinct entity and subsidiary. By the way, Slack in San Francisco, completely different name, is also a legitimate distinct but related account record.
If you don't have the depth of the B2B domain, let's say that you're a new admin, but you profile your account object, you may discover there are other fields that are not standard fields. They were brought in by a managed package, let's say D&B Connect or BVD Connect, but then you see fields like DUNS number, global ultimate DUNS number, that have a high population rate but a low distinct rate. Maybe you can use these fields as part of your match rules also, and discover that you have a lot more attributes at your disposal than just name and contact points.

Mike:
Right. Yeah, it's really diving deeper.

Mehmet:
Absolutely. And the third and final reason, and I ask this question to everyone, "Mike, what is your favorite fake email address or phone number?"

Mike:
I can't tell you.

Mehmet:
Without exception, every single org I've analyzed either had invalid or fake contact points in it. What is invalid? Maybe it is sales@companyname.com, or support@companyname.com. They didn't have an email address, it was a required field. They just put a group email, or perhaps they put their own. Na@na.com. Noemail@noemail.com. If we do not discover the data content that may also throw off our match result, not only may we over-merge where the contacts needed to be separate, we may actually incorrectly match and merge accounts and contacts in ways they never should be. So we started this from duplicate management. I know the session is about data reliability and not just for AI. At the end of the day, we want to discover what is knowable with statistical techniques, with data profiling, as much as we can. And once we determine that, we want to define what an experiment would look like, how we would know for certain this is the outcome we're looking for, and then drive it forward.
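
To make the fake-contact-point problem concrete, here's a minimal sketch (pandas assumed; the junk-value list and field names are illustrative, not a complete rule set).

```python
# Screen out junk contact points before they become match keys.
import pandas as pd

KNOWN_FAKES = {"na@na.com", "noemail@noemail.com", "none@none.com"}
GROUP_PREFIXES = ("sales@", "support@", "info@")   # group inboxes, not people

def match_key(email):
    if not isinstance(email, str) or "@" not in email:
        return None
    email = email.strip().lower()
    if email in KNOWN_FAKES or email.startswith(GROUP_PREFIXES):
        return None
    return email

contacts = pd.DataFrame({
    "Name":  ["Ada Lopez", "Ada Lopez", "Bo Chen", "Bo Chen"],
    "Email": ["ada@example.com", "na@na.com", "sales@example.com", "bo@example.com"],
})
contacts["match_key"] = contacts["Email"].map(match_key)

# Only real, person-level addresses form candidate duplicate groups.
print(contacts.groupby("match_key")["Name"].apply(list))
```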

Mike:
Yeah. No, you're right. One thing you bring up, and I'm going to ask, maybe it's a bit of a facetious question, but I'd be curious what your answer is. Do most organizations have someone responsible for data quality?

Mehmet:
I think most organizations have someone that cares about data quality, but that doesn't mean they're necessarily responsible or empowered.

Mike:
What's the difference?

Mehmet:
I have been in orgs where, let's say, there's a data quality manager, it's an independent role, it reports to the business, sounds great, but it is outside of the org hierarchy that the CRM administrator reports into. Even if they get along, if the CRM administrator cannot act on requests unless it is associated with a specific project task, there tends to be delays or friction. Because I don't see a lot of organizations saying we need to launch a data quality initiative. Most of the initiatives are business initiatives where data quality assessments, verification, and as-needed improvement should be a part of it. But if your job is to ensure data quality is good, if you are not authorized to be able to initiate projects that can then be prioritized, you may not even be able to get an AppExchange package installed in a quick and timely manner.
Now on the flip side, you may be an admin and you have the rights and you are close to the system. You may not know that there are tools and techniques out there that help you discover whether that field that was so urgent, that you just added and rolled out, is being used at all. Tracking user adoption of fields being rolled out, picklist values being rolled out, is something admins ideally would and should do if they're informed by effective techniques, and if their [inaudible 00:19:26] allow them to not just add a field but put in place the processes to monitor the usage of that field.
Honestly, one of the reasons I'm most excited to be on this podcast is to be able to talk about these things being not only possible, but fairly easy and not time-consuming. So we can broaden the conversation on how do we make sure the good work admins put in is actually being impactful. And admins can even be more empowered to monitor what is being used, what is not being used, what is being used poorly. To be able to raise these to their stakeholders and drive that level of awareness, so they're being more impactful on any line of business.

Mike:
No, I understand. Okay, so if I'm hearing this, depends on how big my backlog is and my requests are for new features, in your opinion, how much time should admins be spending ensuring data quality is happening in their org?

Mehmet:
I don't know that I can answer that with a number of days or percentage of time, as opposed to when they should look at their quality and act accordingly. Because each org is going to be a little bit different. One of the things I believe in is, if I'm a new admin, and you mentioned her earlier, she has a great LinkedIn post she did on what is the first thing you look at as an admin in a new org, and the answer is very broadly, "I like looking at, for the foundational objects, what can I tell about the usage in the current plus one year versus the life of the object?" That's a starting point data profiling scenario for me. And the reason is, when I look at accounts, contacts, opportunities, and cases alone, or let's throw in leads for good measure, it's going to give me a sense of how well adopted this org is.
It's a really good baseline. I want to know what fields are not used or no longer used, what fields appear to be used but not really used, because they only have the default value. The number of times I see 100% populated fields with one and only one value, is pretty significant. And to me that means it either is driving code somehow, or someone has set up a field with a default value and never looked at it again. I then look at what is my foundational health, and with the right tools on AppExchange, you can get much of these insights in a single business day. Then you have the ability to have a conversation with your manager, with your stakeholder, that is about starting the job and having an understanding of the foundational health. The other piece I look at, is if I'm starting on a new project, and my role as an admin is supporting the needs of that project, I'm going to focus on a scenario that is specifically for that.
Maybe we have HR cases, customer cases, and partner cases in our org, but this project is just about customer cases. I'm going to want to look at what I can tell about the cases that are coming in that have closed successfully or unsuccessfully, whatever the definition is for my business. And I want to look at the fields that are being consistently populated with high fidelity, and then compare the difference between successful and unsuccessful outcomes. The way admins can minimize the amount of time they are spending analyzing data is, reports are great, but creating reports just on field fill rates is incredibly time-consuming and not scalable. There are great free data profiling tools that are 100% native on AppExchange. Start with one and start running different scenarios to see what you can tell about the state of your data. And then the best way to make sure that you don't have to keep checking is, set up a monitoring scenario.
Salesforce CFJ has been talking about the importance of profiling, cleanup, and monitoring for a long time. When I go to roundtables, I see almost no one monitoring their data reliability. And with Flow, with the right profiling tools, it is something you can very easily configure, and detect deviations: whether your fill rates are going down, or an active picklist value you used to capture is no longer being picked up. Send a targeted alert based on understanding the fields that matter to a particular outcome. And I think three to six months from the beginning of this journey, people are going to start noticing a higher level of either user or admin engagement.
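
Here's a minimal profiling sketch along those lines, assuming you've exported an object such as Account to CSV with a report or Data Loader; the thresholds and file name are illustrative.

```python
# Profile fill rate, distinct values, and "default-only" fields for an exported object.
import pandas as pd

df = pd.read_csv("account_export.csv")   # hypothetical export of the Account object

profile = pd.DataFrame({
    "fill_rate": df.notna().mean(),      # share of records with any value in the field
    "distinct_values": df.nunique(dropna=True),
    "top_value_share": df.apply(
        lambda col: col.value_counts(normalize=True).iloc[0] if col.notna().any() else 0.0
    ),
})

# Fields that are 100% populated with one and only one value -- likely untouched defaults.
suspects = profile[(profile["fill_rate"] > 0.99) & (profile["distinct_values"] <= 1)]
print(suspects.sort_values("fill_rate", ascending=False))
```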

Mike:
Yeah. I also like that you point out the idea of a data owner. I think that's important. That's something that admins, when they're meeting with stakeholders, can sit down and really kind of empower one or two, maybe multiple people, within a team, along with the stakeholder, to really kind of be the overseer of that data. And these can be that next-level admin, maybe people that are looking to move up in the organization, and take ownership of that. I think that's a powerful idea.

Mehmet:
And the nice thing about what you mentioned, is it could be an admin who wants to increase their scope and impact. It could be somebody in a line of business. At Salesforce, for example, the owner for account and opportunity fits within sales operations.

Mike:
Makes sense.

Mehmet:
Yeah, because that is where you're going to be closer to it. And if I recall, the ownership for contact and lead sat in the marketing organization. Because that is where you're wanting to make sure you have a holistic understanding from lead to contact, and you're also being consistent and compliant. For shared entities or when you are starting new, an admin would make sense, especially admins that are close to their business and know what data matters or not. It is about increasing impact. And for anyone that doesn't know, you can capture data owner along with data sensitivity as part of your object manager and CRM metadata. A lot of people do not seem to know these attributes exist.

Mike:
Yeah. As we kind of wrap things up, we talked a lot about the doing. And sometimes, ironically, we get caught up on the doing, and we forget to actually look at what the goal is. And so how do you define success when you're doing data cleanup? I mean, I'm sure there's multiple ways to define it, but what are things that admins should look at in terms of creating that definition of success so that they can show progress to the stakeholders that they're making their way towards AI ready data?

Mehmet:
I was lucky to have a mentor that would say, "Unless you can define how you're going to demonstrate success at the end of your project, you're not going to start working on it." Now, we don't always have that luxury, but part of it is to be able to say what we need to demonstrate differently. We started the conversation with duplicate management. And if people are seeing too many duplicates, and the concern is inconsistent data when they look at one record versus the other, perhaps the definition of success is they see consistent, complete, correct data. Which makes it not about merging anymore, by the way. It's about data consistency and correctness, which is what is impacting the end users. And if you think about it that way, we can now start talking about hiding records over time. Because everyone is already looking at information the same way, rather than taking the riskier task of merging records and then worrying about, "Can I unmerge?"
If it's about an AI outcome, how would we know end users are going to be able to rely on the information? AI is not just one flavor of technology. We have deterministic solutions, we have probabilistic solutions, right? We have Einstein Discovery as well as Einstein Copilot. So at the end of the day, can we define a process that is human-repeatable, to then demonstrate how this is being automated at scale? This is one of the things that AI is very good at. If it is going to be about judgment calls, an AI may or may not be as good at it, so we need to look at what is the feedback loop that can also be provided back to an admin. And sometimes data readiness is about having just the right data and just the right metadata you need for completeness' sake. Einstein Copilot leverages the field description metadata in finding what fields to look at for information. Sensitivity classifications are also important.
And sometimes you need to add a few additional fields in order to inform what AI could do for you. Just last week, I gave a brief presentation on what if we can leverage Copilot to inform end users that, while their opportunity's close probability is at 75%, because as you move it along the stages it updates the probability percentage, AI could tell you when that opportunity was actually at risk. It could say, actually, your risk of closing on time was 50%, 75%, whatever it may be. The idea of adding formula fields, which most admins know about, to assess record-level data quality is something you can actually define and then feed into your prompts, so you can look at information completeness at the record level and then use that to inform your end users. The key message here is sometimes data completeness is about knowing what to remove, and sometimes it's about knowing what to add. And it all has to be about specific business use cases and specific business outcomes at the point of customer engagement.
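
The record-level quality score Mehmet describes would normally live in a formula field; here is the same idea sketched in Apex just to show the shape of it. The Contact fields and the four-field weighting are made-up examples, not anything from the episode.

```apex
// Hypothetical record-level completeness score for a Contact, the same idea an
// admin would usually express as a formula field and then surface in a prompt.
public class RecordQuality {
    public static Decimal completenessScore(Contact c) {
        Integer filled = 0;
        Integer total = 4;                         // the fields that matter for this use case
        if (String.isNotBlank(c.Email)) filled++;
        if (String.isNotBlank(c.Phone)) filled++;
        if (String.isNotBlank(c.Title)) filled++;
        if (c.AccountId != null)        filled++;
        return 100.0 * filled / total;             // e.g. 75 means one key field is missing
    }
}
```

The formula-field version is just a sum of IF(ISBLANK(...), 0, 25) style terms, which you can then reference in a report, a flow, or a prompt template to tell the end user what is missing.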

Mike:
Yeah. Oh, there's never a simple answer, is there?

Mehmet:
Rarely, and I think this is what makes this a fulfilling journey. None of us have all the answers, but there are positive patterns and anti-patterns out there. I love reading the admin blog and listening to this podcast. I love reading articles on Salesforce Ben, going to Trailblazer community events and in-person get-togethers. Because we share stories, we complain, but then we make suggestions on, "Have you considered this way of approaching it?" And this is how we keep learning and how we keep getting better.

Mike:
Yeah, I would agree. I think it's much like when, earlier this year, I talked to David about puzzle solving, and sometimes it's like you literally just have to sit down, put the puzzle down, give your brain a break, and then come back to it refreshed, and with a different perspective, and that changes everything. So I would agree. Mehmet, thanks for coming on the podcast and talking about a different perspective on AI readiness than what I've already covered. Because I feel there's a lot to cover, so I appreciate you sharing your insights and getting us, hopefully, AI ready.

Mehmet:
As you said, it is a journey. I hope this conversation helps all the listeners with some of the things to consider, right? It is not visual, we are not pointing to a roadmap. Much of this is really a mindset. And if anyone is curious about furthering the conversation, I am happy to be a part of it. Feel free to reach out to me.

Mike:
So that was a fun conversation with Mehmet. I love the idea of a data owner. I don't know why I haven't thought of that. Somebody that works with the stakeholders in every department, and kind of owns the data, right? It's like when you get a puppy, making sure that somebody is always going to keep their bowl of kibble full.
I guess the kibble is the data in this scenario. That's the best I can come up with, but I really like that idea. I think that's something that we as Salesforce administrators, when we're doing our quarterly check-ins with our stakeholders and talking about business objectives, should start bringing up, and really having that conversation even with the larger organization as we branch out and maybe bring in Data Cloud and have the conversations with IT. Data owners. That's the next thing we need to be talking about. But anyway, I loved this episode; I thought it was great, because it's more than just reducing duplicates and figuring out good data and bad data, as you heard. So let's go ahead and share this episode. Do me a favor, just share it. Just click share in whatever podcasting app you're listening to, and that way you can send it to your friends who are maybe thinking about doing some data stuff.
I promise you everybody's doing data cleanup. Now, Mehmet mentioned some things and some links. I'll be sure to put those in the show notes as always. And of course, if you enjoyed this episode, there are tons more episodes. Everything can be found at admin.salesforce.com, which is just your one stop for everything Salesforce admin, including a transcript of the show. Now, if you want to join the conversation, there is the Admin Trailblazer group, and that of course is in the Trailblazer community. Of course, the link is in the show notes there. So with that, until next week, all of you data fans, I will see you in the cloud.

 



Direct download: Understanding_the_Importance_of_Data_Health_in_Salesforce.mp3
Category:general -- posted at: 1:00am PDT

Why Mentorship is Crucial in the Salesforce Ecosystem

 

Today on the Salesforce Admins Podcast, we talk to Warren Walters, Salesforce MVP and host of the Salesforce Mentor YouTube channel and website.

 

Join us as we chat about what admins and devs can learn from each other and why everyone can learn to code.

 

You should subscribe for the full episode, but here are a few takeaways from our conversation with Warren Walters.

The rise of the Admin-eloper

If you’ve ever taken a peek at Warren’s content, you may have noticed that a lot of it is about learning how to code in Apex. So why have him on a podcast for admins? That’s dev stuff, right?

 

Warren has noted that there's an increasing convergence between these two roles. Personally, I've gained confidence in implementing code because AI assists in clarifying the processes involved. Similarly, for developers, using declarative tools such as flows and formulas can be much simpler than crafting solutions in Apex.

 

In short, we’re all becoming admin-elopers.

Why Salesforce Admins should learn to code

One of the biggest misconceptions that Warren wants to dispel is that only geniuses can understand coding. The truth is that some of the best developers he knows are people who never went to school for it and taught themselves everything they know.

 

As an admin, you don’t necessarily need to know how to build complex Apex customizations. A basic working knowledge of how programming works can get you far, especially when combined with all the declarative tools at your disposal.

Soft skills can help you build your career

Finally, Warren emphasizes the importance of honing your soft skills. A self-described introvert, he’s found that focusing on becoming a better communicator has helped him find his way into new roles and bigger opportunities.

 

He also urges you to think about your personal branding or, as he puts it, “how you want to present yourself to the outside world.” His YouTube channel has opened doors for him, but even something as simple as a portfolio can really help you stand out from the crowd.

 

There’s a lot more great stuff from Warren about his experience as a consultant and as a mentor, so be sure to listen to the full episode. And don’t forget to subscribe to hear more from the Salesforce Admins Podcast.

 

Podcast swag

Learn more

 

Admin Trailblazers Group

Social

Full show transcript

Mike Gerholdt:
This week on the Salesforce Admins Podcast, we are talking about mentorship and learning how to code. Surprisingly, not surprisingly, because admins and developers need to know the best practices for creating our apps and deploying the best technology for our organizations.
So I'm going to bring on Warren Walters who is a Salesforce consultant. He's an admin, he's a developer, he's a mentor and a self-described general geek. Now, Warren's on because he runs a really cool YouTube channel, and I came across his TikToks where he does Salesforce tutorials to help you understand and master the concept of different things in Salesforce.
He has this really cool site, salesforcementor.com, and he's just a really fun guy to talk to about the world of mentorship, the skills he's seeing, and the things people should be paying attention to.
Now, before we get Warren on the podcast, I just want to make sure that whatever you're using to listen to the Salesforce Admins podcast, make sure you hit that follow or subscribe button because then new episodes will show up on your phone or on your computer right away. So with that, let's get to our conversation with Warren.
So Warren, welcome to the podcast.

Warren Walters:
Well, hey Mike, I'm happy to be here. Super excited because I've been listening to the podcast for such a long time and I'm finally on it, which is, I don't know if it's a dream come true or an honor, but I'm just happy to be here.

Mike Gerholdt:
It's destiny.

Warren Walters:
I'll take that.

Mike Gerholdt:
That's what I'll call it, it's destiny. Well, I ran across your TikToks when I was posting stuff about the podcast and really loved some of the videos that you're doing and the topics you're talking about. So let's just start off with what you do in the Salesforce ecosystem and how you got started.

Warren Walters:
Sure. So my name is Warren Walters. I am a Salesforce engineer. I do lots and lots of development. I probably talk too much about development. Some of you may or may not have seen my face on YouTube, and that's where I primarily host a lot of my content.
And just from my side, I've been in development for about 10 years now. Various different companies, various types of companies, from consulting to ISVs to in-house. And more recently, I've been focusing on a lot of mentorship and training in the Salesforce development space. So that's a little bit about me. I can dive deeper depending on where you want to go.

Mike Gerholdt:
Well, I think the mentorship part is intriguing. You said development a lot, and this is an admin podcast, but we kind of all live in the same space now. I think what's interesting is when I started doing Salesforce things back in 2006, there was a clear line between here's things I can do with the UI.
Drag-and-drop GUI was a thing. Oh my God, it's WYSIWYG now, that was the new acronym back in '06. But then there were also really hard things that you had to learn. I remember going across to another part of my organization and talking to a developer who had to learn Python and how to deploy stuff.
So there was code and there was the hard way of doing things, and there was the unhard way of doing things, as people looked at it. Now those lines seem to be blurred. I mean, I'm looking at some of the Data Cloud stuff that we're coming out with, and you can very seamlessly connect things through a UI.
So let's start with that. Sometimes you hear terms where people mash together the names of the admin and developer personas, and they think just because it's declarative, it must be admin, and because it's code, it must be developer.

Warren Walters:
Yeah. So it's funny you bring up those personas and the mashing of admin and developer together, because as far as I know, it's called, or it's rising to be called, admin-eloper. I've heard that a couple of times [inaudible 00:04:25]-

Mike Gerholdt:
It makes me think of Jackalope. Have you ever heard of a Jackalope? It's a rabbit with weird horns.

Warren Walters:
Yeah, maybe that'll be their mascot in a couple of weeks. Dreamforce is around the corner.

Mike Gerholdt:
It is.

Warren Walters:
But yeah, so from my side, especially with the mentorship, a lot of what I like to do, or a lot of what I do, is help people understand that there's not just one type of person anymore. Maybe years ago it was like that, but now it is very fruitful for you to understand all sides of Salesforce. And this could be the configuration.
So knowing how to set things up, the fields, and the WYSIWYGs like you mentioned, but also the benefits of knowing some development things. Now, maybe you don't need to jump all the way in where you're writing custom integrations yourself, but just understanding those core fundamental concepts of development can really help you build out more complex solutions and communicate better with your teams.
And through mentorship, especially with a lot of admins, it's all about encouraging them and showing them different resources they can use to really understand some of the concepts that were traditionally a bit foreign to them or locked away in a separate area that's only for developers, which is not true anymore.

Mike Gerholdt:
They'll be developers, let's put that on the map. It's interesting because, I'll go back 18 months ago, before I had a really cognizant working awareness of AI, learning code meant copying a snippet of code, finding a developer friend, and being like, what does this do?
Now, I put a validation rule into ChatGPT just to have it double check what I was doing. And it can tell you back, you can copy snippets of code into AI and have it tell you what it's doing. So I have to believe that some of that acceleration for admins, just basic understanding of code is a little bit greater now that we have some tools like that, right?

Warren Walters:
Yeah, it's really been an explosion of the tools we have at our disposal to help us understand it a lot better. In the past, we had maybe things like Stack Overflow and different websites you could go to, or if you take it way back, you had to buy a book or something and try to read it. And that barrier to entry-

Mike Gerholdt:
The library.

Warren Walters:
That barrier to entry really stopped a lot of people from diving in and understanding certain things that were going on in Salesforce development and in code. But now with those other types of tools and even the tools that Salesforce is releasing, we're able to more easily understand different code and formula fields.
Even with our flows now, we're starting to be able to reduce all of the headache and all of the additional knowledge that you needed to have to be able to work with those particular items. Now, there are some benefits to going and getting that deeper understanding, really learning the fundamentals and branching out further into programming concepts.
But at least to get you started and get your feet wet, these AI tools have been really great for helping people get some encouragement, see if they're on the right path, and work up to more complex questions. Before, you needed to go to a developer friend to get that looked up.
You might come with a more refined question now that you're using AI, instead of just, here's the code, help me out. It's, I have this particular piece of code, it should do this. How does this look to you? Is it best practice? So the conversations are shifting a little bit more.

Mike Gerholdt:
Plus, also just deciphering some of the code that admins would look at. It's not foreign anymore, the I-don't-know-what-this-does, pages and pages of stuff. I can at least copy it and maybe have AI give me an idea of where to start.

Warren Walters:
Yeah, that's funny too, the starting piece, just because it's really about what it gives you. So in certain aspects you have to be a little bit careful with AI, because it could produce code in a different language other than Apex; you might get Python code back.
And if you don't know those fundamentals, it can really set you down maybe a rabbit hole or not be as helpful as you think. So it's a word of caution to a lot of my mentees. I definitely want them to use it, but make sure that you're still doing that due diligence to understand some of the basics of it.

Mike Gerholdt:
If you're having it generate code for you... I think I'm more in the translation part of the world. So let's start there though, with mentorship. What comes up most in the mentorship and with the mentees that you work with?

Warren Walters:
Certifications are always a big topic. What certs should they get and what should they focus on? What's next? So I think that one is really fun. And another big one is a lot of encouragement, especially for administrators that want to start to look in and dabble with code.
A lot of people have this perception that, oh, it's for the geniuses or only people that go to university, which is not true at all. I've met many, many developers that could code me into a box who have never gone to school, have just learned by themselves, and they're very passionate problem solvers and they really stick with that craft.
So a lot of what I do is encouragement and then giving people resources for, if you're trying to learn integrations, start with either this Trailhead module or this specific article and bring it back to me and let's see if we can figure it out together.

Mike Gerholdt:
Do you find when individuals are coming into the ecosystem maybe with a coding background, that it's less obvious for them to pay attention to some of the declarative tools that are already built in Salesforce?
Or is it intuitive to have them under... Is it natural to just look at everything first and then only go to code as a solution, or do they see everything's a nail and they've got a hammer and I'm going to code them into a box, as you said?

Warren Walters:
Yeah, it definitely starts out as everything is a nail and code is the hammer. It's funny because if you're in a lot of different orgs, especially when I was doing consulting, I got into a few orgs that had code written for very simple things that you can do in configuration, like creating a validation rule or sending an email, that kind of stuff. Just tons and tons of lines of code that were not necessary.
But whoever got in there first, their mindset was, okay, I know how to code, let me just stick with that. So for a lot of people that I talk with and mentor, especially if they have a coding background, that's their first idea, and that's one of the things that I have to educate them on: Salesforce has so many different tools at your disposal.
It's better to at least be familiar with everything that's available, like flows and the formula fields, and even just simple things like knowing how a lookup field works, especially if you're not coming from this sort of space, it can be a little confusing to understand what it is and how it works.
So I generally recommend going on that journey of starting at the beginning, especially hitting a lot of those beginner admin trails where you can learn the fundamentals and work your way up into a good spot of understanding all the tools that are available, and then you can jump into code. The code will always be there. There's plenty of reasons to use it, but you want to use the right tool for the right situation.

Mike Gerholdt:
And it's also, I have to think of just best use of your time. You could code escalation rules, you could code a workflow, but flow leaves you with an artifact that's easily upgradable and reproducible as opposed to something custom that, who knows, maybe something 10 releases down the line, Salesforce is going to change and now you might have to rebuild that Apex code.

Warren Walters:
Yeah, that's a big point, especially in consulting that you have to think about because a lot of times you may not be there one year later, two years later just because the contract or the project is ending.
So designing for the team that is going to be there is very important. If you're going to leave a ton of code with a team of admins only, that may not be the best solution for you.
Or there might be a little bit of in-between where you can build out the complex pieces inside of code, but also leave the administrative side or leave the ability for the administrative side to have configuration or custom settings that can manipulate the code.
All things like that are things that you need to start to think about when you look at the longevity of your code and the maintainability.

Mike Gerholdt:
Do people that you work with and start to work with, when they come into the ecosystem, do they know their path? Are they looking at consulting or being a developer first? Or is it just eyes wide open, help me figure something out, Warren?

Warren Walters:
A lot of it is eyes wide open. Lots of existing admins know that the developer path is out there. But for people just starting out, often they hear about development from other tech stacks and they know that it's out there, but it's hard to understand, where should I be going? What should I be looking at?
So there's a lot of education that goes on and there are so many different opportunities in Salesforce. So you need to try to find... Or I recommend trying out a bunch of things, but especially if maybe you have a background in project management or system management like databases and things like that. Take a look at how that translates directly over into a Salesforce career.

Mike Gerholdt:
Yeah, no, that makes sense. Often you start off with an idea, and I've had a lot of friends too that were admins for a while and then they see that consulting dollar sign and they start chasing the money and obviously you can do that in any career. So that's interesting.
You mentioned something that I wanted to think a little bit about, which is the topics that admins and developers should think about. So I started us off a little bit dumped into the deep end with AI, but we have the declarative side, we have the code side.
What is some of the stuff that admins and developers that you're mentoring aren't paying attention to and you're like, folks, the streetlight, the spotlight is on, you totally missed the sign on the side of the road. How did you blow past this exit kind of scenario?

Warren Walters:
That is a really cool topic to bring up. I think a lot of it stems from, one, everybody knows about AI; they probably are at least dabbling in it. If you're not dabbling in it, I would recommend at least looking at it. So that's one big piece.
But the other part is probably more, I want to say on the soft skills or it's really around communication, especially for a lot of introverted people. It may not seem like it, but I'm pretty introverted. But it's around how you can communicate effectively either with your boss or your teams or anybody that you're working with.
And that can be a hugely valuable asset to you as an individual, because it can help propel you into different types of roles that maybe somebody else who's lacking those skills, or still working on those skills, isn't able to jump into. What goes hand in hand with that is more personal branding as well.
So this is how you present yourself on LinkedIn, doing things like YouTube channels, having a blog and that can also propel you above the rest, especially in a competitive market. Having that awareness of where you're at and how you want to be presented to the outside world can be very important for a hiring manager to make a decision on.
So I recommend everybody work on a portfolio or have some sort of additional thing beyond the defaults of your resume and a basic LinkedIn profile, that kind of stuff.

Mike Gerholdt:
Yeah, I'm so on board with everything you just said because I feel like for a lot of my career when I was an admin, not only was it just understanding the configuration, but for lack of a better phrase, I'll say it was selling the configuration, really communicating to the organization, no, no, no, no. I know how to do this and this is what's best for right now based on what you told me and confidently communicating that.
And then to your second point, showing up. I love it when people look like their profile pictures. It means so much, because you think of how much you're online, and when you see, especially with a coworker, their Slack avatar all the time, and then you see them in person and they look the same, you're like, oh, I know I have the right person.
Because I've always joked that I'm an introvert, but I play an extrovert for work. I can summon up a solid eight or nine hours of extrovertness, but 5:30 at Dreamforce, the bell tolls, Mike is running down the stairs, glass slippers falling off, he's turning into a pumpkin. He really wants to get back to his hotel room and just have some quiet stare at the wall time.
But being able to show up and look familiar and then interact with people and that's how you network and that's how you get different ideas shared with everybody too.

Warren Walters:
I'm on board with that a hundred percent because at least for me, a lot of what you see online, a hundred percent of what you see online, I'm going to be the same exact way at a conference. As soon as you see me after I say hello, what is your name? I'm going to start spewing development and Salesforce right at you.
So I think that it is important, though, to be authentic wherever you're presenting yourself, because it's going to take a toll on you over time, especially if you're working at a place where you have to change yourself to do that. It's important to be at home as much as you can in where you work and how you're presenting yourself.

Mike Gerholdt:
Yeah, I mean, for the longest time I wore a red shirt everywhere and it was very easy to spot Mike in the red shirt. So I had this question down, but in hearing you answer it, and I've done a million of these podcasts, I'm going to ask it to you differently.
So one of the questions, and you probably get this too, is like, all right, so what is good places to start learning? I'm going to ask you that, but I'm going to give you the caveat of you can't say the word Trailhead.
And the reason I'm going to say that is, look, I work at Salesforce, Trailhead's table stakes. We all know to go there. Everybody in the community knows to go there. If you don't know to go there, you should go there. You're going to hear it at user groups. What are other places that you should go that are good places to learn in addition to Trailhead?

Warren Walters:
How much can I plug websites? How much is allowed? There are a few sites that I really love for either practicing Salesforce development or even Salesforce administration.
I'm a big YouTube person. If you've looked me up at all, I love video, that kind of stuff. So there are some really major channels on there that I definitely follow. Some of them are Apex Hours on YouTube. There's Matt Gary's channel, which is also very focused on Salesforce development, so also look at those.
And then, especially thinking more about when I'm studying for a certification or being more well-rounded, a lot of us know about Focus on Force, which is great. But what I like to do whenever I'm taking an exam or studying is, okay, maybe I'm doing some practice items, but I'm also actually building out the practice scenarios, maybe the exam question or something like that, inside a Salesforce org, so that I'm retaining the knowledge a little bit better than just clicking through a few different examples. So this works really well for both administration and development.
Just recreate the scenario the best you can when you're working through those. On top of that, there are some really great sites if you're looking to dive in and learn development. So there's freeCodeCamp.org, which is more HTML and JavaScript, like web languages.
But like I've been mentioning, once you learn the fundamentals of development, you can transfer them around to any language, and it will really help out in your configuration inside of Salesforce. So if you know how to do flows, even on a basic level: an if statement is an if statement, and a loop is a loop in every different language.
So you're able to translate some of those a little bit easier once you know how they work under the hood. I'm trying to think of some other ones. I know there are a ton and maybe I can link some down in the show notes and stuff like that.
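
A small illustration of Warren's point that the fundamentals transfer: the same branch-and-loop logic reads almost identically whether you write it in Apex or build it with Decision and Loop elements in Flow. The stage name and amount threshold below are just example values.

```apex
// The fundamentals transfer: a Decision and a Loop, written as code instead of
// dragged onto a Flow canvas.
List<Opportunity> opps = [SELECT Id, Amount, StageName FROM Opportunity LIMIT 50];

for (Opportunity opp : opps) {                        // Loop element
    if (opp.Amount != null && opp.Amount > 100000) {  // Decision element
        opp.StageName = 'Negotiation/Review';         // Assignment element
    }
}
update opps;                                          // Update Records element
```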

Mike Gerholdt:
I didn't mean to put you on the spot but to be honest with you, every time I ask a question I'm like, oh, go to Trailhead. It's like, where do you start? Well, what are you looking for?
Trailhead's been around, I think, almost 10 years now. To me, it's to the point where it's like the help and FAQ part of a website. The first time that you saw a help or an FAQ section on a website, you were like, oh, I wish every website had this. And to me, that feels table stakes. You should be able to do that.
But then to your point, there are things that you should learn like communication skills and presenting skills and personal branding skills, and some of that's on there, but there's also good sites and good places to go to learn stuff like that.
Last question, a little bit of a curve ball, but as a mentor, you've worked with a lot of people. What is one quality that is consistent across all of your mentees that seems to really drive their success?

Warren Walters:
I think one of the big ones is around persistence, especially in the Salesforce space, configuration and development. I subscribe to a notion of, let me give you just enough so that you know where to look and you can be very dangerous, but not give you everything to complete or solve the challenges or whatever wacky idea I've come up with at that point.
So knowing that there is a light at the end of the tunnel, there is a solution for every problem, especially in coding. We're not inventing anything new, and an if statement is an if statement; some of these things that we are creating have been studied and perfected over a long period of time.
So all you need to do is really find it and then use that solution and make that existing solution work for whatever your problem is. So understanding that idea of, okay, as long as I keep working at it, keep pushing, something will come from this that will put me in a better situation than I am currently, is really what I start to stress in a lot of the mentees that I work with.
I think it can get overwhelming to learn development, and maybe you don't feel like you're making progress, but a lot of times it's about looking back and reflecting on how far you've come to see some of the progress that you've actually been making, which is really cool. So I think that's a big one, right?
Persistence, and then knowing when to ask questions, that may have come up before. But you're working on your own, you've found a lot of resources, and you're going through and you end up getting stuck on one particular piece.
I think it's important, once you are completely stuck and you've done as much research as you can, of course, to reach out. And it's humbling, because maybe years ago I didn't like to ask questions. I was like, oh, I should know everything, or I should be able to figure this out on my own.
And I started progressing so much faster once I was able to say, all right, I've done enough research, I've looked at it, I'm going to ask a very educated question to somebody that has done this before, somebody who has been through whatever experience. It could be as small as making a formula field or as big as writing an integration to a third party system.

Mike Gerholdt:
Yeah, you're spot on. Persistence is right there. You said in that answer, educated question, and this actually came up, I want to say, about a month ago or so. I interviewed David, who does Wordle and Sudoku on YouTube and TikTok, and he also does coding, which is interesting. I feel like maybe a lot of software engineers and developers do Wordle and Sudoku.
But he said, in working with team members, he would rather have a team member spend 10 minutes working through what they know to try to solve the problem and then come to him with a question, as opposed to just immediately hitting a problem and going, how do I do this, and throwing your hands up.
And I think when I've worked with people too, it's, well, how would you work through this? Because you need to start putting those connections together, because every time something like this happens, there isn't going to be a Warren behind you that you can just turn around to and be like, now what do I do? So, educated question. That was really good.
Warren, thanks for taking time out of your day and being persistent and mentoring people and being a part of the great Salesforce community.

Warren Walters:
Yeah, Mike, it's been a pleasure and an honor and, I guess, destiny to finally end up on the Salesforce Admins Podcast. Super happy that I was able to make it out and spread the word about development. If you're scared about it, if you don't think it's for you, do not worry. I don't think it's for me either, right?
Everybody thinks that. Just try to take it one step at a time, or reach out to me. A lot of developers are very, very helpful in the Salesforce Ohana. So yeah, so happy that we finally made this happen.

Mike Gerholdt:
Thanks, Warren. So that was a fun discussion with Warren. I love the term educated question. Going back and really thinking through it makes me think of that podcast that I did with David on Sudoku and Wordle solving, which is thinking through what are all the possible ways I can solve this, exercising those, and then turning to my community and seeing how they can help me based on what I've done.
Because you might find a creative way of doing something. But I couldn't agree more: persistence, persistence, persistence. There is a light at the end of every tunnel, and I think his site is very inspiring. I just pulled it up, and the first thing it says is, remember, I believe in you. So, thank you, Warren, for being on the podcast.
Now, if you enjoyed the episode, be sure to click that follow or subscribe button so that new episodes are downloaded automatically. And of course, if you're looking for resources, look down below in the show notes. I'm going to link to anything that Warren mentioned, including his social profile.
But you can always find resources at admin.salesforce.com. That is your one stop for everything admin: release information, more podcasts, and a transcript of the show. Now be sure to join our conversation in the Admin Trailblazer group.
That is, of course, on the Trailblazer community, and you know where to find the link for that. That's right. It's in the show notes on admin.salesforce.com. So with that, I hope you enjoyed this episode. I enjoyed it a lot. And until next week, I'll see you in the cloud.

 



Direct download: Why_Mentorship_is_Crucial_in_the_Salesforce_Ecosystem.mp3
Category:general -- posted at: 1:00am PDT
