Episode 3: Why Your AI Strategy is Like Waze


In episode 3 of Hidden Layers Ron sits down with KUNGFU.AI's Chief Strategy Officer, Dr. Benjamin Herndon, and Senior AI Strategist Daniel Bruce, to discuss artificial intelligence strategy. They explore the differences between corporate and AI strategy, how to determine when to jump in, and how to stay up to date in a rapidly moving environment.

Ron Green: Welcome to Hidden Layers, where we explore the tech and the people behind artificial intelligence. I'm your host, Ron Green, and I'm delighted to be joined today by two of my colleagues to discuss artificial intelligence strategy. Dr. Ben Herndon, chief strategy officer, has over 30 years of experience in organizational transformation and cognition. He's worked with organizations ranging from small startups to the US Internal Revenue Service, where he was the first chief research and analytics officer. Ben holds a PhD in organizational cognition from the University of Texas at Austin McCombs School of Business and was a research professor at the Georgia Institute of Technology. In his most recent role, he was director of AI strategy for Vista Equity Partners. He holds a BS in political science from Harvard University. Welcome, Ben.

Dr. Benjamin Herndon: Thank you Ron.

Ron Green: And I'm also joined by my colleague Daniel Bruce, who has over 20 years of experience designing and implementing state-of-the-art solutions using machine learning, computer vision, and natural language processing. Daniel specializes in designing solutions for drawing rich insights out of unstructured data. He's also a former CTO for a technology consulting firm and a product lead for a computer vision SaaS startup. Daniel holds a BA in mathematics from the University of Florida and a master's in computer science from Cornell. Welcome, Daniel.

Daniel Bruce: Thanks Ron.

Ron Green: All right, let's get things kicked off by talking about why is it even important at all for businesses to have an AI strategy?

Dr. Benjamin Herndon: Yeah, that's a great question, Ron. The reality is I don't think any company, or organization for that matter, can afford to keep a blind eye to the impact that AI is going to have on either their industry, their sector, or the economy at large. I think everyone realizes this is probably the most impactful innovation of the last century. And if a company is not in a situation where its stakeholders are currently asking, what are you doing about AI? They are going to be soon. And the answer cannot be, I haven't thought about it, or we don't know. Even if the answer is not right now for very good reasons, the answer still needs to be, we've at least thought about it. Now, I would argue that's probably an increasingly small minority of organizations. More often than not, I think companies really need to start getting their head around what the implications of AI are for their market, for their sector, of the economy, for the way they do business, their operations. So I think it's an imperative.

Ron Green: Honestly, Daniel, what's unique about AI strategy compared to other types of corporate strategy?

Daniel Bruce: Yeah, I think it's a great question as well. And certainly AI strategy is another component of corporate strategy. It's not... I agree with what Ben said, I don't think AI can really be seen as standalone or optional. But AI is a space that's changing much more rapidly than your typical other components. And so building an AI strategy kind of has this feel of trying to hit a moving target in ways that a lot of corporate strategy doesn't. So where there are key components of corporate strategy that don't change nearly as frequently, the way that that's actually executed changes at an almost frenetic pace. And it leads a lot of companies to feel almost paralyzed, thinking, what do we do when we can't miss out? But at the same time, that technology is changing so fast that it's hard to know, where do you jump on the bus? Where do you get started?

Ron Green: All right. With the technology moving so quickly, how do you ensure that if you're putting together an AI strategy today, that it remains flexible enough as markets shift and as technologies change and mature?

Daniel Bruce: Yeah. So I think, first of all, that is really hard to do well. I think it's almost impossible to do perfectly, because you'd almost need to have a crystal ball to be able to predict where the technology is going. I think there are certainly some fundamentals that don't change nearly as much. And so companies will never regret having a strong data foundation in place, having some of those fundamentals that are going to be basically evergreen, regardless of how you end up using that data. And then I think making sure that you don't get too attached to a particular partner or to a particular piece of technology, and don't get too entrenched, because you blink, and a month from now, three months from now, the entire world changes. I find companies that get overly entrenched around a particular application of AI tend to regret that.

Ron Green: Okay. Oh, fascinating.

Daniel Bruce: Yeah.

Dr. Benjamin Herndon: I would add to that, that don't focus too much on the technologies. Right. Because the technologies are changing so fast and people tend to get very distracted by the changes in the tech, and then we'll tend to look, it's like a hammer looking for a nail. Right. Where can I apply this technology? If you focus on the capabilities, what is it about the organization that enables us to do AI well? Then those will adapt and flow as the technology changes. Also, back to foundational fundamentals. Always bring it back to your core value levers. What is it that your company does well and is already distinguished by in the marketplace? And make sure that your AI is closely aligned to those.

Ron Green: Terrific. Given how quickly the market and technology are changing, how frequently should AI roadmaps be revisited?

Daniel Bruce: In a word, very frequently, almost painfully frequently. To that point, AI strategy has this feel of, if you take too long building it, by the time you get to the end of building it, it's already obsolete. And that can feel painful and almost frustrating, just the pace that it changes. But in general, a lot of those changes and a lot of the things that push strategy to need to be revisited come from external sources. And so you won't have to wonder, like, do we need to go back and revisit our AI strategy? You're going to see the latest announcement from OpenAI or Microsoft or whoever it...

Ron Green: ...may be, or maybe even your competitors.

Daniel Bruce: Or your competitors, and realize, shoot, we just got leapfrogged, we've got to back up, we've got to rethink this. And so I think, in short, it has to be visited very often. To Ben's point, I think the fundamental value levers don't change nearly as frequently. And so it's really a matter of not redoing your entire AI strategy, but going back, analyzing what's changed, going back to those fundamental levers, and then seeing what do we need to adjust to stay in alignment with where the landscape is today.

Dr. Benjamin Herndon: Yeah, I think to add to that, I think the use of the term roadmap still makes sense when you're talking about AI. But I think the way we have conceptualized and applied roadmaps in the past is more about move to point a, then move to point b, then move to point c. And this is more fluid than that. Right. So you don't really get to stop and implement and then look around and then adjust the roadmap. It's more like the flow of a river. The only thing you can do is make sure it's continually aligned to the fundamentals and the things you set out as guideposts.

Daniel Bruce: Yeah. An analogy I'll use often is, it's kind of the difference between old-school Rand McNally maps, where you could take a paper map and chart out the optimal path. And AI has much more of the feel of Google Maps or Waze, where traffic patterns are changing in real time. What that plan looked like when you left your house and what it looks like five minutes later is different. So you kind of have to constantly reassess. Your goal doesn't change, but the way that you get there changes very frequently.

Ron Green: I want to steal that analogy.

Daniel Bruce: I love that.

Ron Green: Okay, so getting executive buy-in is critical to any type of strategy, but there can be challenges at this stage. With AI moving so quickly, what are some ways to overcome that and get alignment at the executive level?

Dr. Benjamin Herndon: I think the biggest key is patience. Right. Because most executives probably are not native to the age of AI. They are trying to get their heads around this in a way where they feel like they know enough, and trust enough, to say they're bought in, and they're not willing to do that blindly. But getting there is not necessarily a linear path. Right. So they will go through processes where they internalize a hypothesis: here's how I need to think about AI. And then they'll read it back to you, and you'll kind of help them modify it. But throughout this process, you're just patiently helping them wrap their head around this. Someone once said that it's sort of like being a therapist.

Daniel Bruce: Yeah, absolutely. And I think part of what makes that challenging, too, is that within an organization, it's almost like there are five, six, seven different languages being spoken around this. So you've got data people that are talking in data terms. You've got AI people and machine learning folks that are speaking a different language. You have UX and product folks that are speaking different languages. You've got business folks that are thinking differently. And so finding a lingua franca that can be used to make sure that what executives care about and how that gets built and ideated on by a technology team becomes really critical for building alignment. Yeah.

Dr. Benjamin Herndon: I think it's further complicated by the fact that nobody really knows where this sits in the organization. Most previous evolutions in corporate strategy, it's been fairly clear, okay, this belongs in the IT department or the tech department. This belongs in product. This sort of belongs everywhere and nowhere at the same time. So figuring out buy in is looking at eight to 20 different agendas across the organization and really trying to make sure that they're each understanding what this means to them.

Ron Green: Okay, so that's a perfect segue to my next question, which is, so how do you integrate AI strategies into the broader set of initiatives and strategies within organization?

Dr. Benjamin Herndon: I would say that they should be hopelessly intertwined. Right. What AI is doing is really changing the way that your strategy goes to market. So before, maybe your corporate strategy was, we're going to be the lowest-price competitor or the highest-value offering in the marketplace. Those things don't necessarily change. It's just, what does that mean in terms of how we leverage AI to accomplish those things now? And that's what I mean by alignment to value levers. So we have to be crystal clear about what got the company to this place. What's the secret sauce? And the AI should reinforce every aspect of that. So I don't see them as fundamentally different. Where they will deviate a little bit, at least for now, would be maybe around M&A. But I think we're going to increasingly see M&A focused on the acquisition of AI capabilities.

Ron Green: We're going to cut this out because that's awesome. That was so awesome. All right. That actually got me thinking. All right, so edit that part out.

Dr. Benjamin Herndon: No, keep it in.

Daniel Bruce: Yeah.

Ron Green: All right. So Ben, another question for you. What are some of the biggest challenges companies face when developing and deploying AI within their own organizations?

Dr. Benjamin Herndon: I typically say that it's a lack of appreciation for the complexities involved beyond the engineering. And again, this kind of goes back to the idea of, don't focus too much on the technology. We tend to get very excited about what the technology can do. Can we make it work with our data? Can we add it to our service offering or our operations or whatever it is? But that tends to be to the exclusion of everything else that needs to be considered when you're talking about AI. And this goes back to everything from UI/UX. Right. How are people going to interact with this at the product level? How is this being governed from an ethics and bias perspective? How do we monitor the AI once it's out in the field and maybe being exposed to changing data landscapes that make it less effective over time? There is an endless constellation of stuff that has to happen for AI to succeed, and that's usually why it fails.

Ron Green: Yeah, well, I see the same thing on the technical implementation side, which is, if you just focus on the modeling, if you just focus on the technical capabilities of the AI development, and you don't pull in the product team, or you do pull in the product team but you don't pull in the deployment team and DevOps, it almost never ends well. You almost invariably wake up and realize that you've made critical mistakes that will probably delay production by, like, six months.

Dr. Benjamin Herndon: Yeah, I think one of the other keys is a lack of alignment or agreement, internally and with partners, around what success looks like. We get so focused on the "can we do this?" that we forget to ask, okay, well, how are we going to know if the juice was worth the squeeze? And particularly as we look at making subsequent or incremental ongoing investments to either expand the AI or continue to support it. So we've got to look way beyond "it works, it's alive, let's put it out in the marketplace" and think about, okay, what is this going to look like in terms of moving the needle, so that we know we can look back and say, that was a win?

Daniel Bruce: Yeah, I think that's such a good point, too. And I think one of the patterns that we see is that for companies that are early in their AI adoption, the muscles and the muscle memory that you build creating that first AI solution tend to focus so much on the technology. So, understanding the data and the tools that you use. And there's this moment where you realize, we did it, we built it, we got something out. And that's amazing. But that same muscle memory does not translate well to actually valuable solutions for the long term, because the muscle memory that you have to build for those tends to focus more on should we build. Like Ben was saying, are we building the right things? Are these things actually moving the right needles for our users, for our business? How do we make sure that this thing that worked great in our test environment actually works in production? And I would say that some of that is typical for technology. There's that old 90/10 rule for technology, that so much of the work takes place after the initial development. But it's so much more true in the case of AI, where something that worked today could be obsolete, or even in some cases dangerous, not because of technology factors, but because of data factors or customer factors, because of the way that the underlying data shifts. And so building out that capability tends to be very hard. And there's a mental shift, in that what it took to get to step one and what it takes to get to step two are very, very different. So you kind of have to unlearn some of those habits, and that's a little bit unnatural. Yeah.

Dr. Benjamin Herndon: I think it's also worth noting that it's very natural for people to internalize the insane pace of AI development as a sense of urgency. That's not to say that there isn't a first-mover advantage in this marketplace, but if we focus too much on addressing and responding to that urgency, you're going to put models into production and into the marketplace that are potentially even risky. Maybe they're truly biased and they're having harmful impacts on people. Right. We've seen this with some of the earlier LLM releases. I think when we look at UI/UX, we have so many examples of, the model works, it's going to be great, but people don't understand it. They don't trust it. And in fact, it hurts adoption and it hurts the product's receptivity.

Daniel Bruce: And I think part of what's challenging about that is, as Ben said earlier, just defining success is such a key part of this, because in that early stage, success is just getting something to work. And then there's this moment when you get it to work and you realize it hasn't moved the needle. Or worse yet, it actually harmed our business. And that's when reality sets in. That's when business folks, or customers, or product folks tend to step in and say, if this is winning, if this is success, what would failure look like? Right? And that's the moment where we find a lot of customers. And frankly, we love working with people like that. But those are hard hurdles to get over, because, again, it's not just about what can be done. It's not a technical problem. At that moment, the impetus tends to shift toward business folks and UX folks and product folks to think, how do we do this in an effective way?

Dr. Benjamin Herndon: Yeah, it's really a slow down to speed up kind of environment.

Ron Green: Okay, so that's fascinating. So most companies in the world have barely begun taking those first steps into using AI in any part of their business, aside from maybe ChatGPT, and a lot of businesses I know are blocking that at the firewall. But as businesses get serious about adopting AI, are there one or two or three things that they might want to look at now to smooth that transition in the future?

Dr. Benjamin Herndon: Particularly if you haven't taken any steps yet, I think the key thing is to start thinking about your network, right? Putting together a collection of advisors, whether it's people on the board who maybe understand this space, partner companies like ours that maybe you hire to do a strategy engagement before you do engineering, maybe it's coalitions out of academia that can be used to advise: how much should we be thinking about XYZ? How much should we be concerned about XYZ? And again, I think one of the core things is, don't rely too much on technologists, but go back to your strategy. Right. So try to project, what is this going to mean for our company, our business, in the future? And then that's going to tell you, okay, here's where we really need to be experimenting. Here's where we really need to be more aggressive about having the conversation, even if we're not deploying anything yet.

Daniel Bruce: Yeah, I totally agree with that. I'd also say for technology teams, in particular, nobody has ever regretted having too much quality data. And so, for technology teams, if you find yourself in a situation of we don't know what AI thing we should build, focusing on getting quality data sets you up really well for whatever the next LLM or multimodal LLM or whatever the latest and greatest is going to be, never regret having that data available.

Dr. Benjamin Herndon: And I think you maybe intentionally, I hope intentionally, said it's about getting the data right. All too often we think, can I do AI? is a question of the data I already have. Whereas for most of us it might mean, hey, for what I want to do in the future, I need to start collecting this data, and I need to start doing it today so that in three to five years, I have the data set that I need to start building the AI. That is critical, I think, to this idea of never regretting having your data house in order.

Ron Green: Yeah, so true. So true. Well, let's end on a light personal note. I want each of you to tell me, if you could have AI right now, automate any part of your life, what would you pick?

Dr. Benjamin Herndon: I'm going to let Daniel go first on this.

Daniel Bruce: Oh, man. So that's a hard choice. Too many things to choose. I think I would probably have a robot that can do laundry. That's one of the most necessary and most painful parts of my life.

Dr. Benjamin Herndon: I would clone me and Daniel, so impressive as AIs, because we are getting the work faster than we can deliver it.

Ron Green: That is so true. I see that every day. You guys are unbelievably in demand. But this was terrific. Thank you both so much for joining me. I just had a blast and I really appreciate the insight.

Daniel Bruce: Absolutely.

Dr. Benjamin Herndon: It's been a pleasure. Thanks, Ron.