
Episode 15: Inside AI’s Golden Age with Stephen Straus

In our newest episode, Ron Green has a candid conversation with Stephen Straus, his co-founder and the Managing Director of KUNGFU.AI.

In this episode you will:

- Get insights into Ron and Stephen's journey into AI and why they opted for services over products, and learn about the pivotal decisions that shaped the company's trajectory.

- Gain a deeper understanding of Generative AI's role in today's transformative landscape. Ron and Stephen delve into its implications across various domains, from computer vision to natural language processing.

- Explore the significance of culture, psychological safety, and trust in fostering innovation, and discover how these essential elements contribute to KUNGFU.AI's success.

- Understand why ethical considerations are at the forefront of their AI ventures. Hear about their decision to turn down their first potential deal and the ethical principles guiding the company through industry challenges.

- Learn how companies can navigate technological advancements while making meaningful contributions to society.

Ron and Stephen end the episode by reflecting on whether we're experiencing the "golden age" of AI, drawing parallels to the early days of the internet.

Ron Green: Welcome to Hidden Layers, where we explore the people and the tech behind artificial intelligence. I'm your host, Ron Green, and I'm delighted to have my co-founder and the Managing Director of KUNGFU.AI joining me today, Stephen Straus. Stephen is a visionary entrepreneur whose career spans successful startups, venture investing, and important work in the nonprofit sector, where he applies his entrepreneurial spirit to tackle pressing social issues. Today, we're doing a deep dive into Stephen's perspectives on the ever-evolving landscape within AI, the importance of ethics and psychological safety when building high-functioning teams, and his dedication to making AI and its future inclusive for everyone. Stephen is a serial entrepreneur and former venture capitalist. He was a general partner at Austin Ventures during the dot-com era, when it was the largest venture capital firm between the two coasts. Stephen foresaw that AI would be the most significant technology development in our lifetime, and in 2017 he co-founded KUNGFU.AI, a management consulting and engineering firm focused exclusively on artificial intelligence. He's a graduate of Colgate University and received his MBA from Harvard Business School. Stephen lives in Austin with his terrific wife, Tina, and they have four great kids. Stephen, thank you for joining me today.

Stephen Straus: Well, thanks for having me.

Ron Green: I've been looking forward to this conversation for quite a while.

Stephen Straus: As have I. All right.

Ron Green: So we've been running this company, KUNGFU.AI, for over six years now. I blame you for the whole thing. It was really all your idea. Let's go back to the beginning. Back in 2016, way before the AI wave, you really foresaw it coming. What was it that attracted you to AI? What made you so confident that this was going to be important?

Stephen Straus: Well, that's a good question. And I really have no idea exactly what it was. But as I was thinking what I wanted to do next, as I was winding down my involvement in my previous startup, for some reason, I got conviction that AI was going to be the next big wave.

Stephen Straus: And I had lived through, and really had a front row seat to, the internet revolution during the 90s at the venture capital firm I was at. And for some reason, kind of my pattern matching brain kicked in, and I really got conviction around the fact that AI was going to be the next large wave. McKinsey and others say that this is the largest technical shift that any of us will live through in our lifetimes. And back then, long before people were saying that, I thought this is what I wanted to go do next. And I thought it was actually going to come much sooner than it ended up taking, because now we're in 2024, seven or eight years after I was thinking about this. But initially, I thought about building a product company in the AI space. And I'll tell that story. That's actually how you and I met.

Ron Green: Yeah, let me tee that up because I was kind of curious about that as well. So we met, we both have deep product backgrounds. We were both looking to do something in the AI space as our next ventures.

Stephen Straus: In the product space.

Ron Green: In the product space and within AI. And you became convinced that the timing wasn't great for a product company, and convinced me, rather well, that services was the right place. What was your thought process there?

Stephen Straus: Yeah, so initially I was thinking about a whole range of product ideas. And what I was finding, with my venture hat on, doing due diligence on these ideas, was that they just weren't converging in my mind into good opportunities. And a lot of it came down to the technical risk. So back then the technology was moving fast. It's moving even faster now. And I really worried that one random Tuesday, we would be a successful company. We'd raised a bunch of money. We were a leader in our product category. And we'd get blindsided by some kind of technical shift. And our technical asset would overnight turn into technical debt. And when I was in the venture world serving on the boards of companies that I invested in, I was in those situations. And those are very painful board meetings. And it's hard to pivot through those situations. And then there were a variety of other things that I was thinking about. One of the key other ones was that if I were to build a product company and it needed the data of its clients, then I would be in the services business, because you'd have to go see if they had the data. You'd have to go see if you could get it. You'd have to get it. You'd have to clean it up. You'd have to ingest it, train your model on it. And that's all before you could go on to the next sales opportunity. Well, that is not a repeatable sales model that's scalable like venture firms want for product companies. And again, I've also been in those board meetings where the company doesn't have the opportunity to grow and scale more than linearly, just like a services play. And so when I started thinking about those two things together, among other things, those negatives in a product context are actually positives in a services context. And so I really switched my thinking to starting a services company, which is something I had done before.
So this is my sixth startup, but second professional services firm. And when you and I met, I was ahead of you in that thinking and you were like, I'm going to build a product company. And I said, I was thinking about the same thing, but here are the reasons why I decided not to.

Ron Green: I know. I like to joke that it frustrates me, but you are absolutely correct about all of that. And we've seen that over the last, you know, six-plus years. I know a dozen-plus companies that put millions into product development and then were undercut by some advancement. And I know of maybe four or five that were undercut just last year by ChatGPT.

Stephen Straus: Yeah. And I know probably the same number personally. But then I also think about those charts that the investment bankers put together that have like 100 logos on one piece of paper across all the different categories of AI product companies. And I just think so many of those companies have seen better days, and many are not going to recover. I think billions of dollars have already been lost in the AI space, and many, many more will be lost. And I'm just glad that we're running a bootstrapped services company, where the more change that happens, the more we can be helpful to our clients, as opposed to getting eclipsed.

Ron Green: Yeah, it's really true. And the faster things move, and if anything they are accelerating, you know, the more challenging it is for all of us to stay on top of things and make sure that we're building products that are state-of-the-art.

Stephen Straus: Yeah, and it's also really fun and exciting and gratifying to be able to help our clients get competitive advantage in this fast-moving space, which is what you and I set out to do from the very beginning and the whole company is focused on.

Ron Green: Right. So, you know, most people out there are aware of AI at this point. In fact, they may feel a little bit saturated with how much talk there is of artificial intelligence today. But I think the vast majority of people really only know about maybe ChatGPT, maybe a couple other things. And they think generative AI is sort of really synonymous with AI, but it's such a more vast landscape out there. Do you have any thoughts on that and how companies can sort of avoid having blinders around generative AI?

Stephen Straus: Yeah, I'd be happy to talk about that. I'm going to go back to the internet era as a guide for this, because there are definitely not perfect parallels between the transition that happened as we entered the age of the internet and as we enter the age of AI, but there are lessons to be learned. And so, when I think back to the 96, 97, 98 time period with the internet, people's imagination for what you could do with the internet was, hey, I can replace my brochure. I have a homepage now, so I don't need to reprint my brochure. And I have email through AOL, and that's the internet. And, oh, you know what? I'm going to add a shopping cart to my website, and we're going to sell something. So, we're an e-commerce company. And so, those are very narrow point examples of what the internet is, but no one would say those are the internet. And now, 25 years later, we have a very broad, rich, and deep understanding of what the internet is and can do, and it's still evolving, obviously. So, with that as context, I feel like we're in the same place as it relates to AI. And so, people are thinking of ChatGPT and large language models and the ability to do generative AI for what you can do now, which is, as an example, type into a prompt and get a result back, as AI. But the way I think about it is that AI is a large and rich... There's a whole tool set in a toolbox under this heading of AI, and ChatGPT and large language models are definitely the newest tool in that toolbox. They're definitely the shiniest tool in that toolbox, but they're also the least mature, and maybe the least likely to bring enterprise value and ROI to companies today; they are definitely not among the top tools. And so, I think what will happen over the next couple of years is people and companies will come to realize that the toolbox is really quite broad in what you can do, with lots of other capabilities for developing ROI.
And what I would say also, Ron, is that people know this, but they haven't really necessarily put two and two together. On our phones, as an example, with social media, our feeds are an example of very advanced predictive analytics, right? They're doing incredible matching of what my preferences are with other people, and they're teeing up content that is statistically likely to be very interesting to me. And when we see the front page of an e-commerce site, we see things that we didn't even know we wanted, but we're like, oh my gosh, I want that. It's the same kind of AI-driven predictive analytics based on their profile of me. And so, we intuitively know that there are a lot more tools out there just from our everyday experiences, whether it's a self-driving car or being able to talk to your phone with speech-to-text. And these are all examples of AI, and they all have enterprise-wide applicability.

Ron Green: I think generative AI as a whole probably constitutes maybe less than 25% of the type of projects we're doing. We have a dozen-plus projects we're working on at any point, and they're broadly grouped into computer vision, natural language processing, predictive analytics, et cetera. I couldn't agree more. As important and as impressive as some of the advances within generative AI have been, applying them to the enterprise is still really challenging at this point, and I think a lot of companies may be disheartened if that's their first step into AI.

Stephen Straus: Well, yes, but we all can get personal advantage out of it by helping craft emails and any other things that we have to write. And of course, marketing and customer service with chatbots, et cetera, can get...

Ron Green: As long as there's a human in the loop.

Stephen Straus: Yeah, that's right. You can get enterprise value, but it's very clear that there's broad and deep capabilities and lots and lots of use cases across all functional areas in the enterprise and also across most if not all industries at this point.

Ron Green: And to be clear, you know, I think some of the shortcomings we're seeing right now within generative AI we're going to overcome soon, certainly within, I think, the next two to three years. Absolutely. So I want to pivot a little bit. Culture has been a really important touchstone for you and for me from the get-go at KUNGFU.AI. And I think that it is really one of our differentiators. But, you know, there's this sort of say-do gap I typically find within companies, where companies will talk about how important culture is, and then talk about how important their people are, and then work them eighty hours a week, right, and just grind them down, right? And there's one anecdote that I just thought was amazing. So we have AI for good initiatives at KUNGFU.AI where we give our time to nonprofits. And I remember when we were starting the company and we're just crazy busy and we're trying to get things off the ground. And I had pushed back a little bit, saying something to the effect of, do we have time for this? And I'll never forget, you said, Ron, if we don't have time now for nonprofits when we're starting the company, we'll never have time. We've got to lean into this. And that really blew my mind, and you were so right. So could you just speak a little bit to your experiences developing the culture at KUNGFU.AI, of which I'm so proud, and why that's so important to high-functioning teams?

Stephen Straus: Sure. Well, there are, I think, two parts to that question. There's culture, and then there's giving back. So I'll start with giving back and then I'll talk about culture. As we enter this age of AI, the people at our company, myself included, are among the most well-positioned people on the planet, because the whole world is shifting and we're at the leading edge of that. And I have a strong belief that people to whom much is given have much responsibility. And so as we enter this new age, because we are at the leading edge of it, we are in essence leaders in this space, even if we don't want to be. And what we do collectively, the people who are starting in this age, we get to define what this looks like, and at least the direction that it goes in. And the thing that weighs on me is that every technology throughout human history has been used for both good and ill. Like you can go back to the discovery of fire. Well, fire has been unbelievably important, right? You can cook things, you can warm yourself, et cetera, but you can also be an arsonist and you can burn down a house. And so every technology ever invented by people has been used for good and ill. And AI is arguably the most powerful technology that we have ever developed. And so it has incredible opportunities for great good, but also for really bad stuff. We're already seeing it. I mentioned social media feeds just a moment ago. The second and third and fourth order effects of social media companies trying to make more money are that the mental health of our kids, and of lots of adults, is not good. Our democracy and democracies around the world are challenged because of the algorithmic feeds in our social media. I could go on. But we're just getting started in the age of AI, and we're already seeing these really bad consequences. And so that was some of my thinking going into all this.
Now, as far as AI for good and our initiatives there, I actually continue to be disappointed that we haven't really figured out what we should be doing to have as large of an impact as I'd like to have. And so I'm very proud of the work that we've done, but as you know, I'm still struggling. I think we're still struggling to figure out what's the best use of our time and efforts and energies to try to give back. But if anyone's listening to this and wants to collaborate on that, you can think of this as like a bat signal into the sky, and you can respond to it. But I am proud of the work that we've done, and I really want to figure out what it is that we can do to have as big of an impact as possible to make the world the kind of future we want to live in.

Ron Green: Right, right. I want to go back to the second part of that question around the company culture. So, you know, without question, I think the culture we have at KUNGFU.AI is one of our differentiators. The people are just amazingly smart and talented, but they're also just some of the nicest, kindest people you would ever meet and want to work with. I think that culture has really differentiated our ability to deliver good work for our clients. Do you have any advice for companies that are trying to hire their own AI teams? This is, you know, very specialized talent; how can they go about finding and maintaining the type of talent that we've found and maintained?

Stephen Straus: Yeah. So I want to back up and give credit where credit is due, in that when you and I started this company with our two other co-founders, one of those co-founders, Steve Meier, said to me early on, he's like, Stephen, we really need to be thinking about our company culture and what we want that to be. And I remember just thinking, like, I had a blank stare on my face, and I'm like, I really don't know what you're talking about. And I look back on that and I think, as I said, this is my sixth startup, and I never once thought, and I say this with some embarrassment and shame, about being intentional about company culture. The company cultures that we ended up having were just happenstance. They weren't shaped by intentionality, et cetera. And this is by far the best place I've ever worked, and the culture is by far the best culture I've ever worked in. And I have really enjoyed learning about culture and have really enjoyed helping shape it here. And so to answer your question at the highest level, the answer is to be intentional. How do you build a culture that will attract and retain and allow world-class machine learning engineers to do cutting edge work? It's to be intentional about it. Now, where do you kind of go from there? So, double clicking. My view is that to do leading edge work, you have to be willing to fail. Because if you're trying something new for the first time, you are going to make mistakes. And so how do you help people be comfortable trying something that is not certain to work? You have to build an environment where they can say, I failed, I tried this and it didn't work. And ideally, before that, they would say, I'm not sure if I know how to do this. Can I get some help? Maybe someone else can give me help. And the opposite of that is arrogance and feeling like you have to pretend or bluff your way through things.
Those things don't let you do cutting edge work, or at least not for very long. And how do you set the conditions for being able to say, I don't know what I'm doing, I need help, I failed, what are we going to do next? Well, you have to be willing to be vulnerable. And vulnerability is just a critical component of this. And especially people at my advanced age and your advanced age entered the workforce when it was not common to be vulnerable at all. That's absolutely true. You did not talk about your personal life, the things that you're struggling with, et cetera. And shout out to Brene Brown, who's a vulnerability researcher. When I found Brene Brown, I just consumed as much of her podcasts and books, et cetera, as I could, and learned a ton about all this. And it led me to learn about a woman named Amy Edmondson, who I've had a chance to meet at this point, who wrote the book The Fearless Organization. And she makes the strong argument that the precondition for being able to have a culture where you can be vulnerable is to have a culture of psychological safety. And so as I put this whole puzzle together, it's: if you have a culture of psychological safety, you can be vulnerable, you can build a team that's willing to ask for help, take risks, be willing to fail, celebrate those failures. And with that, then you can do cutting edge AI. And so in my mind, there's this incredible irony that to do this incredibly technical work, you also have to have a strong emphasis on these very soft concepts and these soft skills that are all around emotional intelligence. And so that's a long way of saying that the advice I'd give to people who are trying to build AI teams is: you really have to make sure that you have a culture across the company, not just within a small department, that is able to support them. Because otherwise, it is very challenging to do this kind of work and be able to have success.

Ron Green: You know, I couldn't agree more. And we've talked about this so many times. One of the key things that I see that I think really makes a difference is when you have senior, experienced leaders within the team, let's say within engineering, who are willing to raise their hands and say, I got stuck, or I don't understand something, and they need help. And if you have that sort of top-down vulnerability, that's how you infuse it throughout your entire culture.

Stephen Straus: Well, I definitely agree with you. I have come to realize that, you know, whoever sits in my seat, right? And I'm in the managing director slash CEO seat, but, you know, whoever sits in my seat has to be, you know, a leader in establishing and defining and role modeling the culture. And to stay with that, you know, people will certainly listen to you, but they're much more likely to watch your actions. And so I actively think about where I spend my time, what meetings I show up to, and what I focus on in those things, because, again, people watch what you do much more than they listen to what you say.

Ron Green: I want to go back to the early stage of the company and talk about some of the decisions we made early on around ethics. In fact, we actually declined our first business opportunity for ethical reasons. We both believe that this is as important, not only to us as a company but to the future of AI, as the culture is, right? So let's talk a little bit about the importance of ethical considerations within AI.

Stephen Straus: Yeah, sure. Like I mentioned with culture, this was new to me. So, as we were talking to this company about potentially working with them, you know, without going into any details, because I think it's not relevant and the like, it just did not seem like a good fit for us. I wasn't comfortable with, you know, the ethics of the company. And I struggled with it. And I ended up saying, let's not pursue them and let's not work with them. And I think back to earlier in my career, and I've thought about this, you know, multiple times. In the first company I started, I did a bunch of work with Philip Morris, the cigarette company. And, you know, again, I look back in kind of shame and embarrassment, like, why didn't I think about not working with them? It never crossed my mind that I should say no to them. I did work for them among a bunch of other clients. But, you know, as I was making that decision to say, let's not work with this company, I thought, wow, I wish I had had the confidence and the presence of mind to do that much earlier in my career. And I'm really glad that we did it now, because, you know, we have, as you know, turned down more than a handful of companies, either because we thought that the companies didn't have kind of a strong ethical compass that aligned with ours, or because the projects that they were asking us to work on were things that, you know, were going to lead potentially to a dystopian future that we would not want to live in. And, you know, that really has led to something that has been, I think, really impactful for the company. And again, we made this up collectively on the fly, but it was something I'd never done before in my career.
But as we were talking about that first client, we ended up turning those discussions into what became a weekly, you know, meeting that we have that we call the ethics discussion group, which we have now done almost continuously every week for the past six years. And getting back to that point about, you know, people watch what leaders do, I realized that I needed to attend every one of those, or as many as I could. And I think I've attended 95-plus percent of those meetings over the last six years. I also really enjoy those meetings, so it's not hard to do. But, you know, we talk about the ethics of any projects that are coming up, if anyone raises any concerns. And, you know, we have also built out our ethics statement, and we have signed some pledges of things that we are not willing to work on. And we continue to talk about those things. And, you know, I think it's been a strong part of the foundation of our company that people can openly talk about whether they're worried about the kinds of work that might be coming in the door. And, you know, we give them permission to say, I personally wouldn't want to work on this project, or I wouldn't want to be even at a company that would work on a project like this. And, you know, in the end, I make those decisions. We call it the ethics discussion group, and I call it that for a reason, because it's not a decision-making, you know, body or organization. But I've rarely had to unilaterally make any decisions, because we talk about things and generally come to, you know, the right answer out of those things. But it's been a strong foundation, I think, for our culture.

Ron Green: I agree, because dissent is given room to express itself for any deal that may come through the pipeline, and we can talk about it in a way that, I would argue, in other companies might hurt feelings or ruffle some feathers. But at KUNGFU.AI, we just fly right through it because we really trust each other, like we talked about before.

Stephen Straus: Yeah. And that's where psychological safety and vulnerability are a key component of the culture that allows us to have those conversations and allows people to feel comfortable expressing those opinions, even in the presence of someone who sits in my chair. Not just me, but importantly, the position that I hold.

Ron Green: Exactly. So all right, we were both around for the dot-com bubble. And I think a lot of people who went through that transition look back upon those early days, before the internet was, quote unquote, monetized, as being maybe the golden age. Do you have any sense that we may now be in that same phase in AI, that we're going through what may be the pre-monetization golden age in some way?

Stephen Straus: Well, Ron, I think that that's really up to us, because implied in the question is that the world's going to get worse than it is today. And I hope that that is not the case. I think we're definitely in an early stage, and I think we are kind of naive, because we don't know what's going to be invented and how these tools are going to be used for ill, even if we have a sense of how they're going to be used for good. And what I do say to people who are leading companies, who are leaning into AI, is that we all get to define what this future looks like. And so I'm hoping that the future is bright and that the future will be more positive than today. And I think we've learned a lot from the internet: you can't just allow things to happen. You have to be intentional about it and you have to be willing to make hard decisions. It all comes down to leaders being willing to be strong. If you deploy a model and you find out that there are significant second- and third-order negative ramifications, then you need to pull that model, or you need to retrain it or realign it to the goal that you are trying to achieve. It is unacceptable not to do that. And I look at the social media companies and think that many of them, if not all of them, have failed in their civic responsibility to do that. And it's all in the name of greed.

Ron Green: Yeah. Profit and eyeballs.

Stephen Straus: Yeah. And I argue that you can build a more successful business by being ethical and doing the right things. People will be attracted to your company. I think there's a huge vulnerability in the social media space. Everybody knows these things are damaging us. If someone were to come up with a social media platform that wasn't, I think people would flock to it. And so I think that they have a big blind spot. And I think that same set of thinking is important to take into the age of AI as companies start to navigate it for themselves.

Ron Green: Well, as we near the end of the interview, let's transition to a little bit of a lighter note.

Stephen Straus: Yeah, sorry, I got a little heavy there.

Ron Green: No, no, that was great, that was great. We love to ask folks how they would choose to use AI: if you could automate something in your daily life, what would you pick?

Stephen Straus: Okay, so I was struggling with this, because on the one hand, I don't know that I want anything automated, because I think I have too much tech in my life. And I think, as people, we need to take our faces out of the screen and get more exercise, and definitely more face-to-face contact with people, especially coming out of the whole pandemic era, et cetera. So with that said, and this is very much a double-edged sword in what I'm going to express, but I told you that all technologies that have been invented by people have been used for good or ill. And I definitely foresee a day when we have AI capabilities that are able to coach us as people, almost like a therapist, to think through things and get ourselves unstuck from an emotional point of view, and help us find a positive headspace to try to be better people and to be better able to navigate the world, because the world is hard to navigate. And I think that there will be AI capabilities that are, in a lot of cases, as good in the moment as a good therapist at helping you get unstuck. And the reason that I am also worried about that is that those could be used for ill. They could be used potentially, as an example, to convince you of something that's not in your best interest or not in the best interest of society. And so what I'm hoping happens is that through appropriate regulation and the ethical use of AI, we can have a world that has these kinds of capabilities without the downsides that could clearly come with them. And so again, I think it's back to: we get to help make the future that we want to live in. And I'm very hopeful that we can make that a positive future.

Ron Green: Well, Stephen, thank you so much for coming on, partner. I really enjoyed it. I was looking forward to this for weeks.

Stephen Straus: Well, thank you very much for having me on.