In this episode, your host Ron Green is joined by colleague Alex Olden to dissect the multifaceted world of data science and its intersection with product sense in AI development.
Alex, a senior data scientist at KUNGFU.AI, shares insights on anticipating user needs, identifying opportunities, and aligning AI development with customer requirements. With a background in English and applied statistics, Alex brings a unique perspective to the discussion.
Episode Highlights:
-Understanding product sense through real-world analogies.
-Identifying opportunities through user-centric approaches.
-Balancing data-driven insights with intuition in AI development.
-Managing stakeholder relationships for effective collaboration.
-Navigating challenges unique to data science-driven product features.
Find Alex's blog on Product Sense here.
Ron Green: Welcome to Hidden Layers, where we explore the people and the tech behind artificial intelligence. I'm your host, Ron Green, and I'm happy to be joined today by my colleague, Alex Olden, to discuss the multifaceted world of data science, and specifically, how it intersects with the concept of product sense within AI. Today, we're diving deep into the realms of opportunity identification, product evaluation, and how to truly align AI development with user needs. Alex is a Senior Data Scientist at KUNGFU.AI, where he embraces the catch-all nature of the discipline. He's equally interested in exploratory data analysis and statistical modeling, and he's just as passionate about causal inference as he is about data visualization. Above all, though, Alex thrives in projects that hinge on collaboration and communication. Alex holds a BA in English and an MS in applied statistics from Villanova University. Outside of work, Alex enjoys writing, cycling, and relearning how to play the piano. Welcome, Alex, thanks for joining us.
Alex Olden: Sure, it's great to be here.
Ron Green: So you wrote a blog post recently where you tied together your experiences teaching with your experiences in data science, and you used a great analogy to introduce the concept of product sense. Can you take us through that analogy and then define product sense to tee up the rest of the conversation?
Alex Olden: Yeah, for sure. The analogy that I used, the short version of it, is anticipating the needs of a student. So if I see a student walk into class, he starts writing something down, his pencil breaks. Instead of waiting for him to raise his hand and kind of disrupt the flow of the start of class, I run over, grab an extra pencil, put it on his desk. No words or communication necessary, problem solved, we're back in business. So in the same way that I was anticipating the need of a student there and solving the problem as quickly as I could, in terms of developing AI, I think it's really helpful to try to anticipate the needs of customers. In our case, that can be the client that we're working with or our client's customers, their users, if you will. And that can play out in a few different ways. One is opportunity identification: looking at a set of data and saying, hey, we've got this data, that means we could build this product for this user base. But another way of thinking about it is looking at existing products or services. I want to be clear that when we talk about product sense, that can mean actual products or services as well. It's thinking through: here's an existing product or service, how well does it work? Let's kick the tires on it. Is it performing well? Is demand for it high and healthy? If it's a service, are people renewing it? Things like that, to have a good sense of what our clients or their customers need and want and are responding to. This can be data-driven: we can go into exploratory data analysis and see how these products are performing, whether people are using them, whether they're re-upping subscriptions, things like that. But a lot of it can also be instinct- and experience-based too. So I don't want to say it's all data-driven. A lot of it is getting a feel for it as well, I think.
Ron Green: In your blog post, you mentioned the importance of identifying opportunities using product sense. How do you go about approaching and identifying new opportunities with your clients?
Alex Olden: Yeah, so there are a few things to keep in mind. One is always thinking about the user. I heard a product expert speak a while back, and they said, if you're designing a basketball and you think about the basketball, you've already taken a bad first step. You want to think about the user, the basketball player, and think about what their experience is going to be like. You can take that analogy as far as you want, but I think it gets the point across for now. Another thing that I'll say is it's really important to avoid solutions in search of problems. Sometimes when a new technology comes out, or you're playing around and you learn about this new algorithm, you're like, oh, this is so cool, I need to go find a way to use it. And sometimes it's almost a good fit. But if it doesn't really fit, and it doesn't ultimately serve the end user and their experience in a beneficial way, you've got to just hit pause and wait for another opportunity to use it.
Ron Green: Right, right. That idea of having a solution in search of a problem.
Alex Olden: Absolutely. And the last thing that I'll say is to follow the data too. Let's say you run an online kitchen supply store, and you see a lot of customers manually adding a food processor to their cart and then manually adding a peeler, presumably to peel the food before it goes in the processor. That's a good sign that those products should be bundled. And if you see customers putting a food processor in their cart, you should recommend throwing a peeler in there too. That's where the data can really influence product development as well.
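To make the cart example concrete, here is a minimal sketch of the co-purchase signal Alex describes, assuming a hypothetical order-level table; the column names, products, and counts are all illustrative:

```python
import pandas as pd
from itertools import combinations
from collections import Counter

# Hypothetical cart data: one row per (order, product) pair.
carts = pd.DataFrame({
    "order_id": [1, 1, 2, 2, 2, 3, 3],
    "product":  ["food processor", "peeler",
                 "food processor", "peeler", "cutting board",
                 "food processor", "stand mixer"],
})

# Count how often each pair of products lands in the same cart.
pair_counts = Counter()
for _, items in carts.groupby("order_id")["product"]:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

# Pairs that co-occur often are candidates for bundling or for
# "customers also added..." style recommendations.
for pair, n in pair_counts.most_common(3):
    print(pair, n)
```

In practice you would normalize these counts (for example with the lift or confidence measures from association-rule mining) so that universally popular items don't dominate the pairings.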
Ron Green: How do you approach the problem of evaluating product opportunities on your projects, actually assessing them?
Alex Olden: Yeah, that's a good question, because it can be a little nebulous, trying to predict the future a little bit. But I think it's important to, again, look at any existing data that you have to see if there is demand for a product, or if there are similar products being used that are maybe doing well but not quite fitting all of a user's need, whether there's additional value that could be created by adding another offering, things like that. From our end at KUNGFU.AI, of course, we spend a ton of time doing subject matter expert, or SME, interviews. That's a really good way to get to know our clients and their needs and the needs of their customers, so that can fill in a lot of it. And then there's communication along the way. We usually have a regular weekly meeting to give an update on here's what we've done, here's where we're going, and to get feedback from our clients right away. And we have a lot of asynchronous communication too, just, hey, we put together a model, here are the results, let us know what you think when you have a minute. So constant communication, but not over-communication, right? We always like to defer to the client to get a sense of what pace they want. Another thing that's important is, from the beginning, after those SME interviews, when the project plan is laid out, to put together this kind of holy grail document of here's the goal: everything that we do should be rowing in that direction, and anything that is not, we need to reconsider. That way, we're on the same page. And it also puts a lot of power, I think, in the seat of the client, because it makes the project client-driven and KUNGFU.AI-guided rather than the other way around. We want to facilitate, we want to enable. Obviously we're doing some work for them, but we don't want to be straight-up telling them what to do too much, because they know their users better than we do, for sure. So that's a key part of it as well.
Ron Green: Yeah, I couldn't agree more. And a huge part of it I always feel is the fact that we're almost invariably working in a really, you know, complicated domain that our clients understand at a level that, you know, we'll never get to.
Alex Olden: That's a good point.
Ron Green: You know, and so making sure that we walk in with an open mind, let them be the domain experts, and drive the initiative from a return-on-investment perspective, not just leveraging technology for technology's sake. So what about the statement that AI development should start with the end user in mind? How do you make sure during this process that that's actually done?
Alex Olden: Yeah, a lot of it has to do with what I touched on in the previous question about beginning with the end in mind, the end in this case being the user, and having everything that you do flow from there and point in that direction, and documenting that stuff early on. Then when it comes time to get into the work, and I should clarify I mean the technical work, it starts with your exploratory data analysis. You want to get in and ask: is the work that I'm doing, digging through the data to learn more about this company, this client, this customer, what have you, pointed in the direction of the goal that we've previously established? If not, you need to recalibrate. And then from there, it's checking in and keeping the customer or the client updated along the way to make sure that they're seeing what we're seeing, understanding it, and agreeing that it's useful, and always keeping that focus on the end customer or user. And that's just the beginning. There's a whole process, of course, to development that comes after exploratory analysis: there's testing, there's development, there's prototyping, there's experimentation, checking the validation metrics at the end, all that stuff. At each one of those steps, it's got to come back to: how does this benefit the user? Where are they going to fit into this? And making sure that it really does serve them.
Ron Green: I see this quite frequently, where maybe you're halfway through the journey on a product or feature development cycle and you can kind of lose sight of what the initial impetus was, right? And what the user will receive or get out of this. You can lose your way even though you may have had a firm initial foundation. So when you're approaching the early stages of data exploration, how do you tie the analysis back to the user to make sure whatever AI tool or capability you develop does serve that end user?
Alex Olden: Yeah. So just to dig a little deeper on the EDA part of things, I mentioned previously doing EDA in the direction of a certain thing. That's the first point. I do a lot of EDA as a data scientist, of course, and when I dig in, there can be tons of data. A good way to cull it and focus a little bit is to think about which pieces of data, or which data sources, are going to be associated with that previously agreed-upon goal, which is whatever the user is going to be using the product or service for. That can winnow it down. Not to say we're just disregarding important data, but that's how you make decisions about what to really dig into. And then from there, of course, you're going to look for correlations between different features, null rates, what's clean, what's dirty, what can be used, what might need a little bit of touching up. But you're really looking for early descriptive statistics that will start to answer questions about either user experiences or user preferences. So, to go back to the hypothetical online kitchen supply store I mentioned previously, if you see users buying certain types of products together, that can be a key insight into user behavior that may serve you in the long run. It's really about making intentional decisions about which data you dig into, prepare, and organize, because that's going to drive everything downstream: algorithm selection, development, testing, all in the direction of answering that North Star user use case in the end.
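As a rough illustration of the first-pass EDA Alex is describing, here is a short sketch; the file name and columns are hypothetical stand-ins for a client's order data:

```python
import pandas as pd

# Hypothetical order data; the file and column names are illustrative.
df = pd.read_csv("orders.csv")

# Null rates per column: what's clean, what's dirty,
# what might need a little touching up.
print(df.isna().mean().sort_values(ascending=False))

# Correlations between numeric features, to spot
# relationships worth digging into further.
print(df.select_dtypes("number").corr())

# Early descriptive statistics aimed at the agreed-upon goal.
# If the goal is bundling, for instance, items per order matters.
print(df.groupby("order_id")["product_id"].nunique().describe())
```

The point is less the specific calls than the ordering: pick the tables and columns tied to the North Star goal first, then profile those.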
Ron Green: Okay, okay. Have you ever developed a solution and then once you were done with it, you realized you really didn't have a customer need, you kind of missed the mark?
Alex Olden: It's tempting sometimes, I will say, but I've not fully gone down that road, because I've checked myself before I wrecked myself. It's really tempting when a new technology comes out to say, oh, this is super cool, I've got to find a way to use this, I can't not use this. So there's that temptation. I think that can come from the development side that we're on at KUNGFU.AI, but probably also from some customers and users: ChatGPT is super hot right now, so it's really tempting to think, if I'm not using ChatGPT, I'm doing something wrong. But to clarify, in the world of AI, ChatGPT is not all there is to AI. Even a calculator is AI, in a sense. So you can start there, ChatGPT is at the other end, and in the middle you've got your bread and butter: decision trees, logistic regressions. They're great at solving predictive problems too. Using the right tool for the right job is essential, and that's a good way to keep things in check, because otherwise we can waste a lot of our time as developers of AI doing things that aren't useful for our clients. On the client side, to give a little bit of advice there, it's really important to see the long term and be intentional about whatever AI tools you adopt. If you put time, money, and resources into adopting ChatGPT and it's not useful, then one, two, three years later you won't have a useful product for your customers, because they can't use it. You may want to be more careful and adopt something a little less flashy that gets the job done, because that's going to serve your customers one, two, three years down the road; you're going to be getting a return on your investment, and you're going to be in a really successful place. I think about this a little bit like meme stocks versus index funds: slow, steady, long-term growth, doing the right thing in the long run, versus throwing a lot of money at something that may be totally devalued in three months. Not that ChatGPT will be devalued, but it's more about getting what you can, getting the best out of your monetary and other investments in these AI tools, and playing the long game.
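As a sketch of the "bread and butter first" idea, here is what reaching for the simple tools before anything flashier might look like; the dataset here is synthetic and purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in; in practice this would be the client's features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Try the bread-and-butter models before anything heavier.
baselines = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]
for name, model in baselines:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")

# Only reach for a larger model if these leave real accuracy,
# or needed interpretability, on the table for the use case.
```

If a simple baseline already clears the bar the user cares about, the flashier tool is a cost, not a feature.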
Ron Green: Okay, so I couldn't agree more with that. Don't just gravitate and jump on the bandwagon for whatever the current hotness is. So this segues perfectly into my next question, which is, when you're working with your stakeholders and you're building relationships, that can kind of be a critical component to being able to have frank conversations around what may be applicable or not. So for example, how do you manage the relationship so that you could tell your client, look, I think, you know, ChatGPT integration, that might not be the right move at this point. We might need to do something else, even though they internally may be getting pressure to leverage some new hot technology.
Alex Olden: Yeah, that's a great question. I mean, I'd say what I just said in the last answer, of course, but also keep in mind that whether it's data-driven or intuition-driven, this idea of product sense that I'm going after is not meant to be a source of tension or adversity with a client. That may happen, but ideally it's a collaborative effort. Ideally we're saying the same things: we're going to develop the product to do X for your users, which we agreed upon early on; we're giving you these regular updates; we're having these regular meetings. So it's really a collaborative effort. And again, to what I said previously, we want it to be KUNGFU.AI-guided and client-driven, not the other way around. By taking the time to really dig into their data, talk to their subject matter experts, and get a sense of what's important to them, ideally we're showing our investment in understanding them as a client, so it's clear we're rowing in the same direction. Like I said, it's a collaborative effort to get to the end goal of serving their users. We're not coming in and saying, use this thing, and that's the end of it. And if a client feels pressured to use something we don't recommend, I want to continue the conversation, but also try to make it evidence-based: going in and saying, here's what we saw in exploratory data analysis, which is why EDA is so powerful; here's what your users are doing; here's why this would benefit them. And then we can dig in deeper from there.
Ron Green: Okay. That makes a ton of sense. And so any advice on sort of stakeholder relationship management as a part of this flow, this process of tying sort of raw data and metrics to what can be sometimes a soft human decision on what to actually focus on?
Alex Olden: Yeah, that's a great question. I think one way to do it is to really show that you've listened to them. So the things that you learned about their company through your subject matter expert interviews, include those in what you're saying back to them. That shows we're not just throwing one-size-fits-all information at every client we come across, but really thinking through: here's what you've told us, and here's how we've organized it and distilled it into a clear set of steps that we think can really benefit you and your users, whether they're internal employees, outside users, whatever the case may be. That shows we're listening, we're paying attention, and we're building really customized solutions to your problems. I think that's been a really good way for us to develop client trust and investment, in the same way that in my teaching days that's how I showed students I was paying attention: I knew their homework habits, I knew what they needed. There's a phrase I heard when I first started teaching: students don't care how much you know until they know how much you care. And that's a great way to show that you care: you're paying attention, you're listening, you're writing things down, you're anticipating their needs, to go back to the idea of product sense in the first place.
Ron Green: Oh, that's a great quote. I've never heard that teacher quote. How do you deal with instances where you get customer feedback or client feedback and it actually changes your direction? Like, have you been involved in a project where you had to pivot or change direction sort of mid-development?
Alex Olden: Yeah, it's a great question too. We were working with a client once on a project focused on resource allocation, trying to optimize and get the most out of the resources they were devoting to different tasks. We built a model that predicted, hey, if you make changes X, Y, and Z, you'll see this estimated improvement in your return on investment for those resources. This was just a prototype model; it wasn't going to be deployed or anything like that, more a proof of concept to show, yeah, we can do this. We sent that back to them, and they liked it, but they wanted to get a little more granular on the ROI. Say there are levels of this resource allocation: level A is using it a ton, level B is moderate usage, and level C, let's say, is really low usage. Is it only the highest-burning category of this resource that gets the ROI, with B and C just left hanging? Which is a great question. So we had to go back to the mental lab and think through how to do that, and we landed on an approach that broke out the ROI by allocation level, to show the projected return on investment for levels A, B, and C, and it showed that there would be an improvement for all three levels, all three categories. It's not just level B pulling A and C up with it, or something like that. So it was a really interesting question, and I'm glad the client asked it, because it's the sort of thing we'll probably use again for this type of projection.
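A minimal sketch of the per-level breakout Alex describes might look like the following; the levels and ROI numbers are invented for illustration, not the client's actual figures:

```python
import pandas as pd

# Hypothetical per-resource records: each resource's allocation
# level and the model's projected return for it.
records = pd.DataFrame({
    "allocation_level": ["A", "A", "B", "B", "C", "C"],
    "projected_roi":    [0.31, 0.28, 0.17, 0.21, 0.08, 0.11],
})

# Break the projection out by level so the client can see that
# each category improves on its own, not just in aggregate.
print(records.groupby("allocation_level")["projected_roi"]
             .agg(["mean", "count"]))
```

The aggregate number answers "does this work?"; the groupby answers the client's sharper question, "does it work for A, B, and C separately?"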
Ron Green: Does that happen frequently, where you'll have questions asked by the client that kind of drives more of the discovery process in a direction maybe you didn't anticipate?
Alex Olden: Yeah, I think so. It's maybe 50-50. Some clients are happy to sit back and take our advice; others like to be more active, and I think there are pros and cons to each. We always like an active client, in the same way that you like an active student who's asking questions and driving the discussion. But in some cases you start with a client and you just click with them, you're on the same page from the beginning and it moves faster, and that's not better or worse, it's just a different model for collaboration. We always learn new things from clients when they poke and prod at stuff, so it's a lot of fun that way.
Ron Green: Okay, earlier on you were talking about product evaluation. Are there any specific metrics that you'll use commonly as a part of that process when you're trying to assess the value or the viability of some specific approach or solution?
Alex Olden: For sure. There are two buckets of metrics. One, I would say, is what the marketplace is saying about the product, if you have that data; with a new product, you may not. What are sales like? What is demand like? If it's a subscription service, are people renewing the subscription and continuing to use it? Things like that can be a good sign. But there are also more internal metrics that can be used for product development, which you might use even before you go to market. Let's say you're designing a toddler shoe, something very relevant for me. You'd want to look at the durability of the shoe: how long does it last? And the stability: toddlers aren't always great at walking, so they need a stable, supportive shoe. So you do that kind of in-house testing and evaluation and have metrics around it, like, in user testing, how long does it take before the laces give out, or what have you. And it's interesting to note that those metrics, durability, stability, longevity, are essential to AI tools as well. We always want to have those in mind when we're developing.
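For the marketplace bucket, a metric like renewal rate is simple to compute once the data is shaped; here is a small sketch with made-up subscription records:

```python
import pandas as pd

# Hypothetical subscription records.
subs = pd.DataFrame({
    "customer_id":  range(1, 9),
    "signup_month": ["Jan", "Jan", "Jan", "Feb",
                     "Feb", "Mar", "Mar", "Mar"],
    "renewed":      [True, True, False, True,
                     False, True, True, False],
})

# Overall renewal rate: the share of subscribers who re-upped.
print(f"renewal rate: {subs['renewed'].mean():.0%}")

# By signup cohort, to see whether renewal is trending up or down.
print(subs.groupby("signup_month")["renewed"].mean())
```

The in-house bucket (durability, stability, longevity) works the same way: define the measurement up front, then track it release over release.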
Ron Green: Are there any considerations or challenges that are different when you're dealing with data science driven or AI driven product features that you need to consider differently than you might in traditional software development?
Alex Olden: That's a good question.
Ron Green: I'm thinking, given the probabilistic nature of a lot of the work we do, sometimes the investment can be so huge, and the outcome isn't deterministic, it's probabilistic.
Alex Olden: Yeah, I think it's the Yogi Berra quote: it's hard to make predictions, especially about the future. And we're doing a lot of that. We're making predictions, or if we're making a recommendation, what if people don't like the recommendations? There's a little bit more what-if to it. So when we're presenting results or a product to a client, and even early on, because you don't want to do this only at the end, it helps to couch our proposals and working plans in that type of language: here are our expected outcomes, here is what we're predicting. And then we use confidence intervals to express a level of confidence, so we can say there's a little bit of wiggle room here, or there's a lot of wiggle room here, we're just not that confident because there's not a lot of data, or something like that. I think that's a big part of it. And because of the world we're in right now, with so much focus on AI development, it does feel a little higher stakes from a publicity standpoint too, which is something I'm less qualified to speak about, but it's there, and I think it's something to keep in mind as well.
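One common way to put numbers on that wiggle room is a bootstrap confidence interval; here is a hedged sketch with simulated model outputs standing in for real predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-user predicted lift from a model (stand-in data).
predicted_lift = rng.normal(loc=0.05, scale=0.15, size=200)

# Bootstrap a 95% confidence interval for the mean lift, so results
# can be framed as an expected outcome plus wiggle room rather
# than a single hard number.
boot_means = [
    rng.choice(predicted_lift, size=predicted_lift.size, replace=True).mean()
    for _ in range(5000)
]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean lift {predicted_lift.mean():.3f}, "
      f"95% CI [{low:.3f}, {high:.3f}]")
```

A wide interval is itself a finding: it tells the client there isn't enough data yet to be confident, which is exactly the conversation Alex is describing.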
Ron Green: Okay. Yeah, my mind goes to just the fact that it's probabilistic, and especially with all the focus on generative AI right now, I think it's really critical that we guide our clients so they understand just how complex and unpredictable some of these services or tools can be when you actually put them into production. I wanted to go back and ask you: you were a teacher before you got into data science. Talk about that transition. There are a lot of similarities there, I think, under the surface.
Alex Olden: For sure. Yeah, it's funny, because what actually got me on this career change was data. I had a homeroom class, and they had weekly mini-goals around homework completion and things like that, and if they hit a certain level of completion by the end of the year, they got a big prize on their end-of-year trip. So to motivate them, about halfway through the year I thought, I wonder what would happen if I just put up a weekly line graph to show their progress? And they loved it. It was super basic, just a little Microsoft Excel thing, but they really liked seeing their data, which was a key insight for me. Everyone likes seeing their data. That's not surprising, but it was something so obvious I couldn't believe I hadn't thought of it already. So I got more curious about the use of data and read some books on it; The Signal and the Noise was the first one. Then I went deeper and deeper and started looking at graduate programs, thinking, I have an English degree from undergrad, I'm not quite qualified to jump into data science yet, so where can I learn what I need to learn? I went to grad school for statistics, and there I worked with a professor who exposed me to the idea of data science. I don't know how much this framing is still used, maybe it is, but the larger point was that three-part Venn diagram of subject matter expertise, statistics, and coding. That just fascinated me: you become knowledgeable about a certain industry, you have the coding skills to actually build the stuff, and you have the stats skills to understand the math of what you're doing, so it's not a black box. After my grad program, I did a bootcamp to learn how to code and then started teaching kids how to code. That's where I met Reed, and he introduced me to more people in the Austin data science and software industry, and it kind of just...
Ron Green: And you were off to the races.
Alex Olden: Here we are.
Ron Green: Here you are.
Alex Olden: Yeah, it's not a typical transition, but I mean, the number of times in a given week that I find myself referencing something that I learned in teaching is pretty high. It's not surprising.
Ron Green: We've talked about this before on other podcasts, but at KUNGFU.AI, the diversity of backgrounds is one of my favorite things about all the different team members. We've got people with chemistry and biology and mechanical engineering and computer science, stats, math, et cetera. All right, I want to wrap up, Alex, and ask you our standard finishing question, which is: if you could have AI automate anything in your daily life, what would you go for?
Alex Olden: Oh wow, there are a lot of good things to choose from. I think I want an AI tool that will help recommend and inform when to feed my son what. Not just, I mean, presumably it would also clean everything up or do all the preparation, but what I really want is for it to predict the most likely thing he'll be into for that meal at that time. Just lay that out for me and I'll take care of the rest. That would be gold.
Ron Green: Yeah, feed the young children. Yeah. I love it. Well, thank you so much for joining us today.
Alex Olden: Yeah, thanks for having me.
Ron Green: I really appreciate it. It was a fantastic conversation.
Alex Olden: Absolutely.