Video: AI in product analytics: Hype vs reality | Duration: 3492s | Summary: AI in product analytics: Hype vs reality | Chapters: Welcome and Introductions (3.2s), AI in Product Development (263.48s), AI-Driven Team Transformation (657.45s), AI Workflow Discussion (936.175s), Product Development Challenges (1000.32s), AI-Powered Product Development (1095.245s), Measuring AI Impact (1269.935s), Measuring AI Quality (1410.965s), Automating Team Workflows (1738.095s), Informal Knowledge Sharing (1917.885s), Scaling Evaluation Process (2116.345s), Evaluating AI Value (2237.23s), AI in Analytics Workflow (2400.525s), Future of AI (2600.825s), AI Agents' Future Impact (2938.745s), AI Transforming Industries (3090.3s), Webinar Wrap-Up (3258.1s)
Transcript for "AI in product analytics: Hype vs reality":
Alright everyone, welcome to the AI webinar on product analytics. This is very exciting. We've got people joining. As people start to come into the session, I like to just see where people are and have you jump into Q and A and warm yourselves up. We want to make this as engaging as possible. I'm joined by Ayushman. Ayushman, welcome to the webinar. Thanks for having me. We're going to go through brief expectations of what we're covering today, but fully transparent, what we do wanna do is make this as engaging as possible. So if you are on the webinar, what I would really love is if you typed in your location. So if you're in a city, I always love to do a little bit of warm up to make sure that we're getting our fingers warmed as we go through this. And we're gonna do a very quick introduction. There are no slides today. We are literally going to be having a conversation, so the more engaging we can make it, the better. And Ayushman and I haven't even done a briefing session together, so this is as good as it gets. We've used AI for briefing and now we're going to hit the webinar, and so hopefully we can make this as useful as possible. Emma's in Cape Town, which is awesome. Ayushman, you're on the West Coast. Why don't you introduce yourself? I didn't hear enough about me. I'll intro myself and then we'll get going. Let's do that. Hey, folks. Yeah. I'm joining from Vancouver, BC today. And I currently lead, in Microsoft, the team that builds the CRM application. So we are the number two CRM in the market after Salesforce. It's called Dynamics, and I'm currently working on AI transformation for sales organizations. Essentially, everything to do with how can they use AI agents to get more done, to increase revenue without having to increase, you know, boots on the ground, and solving very interesting problems as we go through that. In the past five years, I've kind of worked in marketing and sales applications at Microsoft and mostly on generative AI.
In my career, I've worked on, you know, cash cow businesses, million dollar businesses, and have helped improve, you know, small businesses; you know, done zero-to-ones instead of one-to-n's. I've also been a product marketer as well as a software developer, so I can think from the different perspectives. And looking forward to just talking to you today, Dave, and sharing some insights collectively with folks here. Incredible. We got a lot to unpack. There's a lot in there. I can see people in the Q and A as well. I'll do a quick intro and then I'll say hello to some of these cities, and then let's unpack, like, how you're using AI today, the advice we have for it, hype versus reality. There's a lot to unpack and we're going really fast as we go through this. So my name's Dave Anderson. I work at ContentSquare, which is a leader in digital experience analytics. I'm a product marketing leader. Previously, my experience has been working with Dynatrace around AI back in 2017, and then I was an AI evangelist at one point, which is a terrible name for a job, back during COVID days with a company called DataRobot, just doing pure AI. And now I'm also doing positioning around AI agents and the future of AI. And so I'm really excited about what we're gonna do and really unpack actually what's hype versus reality. I am someone that uses AI every single day: across LLMs, using Lovable to build things and prototype apps that I have, building website prototypes, obviously copywriting. My wife questioned me on my Lovable bank statement charge and what that was. So, and just a big shout out, thank you to everyone that has joined the chat and told us where you're from today: Cape Town, London, Gaurav. I'm in Boston as well with my thick Australian accent. We have Manchester, Gloucester, Hitchin, Bolton, London, Surrey, that's a beautiful part of the world. Deborah, another Boston native. Weather is okay today.
So far, it is getting cold. But let's get into it. If you're just joining, please jump in the chat. Let's keep it as engaging as possible. Yes. Go Pats. Go Celtics. I have my Jaylen Brown right here. And there you go. So, Ayushman, AI, hype versus reality. It's incredible to think that a year ago, agents wasn't even a word. And now all of a sudden we're talking about agents. Like, give us the lowdown. Is this practical? Is this something that you're using on a daily basis? And how are you leveraging AI? Yeah. Great question, Dave. There's a lot of reality about AI and a lot of hype, as you said. Let's unpack both of those. I think in terms of reality, there's an immense, like, you know, application of the technology in a way that has never been possible before. It's solving problems that were really impossible to solve with traditional tools. It's a very empowering technology where everybody, you know, from people who work in technology to just normal people, on a day to day basis, are now using AI to get stuff done. And so, you know, great examples of how we use it on the product side go all the way from customer feedback synthesis, ensuring that every call is recorded, to be able to then get insights from those calls, versus having to transcribe while the customer is speaking and then have those notes somewhere and take time to curate them, which nobody really did in the past, and the customer feedback is all kind of messy. Things were forgotten after, like, two months. Now we have a complete, you know, bank of the customer data that's automatically being recorded, and then the insights can be generated from AI.
Then, product analytics, where we are actually now able to see the trends from both the usage data as well as customer journeys and our customer feedback and triangulate them to see interesting patterns that would be very, very time consuming earlier; to just getting, you know, PRDs written by AI; and then, going deeper in the product funnel, actually doing, as you said, prototypes and not just, you know, a PRD, with AI also doing, like, you know, designs and making it very, very easy to collaborate with different functions in engineering; to then finally, you know, writing great customer reports when customers succeed and sharing that with the world or just with, you know, people inside the company. So this whole supply chain kind of has been disrupted and there's a lot of use of AI. There's also the hype around it, which is, I think, this notion that AI is just this magical tool that you just adopt and things change overnight. And I think, to truly benefit from AI, you need to think about culture, technology, you know, processes, people, and have a planful, you know, adoption, so that, you know, one plus one equals three and your whole team is, you know, swimming in the same direction. And that is not an overnight thing. It takes time, and it takes deliberation, and it takes a plan. And so, you know, that's really the hype: that AI is the magic wand. This is such a good point. So, like, I've already gone off script, okay? Because I really wanna answer these questions. I wanna know. I feel like we're all in the same boat. It's really hard to plan because the technology changes really fast. It's also really hard to plan because individuals have different levels of maturity when it comes to AI. So are we using the same set of tools? And what I would love to know from your perspective, as you're leading a team, how do you culturally make this work?
Is this something that you quarterly get together and go, these are our workflows and how we're using AI? And is it a single center of expertise that's leading your AI, that's educating you, or are you self learning and then self teaching each other best practices? Yeah. That's a great question. So I think it really starts from a culture of innovation and a culture to always be on the cutting edge. And you cannot create that overnight, and you cannot create it with forcing functions or directives. It comes from that growth mindset that your team has, and leaders have to create that. And a lot of that happens through modeling. And so initially, it was really around me just, like, sharing, you know, how am I experimenting with AI. And sometimes, like, even sharing the blooper reels of how I'm failing but trying to use AI; just being very candid and transparent with the team that at least I'm trying. And whenever I get a chance to apply AI to solving a problem, then sharing that with the team. For example, like, you know, there was a documentation page which customers were always getting tripped up on, and they were failing at that step. And I was able to just, like, take those screens from the product, you know, put them in ChatGPT and ask it to create, like, you know, scenario based documentation for different kinds of customers, which instantly, like, made it 10x better. But then I shared my, you know, methodology with the team and they realized, oh, I can start writing documentation with AI, which was never something that they realized before. So just these organic moments of discovery and sharing and transparency, and modeling that from the leaders down, and then creating a culture of a growth mindset and learning, has been kinda key to unlock some of that innovation.
The second thing is just, you know, we are in the space where we are building AI agents, and so we have to be on the cutting edge. We have to adopt these tools before we ask our customers to adopt them, so that when we talk to them, we can actually be more knowledgeable and appear as, you know, actual experts who have, you know, also practiced. And so, you know, the whole thing around quality and how do we measure quality and how can we do evals? That is a component of how we actually do things now. It's part of the definition of done. And so there are some things like those that we have just baked into our process and created forcing functions around, and that has been working as well. And thirdly, I think we should not underestimate people. They're just hungry and curious. They're ready to adopt this technology. They see the benefits. They see so many, you know, people talking about great ways of doing it. I love the How I AI podcast, where, you know, different people kind of just come and present how they use AI in their workflow. So there's just so much energy around this that it's infectious. And unless you're living under a rock, there's no chance you're not, you know, catching that infection and also just trying to do it. And so I think those three things: just modeling the right behavior and transparency around, you know, what you're doing; secondly, just, you know, creating certain forcing functions that you really feel are necessary for your product; and then thirdly, just, you know, infusing yourself in the energy that is being, yeah, driven around this space. And have you, so you mentioned inquisitiveness and the importance of culture around experimentation. Not all teams are like that, but what I'm interested in also is, is your team literally in the experimentation phase, or have you set, like, OKRs associated with AI efficiencies?
Like, have you got to the point where you're actually formalizing, and now it's part of, like, a team charter? Yeah. A little bit of both. I would say that it really starts from the top, and it's happening across a lot of different companies. But the one part that is not good about this is companies have started to reduce the workforce already in anticipation of AI boosting our productivity. And that's happened with my team as well. My team has been reduced from, you know, 30 plus people to six people, and my portfolio is the same. I still run a billion dollar portfolio. And so my team cannot afford to do the things that they were doing the same way. We have to figure out a way to change. Otherwise, we'll always be working twenty four seven. Right? Yep. So that has been an incredible forcing function for us to change, and that's happening across industries, actually. I've worked with customers in manufacturing and finance, and I see that happening across the board. But that's the unfortunate, you know, outcome of this. However, it does create that forcing function. The second one is just, you know, some of the OKRs are around the practices of software development and how we go to market. And so the fact that we now have to think about, you know, quality using evals, and that's part of our PRDs, is one forcing function that we have, you know, from the very top down, like, from our SVP down, that comes as part of this project to transform the way that we do product management. Similarly, there's a forcing function for doing more, I would say, customer engagement and this model of forward deployed engineering, where you actually go and work with customers and enable their scenarios, even if it's a bespoke enablement that you would never do as a vendor; a system implementer would. So those kinds of things are baked into the OKRs.
So it's kind of a mix of, you know, the forcing function that is created already by this reduction in workforce, then the whole learning mindset, and then thirdly, the OKRs that have been set. You just shared some pretty big stats then. Like, your actual focus, or the output that you need to achieve, hasn't changed, but your team size has changed from 30 people down to six people. So you're kind of forced to leverage AI in order to improve efficiency. That must be really hard. I had this concept a long time ago that in the future, AI was coming in to make our jobs easier, not harder. But it doesn't feel like, for those people that are embracing AI right now, it doesn't feel like our jobs have got easier; we're continually on this treadmill that with new technology comes new learning, and it just means we have to keep going. Do you agree? It is true. But I look back to when we had the industrial revolution. I wasn't obviously born; most of us weren't here. But, you know, people went through the same change. And if I fast forward, like, fifty to a hundred years from then, I think we are doing better things with our time, and we have adapted to this new world. New kinds of jobs have been created. Right? So there's this inflection point that happens in history from time to time, and inflection is painful, but there's clearly people who navigate that inflection with a growth mindset, and they embrace that inflection. And, you know, there's always salvation at the end of it, where even if you don't have the same job that you previously did, there will be something better. But then there's also people who can get into analysis paralysis with that inflection point and get, you know, too wrapped up in loss aversion, thinking, oh, like, my job is kind of threatened by this change. Mhmm. And I would say that I don't think AI necessarily is kind of after your job.
If anything, it will make your job easier, but somebody else who's using AI might definitely steal your job. And so, yes, at the inflection point, you know, there's no reason to kinda hide that. There's no reason to hide the struggle, but that's why it's so important for us to collectively share what we are learning and how we're growing, and embrace this as a once in a lifetime change that we have to navigate through that growth mindset, through learning and embracing, and emerge as winners on the other side. Before I ask you the next question, what I do wanna do is do a pulse check with everyone. How are people feeling about AI now that we've had this conversation already? Are we feeling optimistic and positive about the future of AI, or are we feeling concerned, and it feels like hard work? So please jump in the chat. Let me know how you're going. And if you have questions as we go through this, whilst you're typing away, feel free to help us direct this conversation. I want this to be a community led conversation, not just me. Sachi is saying it definitely feels like we're on a treadmill. And it's a place that I like being, actually. I'm a big Peloton advocate, but anyway, let's keep that aside. Take us through a day in the life, Ayushman. Take me through, like, your workflow, tools that you're using, and even, like, a before and after. Like, what did you used to do, and what are you doing now? What is the day like, or the week like, now with AI, and the types of tools that you use and the benefits you get from them? Yeah. That's great. So, initially, I think our time to market was quite long for every single thing that we built in the product, and we had to start with a PRD, which would take a couple of weeks.
The customer discovery around it would be painful because, you know, we are scheduling calls and then frantically taking notes and trying to make sense of the themes and patterns, and it's very hard to get rid of your own biases in that sense. You're always kind of, like, trying to, you know, omit the patterns that you don't wanna see, and whatever hypotheses you started with, you kind of wanna validate. And so the problem definition, even though it takes long, had always been traditionally kind of biased towards a PM's thinking. Then we would go into, like, development or, like, design, and it would take another few weeks to get to the right design iterations. And each time an iteration would happen, it would be somebody actually making those changes in the Figma file. And then engineering would take that over and, like, you know, write code from scratch. And then the analytics after that, just observing the patterns and where the customers may be getting stuck, you know, where they might have pain points, was not very clear. You could use MAU, DAU as kind of proxy signals, lagging indicators, and that was definitely happening across the board. But the causality and correlations were maybe getting buried in the data, because there was too much data or maybe it was not accessible enough for people. And so that was the workflow before. Now today, I think we have a workflow where I can actually do a lot of this myself without actually even meeting a team. And so if I were to go through, like, my process of how I would do this, I would just start with a PRD with very little effort. My PRD is ready in one hour. I know the problem, the PRD is spit out by ChatGPT, and I'm not just using simple prompts. If you're still using prompts to write your PRDs or to get work done, then you're probably not getting the best outcome.
You have to put in some work, and so I've created an agent, essentially, using ChatGPT, that basically has examples from my prior, you know, PRDs, examples from the industry. I have provided it, like, all the links to the thought leaders in the product management space. So it kind of knows how to be a product manager. And so I'm asking my virtual product manager, hey, write the PRD for me, so it spits out the right things. And then I validate everything that I've written through these customer calls, where I'm not transcribing anymore, and I can just, you know, use Microsoft Copilot, because it's recording everything, after the fact to say, hey, based on the five calls I had this week, tell me all the things that are validated and invalidated from this PRD that I wrote. And so I'm probably, like, done in a week. And then I go to Lovable or, like, you know, as I said, like, there's so many tools now. Actually, I recently used, you know, Gemini AI Studio and the vibe coding there, which is actually way easier than any other tool I've used. I've used Replit as well in the past. I was able to get to a great, like, prototype without actually going through the whole traditional design engineering process, and then just deliver that prototype kinda to engineering to, you know, just take it to market, as well as the red teaming and DSP and everything that we do. And so that process is now probably, like, you know, one to two weeks long. That would, you know, take kinda six months before. And so that's just a rough kinda example. And, again, documentation: now I can just write that with AI as soon as the product is getting ready. So it's very, very easy to get so much more done at such a small fraction of the time. Those are great examples. And Emma just wrote a really good comment, saying, I've struggled to get meaningful productivity gains, but I've found it useful for reframing approaches and narratives.
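[Editor's note: the "virtual product manager" Ayushman describes, an agent primed with prior PRDs rather than a bare prompt, can be sketched roughly as assembling a few-shot chat prompt. The section names and example data below are illustrative assumptions, not his actual setup; any chat-completions-style API could consume the resulting message list.]

```python
# Sketch: assemble a few-shot prompt for a "virtual PM" agent.
# Prior PRDs become worked examples the model can imitate, so the
# request is more than a one-line prompt. Structure is hypothetical.

def build_prd_prompt(prior_prds, problem_statement):
    """Return a chat-style message list that primes the model with
    prior PRDs before asking it to draft a new one."""
    system = (
        "You are a senior product manager. Write PRDs with these "
        "sections: Problem, Customers, Success Metrics, Scope, Risks."
    )
    examples = []
    for prd in prior_prds:
        # Each prior PRD is a (request, answer) pair.
        examples.append({"role": "user",
                         "content": f"Write a PRD for: {prd['problem']}"})
        examples.append({"role": "assistant", "content": prd["text"]})
    request = {"role": "user",
               "content": f"Write a PRD for: {problem_statement}"}
    return [{"role": "system", "content": system}] + examples + [request]

messages = build_prd_prompt(
    [{"problem": "Reduce checkout drop-off", "text": "Problem: ..."}],
    "Natural-language Q&A over customer profiles",
)
print(len(messages))  # system + 2 example turns + 1 request = 4
```

The same message list could then be sent to whichever model you use; the point is the priming, not the vendor.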
It's interesting, and maybe it's a case of, like, sometimes you forget how quickly we've changed. Like, you just mentioned the fact that you've, like, automated the summarisation from calls using Copilot, using ChatGPT to write up your product descriptions, like, which would save you hours. Have you tried quantifying this? Like, have you actually put any quantification behind how quickly you're going, or is this just literally how you work now? It's like you don't really put a time around how long it used to take you to run a landing page manually; so why do it now that we automate it? But have you done that? Yeah. I think the key result here is the time to market. Right? And if I can now take a feature to market in six weeks that would earlier take, like, six months, then that's huge. And that kind of shows the inputs. Right? The productivity gain is an input to time to market; it's not the output variable. So, clearly, if you are able to take something to market faster, something is happening in the inputs, and you know that you're kind of getting a big productivity boost. So that's one way of measuring it. But if you don't see that happening, at least in your workflow, you should be able to, like, really see at each stage in your process how much time you're compressing, and then over, like, two to three, you know, projects, basically try to aggregate and see how much time you're saving. But it does take some deliberate kind of thought of, oh, let me actually start this process with AI. Yep. Yep. Like, it's very easy to fall back to our old practices and then think, oh, I actually spent a week writing a PRD; now let me use AI to, like, summarize it or, like, reframe it and, like, brainstorm. But, you know, force yourself and say, there is no way I can do this; I don't have time. And I had to do this because, as I said, my team got reduced in terms of capacity.
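[Editor's note: the stage-by-stage measurement Ayushman just suggested, compare how long each step took before and after AI, then aggregate across a few projects, can be sketched as simple arithmetic. Stage names and day counts below are made up for illustration.]

```python
# Sketch: per-stage cycle-time comparison for one project, in
# working days. "before"/"after" durations are illustrative.

def time_saved(before, after):
    """Per-stage and total days saved for one project."""
    savings = {stage: before[stage] - after[stage] for stage in before}
    return savings, sum(savings.values())

before = {"PRD": 10, "design": 15, "prototype": 20, "docs": 5}
after = {"PRD": 1, "design": 3, "prototype": 5, "docs": 1}

per_stage, total = time_saved(before, after)
print(per_stage)  # {'PRD': 9, 'design': 12, 'prototype': 15, 'docs': 4}
print(total)      # 40 working days saved on this project
```

Run the same tally over two or three projects and the aggregate is the productivity input that shows up downstream as time to market.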
So any new thing, my team couldn't pick up anymore. I had to think that way. Yep. And so when you do that and then you deliberately measure the time, the productivity boost, and then you measure the time to market, you will start to see some gains. Yep. I've been pitching a lot, and I work in, like, kinda, you know, product analytics and digital experience analytics with companies. But I've been talking a lot about how, like, AI is a tool, but at the end of the day, it's still the experience that people get that's the outcome. It's really interesting what you just said then. You took out the, like, basically how you work on things, but the outcome of what you're doing is improved time to market for your feature. Do you also measure, like, have you seen any uptick in the quality of what you've been able to release? Given that you can learn faster, write faster, release faster, do you also see an improvement in adoption or satisfaction as a result? Yeah. Great question too. And so I think I did this session at the ProArt Alliance conference, I think two years ago, when my team was working on generative AI still, but it was very hard for us to really measure quality in the lab, because with probabilistic experiences the old, you know, regression test and unit test approaches didn't work anymore. And, you know, we were relying on human judgment to measure quality before shipping it. But then what was happening is customers were using that same technology in different ways that we hadn't perceived, or with different data quality at the back end that we hadn't quite anticipated, and they would get different results, and satisfaction was all over the place. Some customers were escalating and complaining. And so, you know, at that time, the tools that we had were very limited.
Like, we could address issues customer by customer and diagnose, okay, what was going wrong with that customer, and then try to fix it, but progress was incredibly slow. I think it took us one year. We'd released this capability of Q and A with data, because one of the projects I run is a customer data platform where all the, you know, sources about your customers' engagement are merged together, and we create a golden profile. And so you can actually see, for each customer, what is their loyalty, how much they're engaging with you, and so on. And so, you know, previously, obviously, with that kind of rich data, getting the insight was hard, because you had to kind of create segments or create some sort of, like, business attributes, then get some insight. But we made that very easy through natural language Q and A. But that, you know, Q and A would sometimes spit out the right answer, sometimes not. Sometimes it was just querying the data in the wrong way. And over, like, a year, I think we only made slight improvements in the quality, and all of them were driven by some customer escalations. But we internally did not have a good sense as a team: is this product good? Like, is this going to meet product market fit? And how are customers actually feeling when they use it? Now, fast forward to today, we have evals baked into everything we ship. And so if we were rebuilding this today, we would actually run evals before we even ship it. And the evals, even if based on synthetic data, would be based on thousands of data points. They would have clear objective criteria, and we would have scores. So we would say, okay, this feature scores a five on accuracy, only a four on relevancy, a three on groundedness, and so on. Right? So we know the quality before shipping. We know we can do some improvements based on that data.
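[Editor's note: the scorecard described here, "a five on accuracy, only a four on relevancy, a three on groundedness", amounts to averaging per-criterion judgments over a test set and flagging what falls below a ship bar. A minimal sketch, with the criteria and the 4.0 bar as assumptions:]

```python
# Sketch: aggregate per-example eval judgments (1-5 per criterion)
# into a per-criterion scorecard, and flag criteria below a ship bar.

def scorecard(results, ship_bar=4.0):
    """results: list of {criterion: score} dicts, one per test case."""
    criteria = results[0].keys()
    scores = {c: sum(r[c] for r in results) / len(results) for c in criteria}
    gaps = [c for c, s in scores.items() if s < ship_bar]
    return scores, gaps

runs = [
    {"accuracy": 5, "relevancy": 4, "groundedness": 3},
    {"accuracy": 5, "relevancy": 4, "groundedness": 3},
]
scores, gaps = scorecard(runs)
print(scores)  # {'accuracy': 5.0, 'relevancy': 4.0, 'groundedness': 3.0}
print(gaps)    # ['groundedness'] -> what to fix before shipping
```

The `gaps` list is the "what three things I need to do to increase quality" view: quality becomes a concrete, pre-ship number per criterion rather than a post-escalation surprise.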
And then when customers start using it, we can constantly measure that and see if it's tracking to what we, you know, saw in the lab. And so now we have a complete picture: if you ask me, like, what the quality of this is, I can actually objectively tell you, on five criteria, this feature scores high or low, and I know exactly what three things I need to do to increase quality. So that's been a huge shift in the quality mindset. So it's almost like, you know, we were kind of trying to buy a self driving car without really having any notion of how good that car is on a real road. And now, before actually making the purchase, I actually see the clear performance indicators of how it does on the freeway, how it performs on roads, how it does off roading. Like, everything is measured, and I can kinda see, you know, between three cars, I can compare those scores and pick my option. Can you, so, I just heard myself twice there, so, can you summarize that again? Just give me the KPIs. What are the key, because I wanna do two things. One is at the feature level that you just talked about. Then also, if you had to summarize, like, what are your major KPIs that you care about day in and day out, on a weekly basis, a quarterly basis, what are those KPIs? So I just want everyone to be really clear: when you're releasing features, what are the KPIs, and then what are your overall KPIs? And then there's a couple of Q and A's that are coming; I wanna jump to them as well. Yeah. I mean, so I would just take a step back and say that the KPIs that we measure from a product standpoint don't change because of AI. Like, we still measure, you know, customer success in terms of, like, you know, are they getting to that magic moment? And so if in your product the magic moment is conversion, then we do measure conversions. Yeah, we do measure MAU and DAU, and the ratio between DAU and MAU, to see the engagement. I think those things haven't changed.
But what has changed is the additional focus on two things, and those are velocity, which is kind of time to action, as I call it. And the time to action is more about, like, you know, how fast are we taking a product to market, and then how fast after that are we acting on the insights to improve that product. And then the second one is quality, which is, you know, in the future, when customers make decisions on what they use versus not, they will be looking at it from a quality lens, because a lot of the same kinds of capabilities will be built by different vendors. And so quality is really important, and that's a KPI that we now track through evals and other approaches. So those are the two additional ones that we've kind of added as leading indicators to how well we are functioning as a team and what we're shipping. Yep. Awesome. Great. Thank you for summarising that. And they haven't changed as a result of AI, but I'm assuming the metrics themselves have changed. You've already discussed velocity: time to market has been accelerated significantly, from six months down to six weeks for some features, right? Yeah. Absolutely. There is a huge shift here. Awesome! Now, I want to take a couple of questions from the audience as well. So if you guys have other questions, please make sure you jump in. Hello to Kate, who's in Hampshire. That's a great part of the world. I lived there at one point in my life. Apart from the weather, it's not too bad. Sarah is asking: if a team could automate one manual workflow this quarter, what should it be? That's a good question. Instead of giving a direct answer, I would say, do an audit of how you work and think about where the key bottlenecks are.
You can start with a very quick survey of the team and collect data that way, or you can actually do an observation of your own workflow or somebody else's workflow and really try to get to the bottom of what is it that takes the most time, because that's the lowest hanging fruit that you can pluck, and you can just think about how to put AI in that step. The other thing I would say is, you know, think about it not in terms of automation, but in terms of actually getting a better outcome. Right? And so the automation could be a trap, where you might get sucked into the technical details of how do I automate something, how do I create, like, an agent or a workflow, and it might delay your, you know, light bulb moment. But just think about how can I improve the outcome that I've previously had on this step. Be it the quality of the, you know, PRDs that you're writing based on customer feedback, or be it, like, product analytics, where it just takes you a lot of pain and trouble to write queries and takes weeks of your time. You're doing weekly business reviews, and Sunday nights you're sitting and racking your brain. I used to work at Amazon, and, like, every Monday we had to send a weekly business review doc, which was read by executives, and it was a huge pain to put together because you have to analyze and write. Like, are there things that are just, like, kind of the bane of your existence right now, where you can improve the outcome while freeing up your time? Like, think about it in those terms as well. So I would say start from those two: an audit of your team or your workflow to find a bottleneck, and then the second one would be, what are the things where you think you can improve an outcome by just using it? As part of that learning, you mentioned earlier that you share with each other different ways in which you leverage AI.
Is that part of just a weekly team meeting that you allocate time towards, or is it a special, innovation-type meeting with a really open agenda, where people bring new ideas to help others change the way they work? Yeah. Good question. I have seen teams be very disciplined about this and have, like, weekly brown bags and learning sessions. I would say that the technology is moving so fast that it's like a race right now, and if you're in big tech, you're part of that race. So I don't think my team has the luxury to really spend an hour each week just to deliberately learn. But what we do is we have taken moments in our team meetings to acknowledge that this change is afoot, that all of our jobs are changing, and that we have to embrace that change, and in those moments we start sharing. Secondly, just on a daily basis as we see things, we share them on chat, and we say, hey, we found this incredible way of doing x y z, and here's an example. The documentation example I shared was one of those moments. Similarly, somebody shared that they created a competitive intelligence dashboard and app using Google Gemini, and now we have an always-on, 24/7 view of what the competitive market looks like, how the market share is moving, and so on. So it's these moments that we take to acknowledge how things are changing, and just sharing in those moments, as well as this constant stream of async sharing, that's helping, versus actually blocking time. Because at some point that becomes taxing and becomes another meeting on your calendar. Okay. Yeah. Cool. So it's not like a push-down formal thing.
You're really just doing it informally, as part of knowledge sharing in conversation. Absolutely. It's part of how you work. Interesting. A couple of questions coming in. So Sarah, thanks for that question. Graham has a question: can you clarify exactly how, and with what tools, you are doing your evaluations? Yeah. That's a great question too. There are a lot of tools and frameworks, some of which are open source. There's Ragas and whatnot. There's no dearth of that, but I think one of the other traps with AI is this idea that, oh, you have to adopt a tool to really transform the way you work. We, on the other hand, have not adopted any tool when it comes to evals. The thing that is key for an eval is, when you're creating a PRD, to think about what kinds of criteria you would measure the quality of this feature or product on, and define those criteria upfront. Then, when you actually do some prototyping, try to see if the prompts are scoring high or low on those criteria. Then think about what kind of test prompts and what kind of test data you would need. Once you have that, you can actually start a little bit of that cultural change of: I'm not just expecting engineering to ship this and instrument the right telemetry so I can measure usage; I'm also expecting them to do a bit more testing on these criteria that I've defined. And I've created the dataset. Creating datasets now is so easy. You can just go on ChatGPT and give it the parameters, and it can spew out thousands of records instantly for you. So I would start small. I would start with, how can you bring in the actual mindset and process of running an eval without worrying about tools, and then start scaling it slowly.
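As a rough illustration of that criteria-first, tools-later mindset, here is a minimal eval harness. It is a sketch, not anyone's actual tooling: the criteria, the system under test, and the keyword-matching judge are all stand-ins, and in practice the judge would typically be an LLM call rather than a string check:

```python
# Rubric criteria defined upfront, before any tooling (illustrative names).
CRITERIA = ["accuracy", "tone", "completeness"]

def system_under_test(prompt: str) -> str:
    # Placeholder for the feature or prototype being evaluated.
    return f"Answer about {prompt}: our product supports this accurately and completely."

def judge(output: str, criterion: str) -> int:
    # Stub judge: score 1 if the output contains the criterion's keyword, else 0.
    # A real eval would ask an LLM to grade against the rubric.
    keywords = {"accuracy": "accurately", "tone": "product", "completeness": "completely"}
    return int(keywords[criterion] in output)

def run_eval(test_prompts):
    # Run every test prompt through the system, score per criterion,
    # and return a 0-1 average score for each criterion.
    scores = {c: 0 for c in CRITERIA}
    for p in test_prompts:
        out = system_under_test(p)
        for c in CRITERIA:
            scores[c] += judge(out, c)
    return {c: scores[c] / len(test_prompts) for c in CRITERIA}

print(run_eval(["refund policy", "pricing tiers"]))
```

Once that loop exists, swapping in real test data and a real judge is incremental, which is the whole argument for not waiting on a tool.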
Like, the example of what I did next was, using Replit, I created an eval dashboard, an eval tool that would basically take standard inputs: hey, you're starting a new feature, give me the five rubrics that you're measuring. It will take that, then it will take a prompt from you, then it will generate the synthetic data that you would need to run these evals, run them, and then show you the scores. And you can run it as many times as you want; it will track the history of what you've done. So I created this whole app using Replit, and that was my next step in terms of eval maturity. I created my own tool because I saw value in taking that next step when I was ready. Mhmm. And now we've come to a point where engineering has created better tools internally, where they are able to run the evals at scale and make this more of a repeatable process. So I, as a product manager, don't have to think as much, because now my team has this supply chain for doing this repeatedly. So you will get through those stages of eval maturity in the end, but don't start from the tool. Start from, how do I do an eval for this little thing I did? That's how I was going to summarize it. You just summarized it for me. It's: start with the output and the outcome you want to achieve, and then work back to the tool. Don't start with the tool and then adjust what it is that you want to learn. That's really good feedback. Guys, if you have any other questions, let us know. Nice stab, Kate, about the snow and ice in Boston. Appreciate that. Fair play. But if there are any other questions or comments, let us know. I'm going to take another one. And also let us know how you're going, check in, say hello to each other. It's nice to know that we're not just talking heads, we're not AI, we're real humans doing our best to try and share some knowledge today.
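The synthetic-data piece of a tool like that can start very small. This is a hedged sketch with made-up field names, seeded so repeated eval runs stay comparable; a real version would prompt an LLM with your parameters instead of using `random.choice`:

```python
import random

def generate_synthetic_records(n, segments=("smb", "enterprise"), locales=("en-US", "en-GB")):
    """Generate n parameterized test records, the kind you might otherwise
    ask ChatGPT to spew out. The seed keeps runs deterministic so scores
    from different eval runs are comparable over time."""
    random.seed(42)
    return [
        {
            "id": i,
            "segment": random.choice(segments),
            "locale": random.choice(locales),
            "prompt": f"Customer {i} asks about feature availability",
        }
        for i in range(n)
    ]

# The dashboard also tracked run history; at small scale a list is enough.
run_history = []

def log_run(rubrics, scores):
    run_history.append({"rubrics": rubrics, "scores": scores})

print(generate_synthetic_records(2))
```

The design choice worth copying is the fixed seed: if the test data shifts between runs, you can no longer tell whether a score moved because the product changed or because the data did.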
So if there's anything we can help with, apart from how bad my accent is, please let me know. How do you evaluate or identify value generated through AI adoption and implementation? Any frameworks that you might use? Yeah. That's a good question. It comes back a little bit to... well, I'll let you answer first. I was just going to say, it comes back to what we talked about a little earlier, about how it's just part of how we work. You've seen an improvement in efficiency and in what you've delivered, but it's not like you're measuring productivity as part of the AI. Yeah. Has it changed your frameworks or your operating model? I think, more philosophically than objectively, you start feeling it, and you know how much you're getting done. You know that you're able to get things done in a better way than you were before. You kind of just start feeling that change. Mhmm. In a large way, I don't actually even care how my team is doing the work, as long as they're doing the work, the outcome is great, and the customer feels happy. Right? So a lot of what they're using AI for is abstracted away from me. Only each individual on my team knows what their workflow looks like. Right? So think about it more as: is it improving your own life first? Are you freeing up some time for yourself, or maybe with the same time you're able to take on more scope and deliver more impact? If those two things are true, then you're already winning. You're already succeeding. I don't think we have tried to put in place any kind of framework or systematic, objective way to really measure this, and that's by design. Because at this stage, we do want everybody to be creative about how they use AI and to deliver great outcomes.
And there are already enough forcing functions for everybody to just be more productive and take on more scope. So my answer is more philosophical; I don't think it helps in terms of creating a process or a framework. Yeah. I think, whilst there is a human cost to this, one of the things that you mentioned earlier is that there are fewer human resources being allocated to your product now, and Roland was saying one top measure is the cost of the project before and after. You would presume that you're probably more cost efficient now than you were before, even though there's an implication for people. But also, as you said, it hopefully means that even though you have fewer people, you've still got quality of life and work-life balance as a result of that AI. You're not now overloaded with work; you can still have quality in the work that you produce. Absolutely. Yep. Good question. I think we're nearly out of time, so fire in these questions and keep the chat going. Which parts of the product analytics workflow benefit most from AI automation without losing human judgment? Yeah. So, actually, let's unpack product analytics a little bit before we get into it. Analytics really starts with a problem, for which you develop a hypothesis, and then you start to actually analyze the data to validate or invalidate the hypothesis and find the reason why. And that includes querying the data, analyzing it, getting the insight, and then forming an action plan. So if I think about this workflow that we think of as product analytics, AI really compresses the middle of that workflow. Mhmm. Right? It democratizes significantly how you access the data. You don't have to be a SQL expert anymore.
It shows you interesting themes and patterns without you having to spend a lot of time, and in some cases even themes and patterns you would have missed. And then it helps you very quickly validate or invalidate some of the hypotheses that you have. What it doesn't do is close that loop yet. It doesn't just look at a problem and, because it knows your customers and the history and how you build your product, come up with the right hypothesis out of the box. It's almost like asking a consultant who doesn't know much about your situation, and you're basically saying, oh, I'm seeing conversions drop by x percent in South Africa, but my conversions are up across the board around the world. What could be happening? And that person doesn't even know what your market is like in South Africa, and the fact that there are smaller businesses there while the rest of the world has less competition, more concentration. It wouldn't know those kinds of trends. Right? You as a product manager know those trends. So you can come up with a hypothesis and then actually go on a kind of hunting mission. Mhmm. Yep. So I would compare AI in product analytics to a really good metal detector: it beeps loudly when there are problems, and it tells you there are some interesting patterns. But you still have to do the hard work of going on a fact-finding mission to really see where the treasure is and where the bottle caps are, and to separate the signal from the noise. That process of using human judgment and human creativity in problem solving, and then formulating the action plan based on what you find, is something AI does not actually take over. That's a really good summary.
In a lot of interviews I've had with customers around the change they've seen in leveraging AI for product analytics, I hear them say that they basically get to insight faster. They're not spending as much time wrestling with data and trying to work out if they can trust it. It's more like: that's interesting, because the AI can surface some things, and then they can move on to another hypothesis and go, right, what if it was this? So yeah, we're not really there yet on that context; it still requires the human, but it definitely gives people that faster time to insight, so they can then go and further validate. Emma's made a good comment. She said, do you think that now there are fewer people in your team but you're being far more productive, you're at a lot greater risk to the company should you leave? I don't know if that's a question or a compliment. Yeah. I don't know. But my only comment here would be that there are some advantages to being in this time that we are in, and I would just reframe your thinking to those positives and to the opportunity, right, versus getting into the negatives. My reason for mentioning that wasn't to demoralize people and say, oh, everybody's laying off people and your jobs are at risk. My reason was to use that example as a success story: despite that having happened to our team, we have grown incredibly because of that forcing function, and we have grown for the better. Each and every person on the team will be able to get 10x better jobs in five years, when we are on the other side of the AI transformation, and help companies think through how they should transform their workforce and create that culture of learning and growth.
And so I would just think more about that opportunity. Well said. Well said. We like a good, hard and fast webinar, and I'm conscious of people's time, so we want to make sure it's valuable. So please, if you have any other questions, now is the time to ask. I do have one final question on my side, unless I come up with something very creative between now and the end: what's next? What's one thing coming in AI and product analytics that PMs should really prepare for? Yeah. That's a great question. I mean, I talked a lot about evals here, and I would continue harping on that. If I just take a step back and think philosophically: when we think about the product experiences that we are currently working on, I don't know how long real humans are going to be actually using products. Let's just face it. Right? And maybe I'm too far ahead of the curve and maybe this will never happen, so a little bit of speculative thinking here, but that's what philosophy affords you. Think about it: fast forward five years, when everybody here might be using an AI assistant to get their job done, an AI browser to book flight tickets. Right? Or, let's say you're a customer service agent: an AI assistant gets through your queue of customer support tickets and has already solved, like, 80% of them before you even get to them. So if that is the world that we're going to live in, then some of these experiences that we are creating today are not necessarily going to be for humans. They're either going to be for AI agents, or they're just going to be, kind of, data for AI. And if that's the case, and the world is changing like that, then the only thing that will matter is the quality of what you have in your experience or your product. Mhmm.
And the higher the quality, the more likely it will be used in whatever workflow we have in the future, and the more likely you will get discoverability and usage. Right? So if I think about it from that lens, and if that becomes true, how do you ensure quality? Well, there may be more tools in the next four years. Who knows? But right now, the best tool is evals. And so as product managers, especially if you're working on AI capabilities, you have to really equip yourself with the skill of doing evals: learning how to do them, not feeling blocked by tools, not feeling blocked by skill, but just doing it. It's okay if you fail five times. You will succeed the sixth time. But that's the key thing. Quality, quality, quality is what will drive usage and adoption, and the only way to ensure quality is evals. That's a lot to unpack. There's something there that you said that's interesting to me at the moment, largely because what we do in my work is analyze session experience and product analytics, and what we've increasingly recognized is that agent experiences are going to be increasingly important. And so there was an acquisition of a company that is essentially looking at agent experiences. Can the analytics provide analytics on the agent experience to the human, or to the agent? Which is something where you sometimes have to sit there for a while and write it down: that's the flow of how this thing's going to work. You have an agent analyzing an agent that's engaging with a human or an agent. It's a very strange world. Absolutely. But it's inevitable. Interesting. It'll be interesting times when we have robots. Right now we only have software agents, but then you'll have robots, and armies of robots sitting and doing customer support instead of humans.
So in many ways they act and think like humans, because they are equipped with the best models. Yeah. But because they don't have to show their face to anyone, nobody's the wiser on who was on the other end. But do robots actually operate systems like humans do? Do they have the same behavioral psychology? For example, we use this Hook framework, right? Trigger, action, reward, which uses the human psychology of how humans get motivated to perform an action, and then, when they get rewarded, they are more likely to come back and do that action again. Do agents and robots operate on the same psychology? So everything we know about product development and the first principles of product management could change in that world, if that becomes true. I like it. There's another question here. Hang on, I don't want to finish on that. We're getting down a different path, which is still fun. The expansion of this is really interesting, because I was thinking through how, in the historical past, there were movements like DevOps and that sort of culture, and how that transitioned to other departments. Like, how almost a software engineering approach to product and go-to-market changed how accounting and finance teams operate. Are you seeing anything like that as well from an AI agent standpoint with finance and accounting teams, outside of your traditional world? Are you seeing them operate differently? So I'm not very close to finance and accounting, but I think what's interesting is, my wife was an accountant, and she sometimes worked sixteen hours a day during tax season.
A lot of what she did was just about taking data from a given client, putting it in Excel, and doing some kind of standard analysis. They already have macros and functions and Excel tools for analysis, but she spent so much time organizing all that data to get some kind of answer at the end that would basically check a box in the audit checklist that they have. Now with AI, I can't imagine a human having to go through all that, because you basically put all of these documents in the RAG, and then you just ask the AI: here's the checklist, go through that checklist and tell me the company's financial health and the x y z metrics that I need to know. Right? And yes, there's more upfront prep work, because you're trying to figure out how to create that RAG and prepare that custom AI to give you the right answers, but that's a fixed cost that you pay. And then you create a distributable process. The same thing is happening in finance, where I think the work is still relationship oriented, but the back end of the work was heavy in terms of analysis, and you had to spend your time doing that analysis. That analysis can now happen 24/7 while you're in front of clients, driving those initiatives forward. So there's much less need for back office operations. As you can see, the back office in finance is huge, and now with AI, does that back office start going away? But I'm closer to sales and customer service, and that's where I'm really seeing that transformation happen: organizations were hungry to grow revenue in sales, but they never had the ability to make a case for more headcount. Yeah. Yep. But now they can start unlocking that revenue: the 15% of my prospects I never had the capacity to chase down, now I can chase them down relentlessly.
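The checklist-over-documents pattern described here can be caricatured in a few lines. This is a toy sketch: retrieval is naive word overlap over hard-coded snippets, whereas a real RAG system would use document chunking, embeddings, and an LLM to answer each checklist item:

```python
# Invented document snippets standing in for a client's financial records.
DOCUMENTS = [
    "Balance sheet: total assets 1.2M, total liabilities 0.4M.",
    "Cash flow statement: operating cash flow positive for Q3.",
    "Revenue recognized per contract terms; no deferred issues.",
]

# An audit checklist, also invented for illustration.
CHECKLIST = [
    "verify total assets and liabilities on the balance sheet",
    "confirm operating cash flow",
]

def retrieve(query: str) -> str:
    # Naive retrieval: pick the document sharing the most words with the query.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(DOCUMENTS, key=overlap)

# Walk the checklist, pulling the most relevant snippet for each item.
for item in CHECKLIST:
    print(item, "->", retrieve(item))
```

The upfront cost lives in building and tuning that retrieval step; once it works, running the whole checklist against a new client's documents is nearly free, which is the fixed-cost point made above.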
Or, I have this customer segment which was too small for me to justify the ROI; now, with AI costing pennies, I can justify that ROI and go after them as well. So there are lots of examples, say in customer service, where I'm definitely seeing them adopt AI. Fewer examples of where they have actually measured real business impact, because a lot of them are still focused on productivity, but that will come as well. Awesome summary. Hey, it's been a great webinar. Hopefully you enjoyed your time, and thank you for being so knowledgeable on this topic and taking all of these questions. It's not an easy task, particularly when we chose to turn up together and see how we would go as two humans collaborating one to one for the first time. If there are any final questions, please do jump in. And if you don't have any questions, please take a moment to thank Ayushman for spending time with us to really dive into the future of AI and product analytics and evals. And as we wrap, while you get a chance to see all these positive comments coming through: thank you, Emma, appreciate it. And Montanerio. I'm not very good at pronouncing names, as you can tell, but thank you everyone. The last question I have for you: I want you to finish this sentence. AI will dot, dot, dot. Disrupt. Sorry, that's the only thing that came to my mind. I love it. And it already has. It's a question I've asked before. I have a podcast, Tech Seeking Human, and I've asked that question now to 55 people, and not one person has given the same answer. I think that's incredible. Well, sooner or later someone will give the same answer, but it hasn't happened yet. Kate, good one: "will change our world forever!"
Actually, why don't you guys throw in your answers as we leave? I still haven't had the same answer yet. I've been doing this now for four years, and not one time have I had the same answer. Someone said "do the ironing for me"; that's the most creative one I've had so far. Did someone say cure cancer? That would be an amazing answer. Actually, that was my answer, so that's a good one. That would be the ideal, right? Hopefully AI can cure the things that cause real suffering in humanity. So cancer would be good, or a lot of the medical conditions would be really good. Any other final ones? I'm really keen now to see what these answers are. So now we're in junk time at the end of the webinar. "Never have to manually enter data twice again" from Emma is a good one. Making sure you can trust the data too: getting it and actually believing it to be true, that would be good. "What do us humans do with all this time?" Oh yeah, what are we going to do with all the time? That's still a pipe dream. I've been talking about this for over a decade now, and despite all of the technology we don't seem to get any more time. Maybe we do, maybe we're getting a little bit more time back, and maybe we're not as stressed, I don't know. Yeah, any other final ones? Otherwise, guys, thank you so much. Ayushman, thank you, it was lovely to meet you. I hope everyone enjoyed the session. Kate is saying "walk around the ponds and photograph birds". Love it! I got my Canon camera out the other day with the long lens and really enjoyed it. I enjoyed being manual. I think there is a retro revival coming back too, which should be good. I'm on board with that. Yeah, you can probably see a record over my shoulder; I have a record player sitting behind me. Even though it's not good for the environment, it's important for my kids to learn that that's how music is actually produced, as an album.
Well, it used to be anyway, before Suno. Guys, thank you so much. And "could learn sarcasm", I like that one; I'm very sarcastic. So thank you everyone for joining, I hope you had a great time, I'd love to connect with you all on LinkedIn, and Ayushman, thank you for your time as well. Yeah. Thank you to our audience for the great engagement as well. Thanks, folks. Thanks, everyone. Bye.