Video: Using AI to scale support without sacrificing quality: A CCO's view | Duration: 3108s | Summary: Using AI to scale support without sacrificing quality: A CCO's view | Chapters: Welcome and Introduction (0s), Welcome and Introduction (28.553s), Introducing Ironclad CLM (66.82s), Scaling Support Efficiently (157.095s), AI for Support (329.875s), Pilot Program Insights (616.82s), AI in Customer Support (836.235s), Build vs. Buy Decision (1118.375s), Measuring AI Impact (1394.48s), Unexpected AI Benefits (1928.765s), Future of AI (2285.92s), Concluding Remarks (2691.71s)
Transcript for "Using AI to scale support without sacrificing quality: A CCO's view": Collective. It is with great pleasure that I welcome you to today's session, where our speakers will be talking about using AI to scale support without sacrificing quality. First up, we have Seth. Seth is co-founder and co-CEO of Embrace.ai, and he's joined by Rob, the Chief Customer Officer at Ironclad. Without further ado, I'm going to hand over to them. Enjoy today's session, and I'll see you around. Thanks, everyone. All right, thank you, Nicole. Appreciate you bringing the group together here, and welcome, everybody, to this webinar. Some of you have met; many of you have not. Great to see you all today. Happy St. Patty's Day; hopefully you're doing something fun after this. I'm Seth Halpern, one of the two co-founders of Embrace, alongside my co-founder, Derek Butts. Embrace.ai is an agentic CX and support platform built for B2B businesses, and you'll hear more about it over the course of this discussion. Thanks for joining. Rob, I'd love to hear a little bit about yourself, your background, and Ironclad. Yep, thanks, Seth, and happy Saint Patrick's Day to everyone. I saw someone joining from Dublin, so we can all celebrate Saint Patrick's Day here. I'm the Chief Customer Officer at Ironclad; I'm in my fourth year. I have what I'd call a varied history in a lot of different customer-facing roles, both pre- and post-sales. I currently run a team we call Customer Outcomes at Ironclad, which covers our presales solution engineering team, technical services, professional services, customer success, and our support team. So we've got a really great team. The reason my team exists is to make sure we get customers the right solution, help them adopt it, and help them get business value. A little bit about Ironclad: our purpose is to accelerate business value with every contract.
And if you're not familiar with CLM, or contract lifecycle management: at the highest level, every business out there has to do contracts, whether you're selling your services or goods, or procuring goods. We provide a solution that makes that workflow more efficient and optimized for approvals, redlining, and signature, and then provides a robust repository of your contracts, because there's a lot of value in understanding what's in those contracts. That's what Ironclad does at the highest level. Hopefully, we have some customers out there joining. But I'm really happy to be here. Awesome, thanks for joining, Rob. Before I get into the questions, attendees should know you can pose questions in the Q&A box on your screen, and we'll tackle them along the way. Feel free to post some, and we'll try to get to a group of those. Rob, to start with, maybe talk a little bit about this concept of scaling support without sacrificing quality. How do you think about quality at Ironclad? What does that mean? Is it accuracy? Is it time to resolution? Is it customer confidence, risk avoidance, something else? Yeah, it's interesting. I mentioned what Ironclad does; the personas we support are often lawyers, like commercial counsel or CLOs within a business, or legal operations professionals who are the operations arm of the contracting process. So generally, I'd say they're more risk-averse than your average persona in a business, although that certainly doesn't mean they don't evolve with technology. For me, the most important thing when we work with customers is to make sure they understand and have accurate information, so they can design their contracting process appropriately and have confidence in the output, which is usually a signed contract. Okay, thank you.
Before you introduced AI, where were you feeling the greatest pressure around scaling? Was it ticket volumes, the complexity of issues the team was facing, fragmented knowledge, overall staffing, requests and expectations from your customers, or something else? Yeah, it's interesting. We're a relatively fast-growing company; Ironclad has been in existence a little over ten years, and we're continuing to grow. We just went over $200,000,000 in revenue; that was publicly announced. I'm sure a lot of you out there at fast-growing companies know that when you're trying to scale customer success and support as part of that growth, it's a real challenge, because you can't just add bodies as you add customers. So for us, the biggest challenge is how we scale. In fact, we went through annual planning last year; we have a new CEO, Dan Springer, who joined and really challenged all of us to ask, how can you scale more efficiently? A big part of that was thinking about AI, for sure. The problem we were trying to address was that we're adding a lot more customers. We've got over 2,000 customers now, and the base goes all the way from what we would call mid-market customers to large enterprises. So there's a broad range of customers we're trying to support with a smaller number of people. For me, the challenge is how we do that effectively, without sacrificing the quality we talked about before. Yeah, it's an interesting dynamic that we hear across a lot of our customers and prospects: this dual dynamic of growth, and then scaling through that growth while still providing a great experience. Yeah. The only thing I would add is that before AI, I felt like we were doing a good job.
Our support team and our scaled customer experience team have done well. I come from a customer experience background; a lot of the vendors I've worked for were in that space. So we have a good help center, we have knowledge and content, we have a community; we're trying to do all those things to serve customers. But even with all of that, as we ramp, we asked: how can we do even more with less? Totally. Let's get additional context. What was the trigger or catalyst for you all to embark on an AI initiative with Embrace for your support and customer experience organization? Was there some sort of moment, an internal directive, or another dynamic? Yeah, there were a couple of things, and if anybody out there is in a similar role, you're dealing with the same thing. Probably more than a year ago, we had gone through planning, and I was thinking: there are a lot of great things going on in the market; what can we use? So in general I was trying to think ahead; I knew we were growing. One of the inspirations for me was events we do called Ironclad Local: day-long events we'll run in New York or San Francisco; we've done them in Austin, Seattle, different cities. We would set up what we call customer solution bars. We'd have different events through the day, but in the common space we'd have these little desks, maybe six of them, and people could book twenty or thirty minutes to talk to a CSM, one of our consultants, or other folks we'd have staff them, bring a problem, and say: hey, I'm trying to write this report, or I'm trying to figure out this workflow issue. Can you give me some feedback?
And the overwhelming sentiment from customers was: we love this. In my head, I was thinking, how do I take that experience and do it well at scale? That was really the inspiration to start thinking about how to use something like an Embrace tool to replicate it. Okay, all right, thank you. Can you unpack a little more how you thought about where to start? We see companies often have a variety of different use cases within their customer organizations and beyond. How did you narrow down to a short list of AI-centric use cases to begin with? Yeah. The other interesting thing is that we had deployed an internal AI agent on Slack, from a different vendor, that people could go into and ask, say: hey, what are the limits on such-and-such within Ironclad? You would get not necessarily a customer-facing answer, but an internal answer based on what it was able to understand. In my head, I thought: I want to replicate that online. I want a place a customer can just go, ask that question, and get a good answer. What was clear to me was that the content behind that internal tool wasn't curated. Then I thought: well, I have this curated help center content. I have a really great learning experience design team; Bill Kelleher is the guy who runs that team, with a great set of content. So let's start there. But what was interesting is I quickly realized it's a broader issue than just that. We have a support team, led by Erin Skagen, that is replying to customers all day, every day. And then I've got this build team that is also building new content. What was interesting as we started looking at these solutions was realizing that all those things were interconnected; it wasn't just about providing one specific customer-facing tool.
So that was an early learning I had. I started thinking about this, and I realized those teams weren't necessarily aligned with my vision. I had to say to all three of those functions: this is how these things work together; I need you all to collaborate and make sure we can get to a solution, which they did. That's where I had to start thinking about which of those things would come first. What was clear to me was that the support agent experience was one I could control with less customer-facing risk, and that's where we ended up starting: just getting support agents to use the tool. Totally. Building on that a little bit: in the case study Ironclad published with Embrace, you talk about taking a pilot-first approach. Yeah, right? So you had a sense of what you wanted to do, and then you tried it before formally moving forward with the initiative. Can you talk a little about that pilot: scope, duration, your success criteria, anything you'd share with the group? Yeah. We had support agents using the tool to help craft responses to customers. That was the initial way we could vet it and say: all right, is this thing giving us valid content? And we saw initially that the agents were saying, this is great. We think about our inbound support tickets in two categories. One is a "how to" category, which is just: how do you do this? It's mapping a question to an answer, and that's a big chunk, probably around 50% of our questions.
So that's the obvious low-hanging fruit. A support agent knows the answer; they can link to a help center article. But if they have something that lets them write a response that's 90% there, they can just tweak it. That was the initial part of vetting it: okay, is this thing worth it? And it was clear that, yeah, this thing has some legs; we can move in here. Then, for the pilot: Caitlin Wood runs our scaled customer experience team, and we have a program we call the adoption accelerator program. We'll pick a specific aspect of the product that we want to drive people to start using more, and run a combined digital and live session program around it. What we did was expose this customer-facing tool just to the people participating in a cohort of that adoption accelerator program. That allowed us to get some early, real customer usage of the agentic self-service solution, with feedback, in a way we felt was low risk, because we could control the cohort of people using it. In terms of success criteria: I'd like to say there was some number where, if we hit it, we'd know it was successful. The reality in my head was, one, I don't want any customers saying, what the heck, this is not a good experience, and that certainly didn't happen. But I also wanted to see some positive signal: yes, this is getting me an answer. Are we actually replicating that solution bar experience I mentioned? That's a warm-and-fuzzy sort of success criterion, but it gave us enough validation that we wanted to try this at a bigger scale. Great, that's super helpful background. I see there's a question or two in the chat. Sure. Ryan, thank you very much for going first, appreciate it. Let me pose this impromptu to you, Rob.
What tasks have you seen AI handle poorly in CSM workflows, and how did you course-correct? Yeah. What I would say is that this agent is not supporting the customer-facing CSM role in particular. What it's really allowing us to do, just for clarity, is a couple of things. One, we have an agentic self-service solution that our customers are using; our employees use it as well for customer-facing answers. Two, our support agents use it to help craft responses to customers, which works very well. The managers on the support team also use it to QA responses; if we have time, we could talk about unexpected benefits, and that was one we did not anticipate at all. It has allowed our support managers just to look at tickets and ask: did we do a good job on this or not? And then the other team using it extensively is our content team, Bill Kelleher's team, which I mentioned; they use it to help craft new content. There's a feature, I think it's called knowledge gaps or something like that, Seth, where the tool will say: you're missing content in this area. We've identified over a hundred areas where we want to create new content, and there's also, I think it's called suggested content, where it will actually suggest content that could help serve questions better. So that's where we're using it. In terms of handling things poorly: we definitely see there's a kind of limit. If you think about our software in particular, you're building workflows, and there's complexity to specific situations. I think there's a limit at which the content and the AI agent can really serve up answers that help a customer get to a final solution. There may be times when a human needs to be involved.
And so one of the things we're trying to figure out is how to trigger that escalation point in a more seamless manner. If we think about "poorly," or maybe the limits, that's one of the things we know is still on the path for the next sprint. There's another question in the chat, which I know is on the minds of a lot of chief customer officers and support leaders. Melody posed this question: relative to your team, I'm interested in how you introduced this to your team, and the employee experience and impact. Also, are you communicating how you're leveraging AI? What has been their reaction? Yeah, it's a really good question. A couple of things. One, as a company, like any company right now, our leadership is very much asking: what AI tools are you using, and how can you use them to be better at your job? From our CEO down, everyone is challenged to do that, and we have other AI tools we're using internally. But from a people standpoint, there was a legitimate concern from our support team in particular: is this going to take my job? We were very mindful of that. As we went through the pilot and assessed how people were using it, I think it, one, helped give them confidence that it allowed them to do their jobs better. And, transparently, and hopefully this is helpful to at least some of you: as we went through planning for the fiscal year we're in now, a big part of what we did was say, okay, if we scale at this rate, if we add this many customers next year and grow at this rate, how many people would we need under our current staffing models? We said we'd need this many people. And then we said: all right, let's make a bet on efficiency gains, with these types of tools as a big part of that. We can get this much more efficient.
What does that then allow us to reduce? I try to be very transparent that what this means is we can scale with fewer people in the future, but we're not trying to impact people's jobs. The other thing I try to position, and I'm continuing to do this, is that if we now have less need for somebody to answer "how to" tickets, rather than just saying we don't need that person anymore, how do we pivot that person to provide other value to our customers where face-to-face or human time is important? We have enhanced support offerings that we sell today, and I want to continue to grow that program for our largest customers, so I can pivot some of the capacity I have today into actually selling more to customers with those people. That's the way I'm trying to think about it. I'll acknowledge, though: if you're in a different place in your business, where you're not growing quickly or you're just under pressure to show cost savings, it might be a different conversation. So I'm not suggesting this is true for everybody, but that's how I thought about that question. And then, in terms of confidence in the tool: people have to build confidence in it, and if you're getting good results and it's helping you, that's the biggest needle-mover. Yeah, okay. All right, thanks, Rob. And I appreciate people dropping questions into the chat; I'll continue to work through some of those as we go. It's interesting: at Embrace, we're continually talking to a lot of different companies, and there are some that are contemplating this build-versus-buy question. Yeah.
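The capacity-planning bet Rob describes, projecting headcount under the current staffing model and then applying an assumed efficiency gain, can be sketched as simple arithmetic. The figures below are illustrative only, not Ironclad's actual ratios or targets:

```python
import math

# Illustrative capacity planning: agents needed with and without an
# assumed AI efficiency gain. All numbers here are made up for the example.

def headcount_needed(customers: int, customers_per_agent: float,
                     efficiency_gain: float = 0.0) -> int:
    """Agents needed to cover `customers`, where `efficiency_gain` is the
    fraction of per-customer workload removed by tooling (0.0 to 1.0)."""
    effective_ratio = customers_per_agent / (1.0 - efficiency_gain)
    return math.ceil(customers / effective_ratio)

baseline = headcount_needed(2600, customers_per_agent=100)  # status quo
with_ai = headcount_needed(2600, customers_per_agent=100,
                           efficiency_gain=0.20)            # 20% efficiency bet
print(baseline, with_ai)  # → 26 21
```

The point of the model is the planning conversation Rob mentions: the delta between the two numbers is the staffing you commit to not adding, which is where the bet either pays off or doesn't.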
Do you have your in-house dev or IT team build a solution, or do you buy a packaged solution? In the history of software, that's been a question across all sorts of domains. Can you talk a little bit about how you and Ironclad thought about build versus buy for this particular solution? Yeah. In my head, it was never an option to try to build this. I've had this conversation on the other side many times, not just here but at other vendors, where people say: we could build this thing. Unless it's your core competency as a company, I generally feel the buy motion is going to be better, though obviously that's not a hard-and-fast rule. In this situation, the bigger nuance is the in-product experience versus the out-of-product experience. We had this conversation with Embrace as we were evaluating things, because there are cases where you can take the Embrace solution, or other AI solutions, and embed them inside your product experience; that's true of other support CRMs as well. We very much said we're not going to fight that battle, so to speak, initially. We want to prove this out across our customer-facing services: our academy, our help center, the support experience. The reality is you could go to any engineering team and they'd say: well, we could build that, and we're building AI into the product anyway, so why don't we just add that in? For us, the key thing was the integration you had to our Zendesk solution; that was a key way we could stand this up. If you think about build versus buy, we stood this up in weeks and had a proof point very quickly.
And so to me, it was a pretty obvious decision. But again, that's because we were trying to move fast and prove it out quickly. Can you talk a little more about your decision-making criteria as you thought about making a vendor decision? And anything you can share with the group in terms of choosing Embrace, and why you selected us? Yeah. From a technical standpoint, the out-of-the-box Zendesk integration was a big help, because we could stand it up very quickly. I wouldn't even say pretty seamless; it was a seamless experience, as was the ability to quickly create that agent on our portal and have it be functional right out of the box. That was a huge element. The other was that we could iterate very quickly with your team on understanding the solution. I was talking to Bill earlier, who helped roll this out. One of the things you're, of course, worried about is what the agent is actually telling customers: how does it express things, and how does it handle certain situations? We quickly learned we had to tweak it a little to make sure it wasn't apologetic; we just want to be very clear about what's true, what's not true, and what your options are. The ability to work with your team on that was really helpful. And, and this is maybe a soapbox moment for me, so spare me for a moment: when we had our SKO a couple of weeks ago, I gave a keynote for our internal Ironclad team, and I talked about this idea of meeting our customers' heads, their brains, and their hearts. Right?
We have to do both, in the sense that we have to tell our customers factual information and help them achieve concrete things. But if we're going to be successful as a business, we have to connect to their hearts, meaning we have to connect at a relationship level, on a personal level; we have to build trust and connection. And I think kudos to your team, Seth, in the sense that that's always been very true here: we're solving problems, but it's a very collaborative effort. I don't take that for granted in a vendor, so that was a big part of it. Okay, thanks for sharing. I'll pose another question from the chat, this one from Dave Osler: I have a question. Rob mentioned something earlier that surprised me. You were talking about your growth to 200,000,000 and how you can't just add people to cover the growth. I'd like to learn more about what you meant; our organization feels adding people handles growth. Okay. Yeah, obviously, at some point you're adding people no matter what. To be more precise: let's say you have 100 customers and five CSMs. If you go to 200 customers, does that mean you go to 10 CSMs? If you go to 300, does that mean 15? At some point, you can't scale linearly as a company. That's really what I meant: when you look at the back-end financial metrics of how you run your business, you're trying to meet certain targets, and so you have to think about ways to scale more effectively. Of course, it's very dependent on where you are in your stage and what your financial goals are; there are lots of factors. So don't take what I'm saying as universal to anybody. But that's the situation we were in and how we were thinking about it. Okay.
Another one from the chat, this one from Tim: which operating model have you seen work best? AI as deflection, AI as agent assist, or AI as a proactive experience layer? I'll try to answer that question indirectly. The thing that surprised me most out of this whole experience was actually the impact this had on our content and those knowledge gaps. Our content team said they're now building content 50% faster than before, which is huge, because, talking about scaling, that's a team we're not scaling with the business, yet they provide a massive amount of value. So that's one thing, and I don't know if you would categorize that as proactive, in the sense that we're now looking at opportunities to build content to serve customers better. We definitely saw gains on the agent assist side; what I mean by that is our support agents saw, I think, a 40% gain on the "how to" tickets, in how quickly they can craft responses to those. And then, because I've been reading about this lately: as CS leaders, there's this idea that AI is just an efficiency tool, but that the real value, and maybe this goes, Seth, to your maturity curve, is how the agents start to actually do more operational work over time. For sure, where we're at, in my opinion, is the efficiency level. Our biggest value has been helping serve customers; we're doing hundreds of these conversations a week with our customers with this AI agent. And you ask yourself: either that person would have been contacting support, or they just wouldn't have gotten their question answered. It's hard to measure which of those would be true.
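The two figures Rob cites, "how to" tickets being roughly half of volume and agent assist making those about 40% faster to draft, combine into an overall drafting-time reduction by a simple weighted sum. This is back-of-envelope arithmetic on the stated numbers, not a measurement Ironclad reported:

```python
# Blend per-category gains into an overall fractional time saving.
# Each entry is (share_of_ticket_volume, fractional_time_saved_in_category).

def blended_saving(shares_and_gains: list[tuple[float, float]]) -> float:
    """Overall fraction of drafting time saved across the ticket mix."""
    return sum(share * gain for share, gain in shares_and_gains)

mix = [
    (0.50, 0.40),  # "how to" tickets: ~half of volume, ~40% faster to draft
    (0.50, 0.00),  # everything else: assume no gain, for a floor estimate
]
print(f"{blended_saving(mix):.0%}")  # → 20%
```

A ~20% floor on overall drafting time is one way to sanity-check the efficiency bet from planning against what the pilot actually delivered.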
But we're definitely serving customers better, more efficiently; that's where we're at right now. So, hopefully that answers your question. Yep, that's helpful. You've spoken a couple of times about the knowledge and content piece, which underpins a support experience, and an AI-driven support experience, in a lot of ways. Is there anything else you'd share around your overall knowledge base and content: how you manage curation, freshness, tone? Pretty much every company we talk to feels that their knowledge bases are incomplete and their content is messy. Yeah. So how do you think about that and make sure the AI experience remains really trustworthy? Yeah. Again, Bill's team is very small relative to the content they're supporting. They have a system, and I couldn't tell you the details, for how they validate content, particularly when we're releasing new capabilities, to make sure it's accurate. But that's, again, where AI is the obvious answer to help assess those things. It's also great at looking at content and suggesting a better format, or, like you said, tone, or things like that. And that's where, again, I was super surprised; out of left field, I didn't think about the impact it would have on that team. He said to me: we've basically cut in half the time it takes to create good content. The other value here was that we saw the knowledge gaps. We quickly discovered that one of our main integrations that we support was generating so many questions from people trying to figure it out, and we realized we needed better content there.
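The "knowledge gaps" pattern Rob describes, many inbound questions on a topic with little matching help-center content, can be approximated with a very naive counter. This is a toy sketch of the idea only, not how Embrace's feature works; the topics and thresholds are invented for illustration:

```python
from collections import Counter

def find_gaps(question_topics, article_topics, min_questions=3):
    """Topics customers ask about repeatedly that no article covers,
    most-asked first."""
    covered = set(article_topics)
    counts = Counter(question_topics)
    return [(topic, n) for topic, n in counts.most_common()
            if n >= min_questions and topic not in covered]

# Hypothetical inbound question topics and existing article topics.
questions = ["salesforce-sync"] * 5 + ["redlining"] * 2 + ["sso-setup"] * 4
articles = ["redlining", "signature", "sso-setup"]
print(find_gaps(questions, articles))  # → [('salesforce-sync', 5)]
```

In practice a real system would cluster free-text questions rather than rely on clean topic labels, but the output is the same shape: a ranked list telling the content team where to write next.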
It's not that it wasn't working; it's just a big part of what customers are trying to do. So we realized we had to curate and improve the content there, and Bill and his team were able to tweak the agent to respond to those questions more effectively. Okay, all right, thank you. I know that measurement and ROI are definitely on the minds of our customers and prospects. I think for many companies, there's a lot of focus on making spend decisions, aligning them with benefits, and achieving those benefits. Can you talk a little about both quantitative and qualitative measurements? How did you all think about ROI? Anything you'd share in terms of actually achieved ROI? You published some of this in the case study, but anything you'd hit here. Yeah. At the end of the day, the hard reality is that the ROI manifested in our annual planning, which was: okay, we think we need this many bodies to scale next year, and if we're going to make these bets on efficiency gains, we think we can do it with this many people, which was less, quite obviously. So that's how, in practical reality, it manifested. ROI models are useful, but it's not like my boss came to me and said, prove to me you achieved this ROI on the spend. I think they're most useful when you're doing that planning or making big decisions: you're holding yourself accountable to whether you can achieve the things you want. If the ROI revolves around more revenue, you're making a bet: I can go sell more because of whatever I'm doing. In any case, that's how I think about it.
But it manifested in our ability to create content and our ability to service inbound support tickets more effectively and more efficiently. The qualitative aspect was really about: we can now see the history of all the prompts customers ask us. Are those prompts positive? Are people getting the answers they need? One thing I would say, and I think we'll probably talk about future steps: today, if somebody's using our agent on the portal, we don't know who that person is. It's not because of technical limitations; we just haven't done the work to check whether the person is logged in and authenticated. But in general, when we look at the tone of those questions, they're overwhelmingly positive. So on the qualitative side, it was important to me that we're actually serving up really good content. At the end of the day, I still fundamentally believe that if we serve our customers well, that's how we're going to be successful as a business, and when you see those conversations, it's a real manifestation of whether that experience is true or not. Okay, all right, thanks, Rob. For the group here, I just posted a link to the Ironclad ROI case study in the chat; you can link through and access the PDF that way to see more detail. You touched a little bit on budgeting and so on. There's a question in the chat from Tarrou, hopefully I'm pronouncing the name right: piggybacking on the question about growth, how is AI budgeted for the CS organization? For me, personally, there's no special AI budget. As you go through planning, we have technology spend, and you're placing your bets.
We have a couple of other AI solutions we've used as part of our support and CS strategy, and this is part of that. But again, you can think about it as: you've got a finite number of dollars to go spend, so where are you gonna spend it to achieve your business goals? Fundamentally, that's the trade-off in my head: where are we investing that money? Okay. Hopefully that helps. Yeah. Well, coming back to the ROI piece and something you mentioned earlier in this conversation, you referenced unexpected benefits, right? There were things you embarked on this initiative to bring into reality, and then there were some things you found post go-live that you hadn't banked on. Can you share a little bit about some of those unexpected benefits? Yeah. I think one of them was around the QA aspect. Erin Skagen runs a great support team, and we actually started more than a year ago: we have a partner we use to provide 24/7 support, so people that are essentially subcontractors for Ironclad. It was important for us to be able to validate: are they providing the same level of responses and support to our customers as our internal employees? But also, for internal employees, how do we QA it? So that was one aspect: using that agent, it could assess any given interaction and provide recommendations. A pleasant one for me was, I was looking at the prompts, and all of a sudden I see somebody asking for the response in German, and it just provided the response translated. That was a total surprise to me. We obviously have international customers, and it's a challenge for any company as you scale globally to have language talent.
That was definitely one I didn't anticipate at all, was really great to see, and is gonna help us in our international markets. I'm trying to think of other things that were pleasant surprises. I think just, again, seeing how people interact. There are definitely conversations where you can see people get it, and they treat it almost like, I'm talking to a consultant from Ironclad, and they ask questions and follow-on questions. You never really know how people are gonna adopt, and everyone's at a different place with how they use AI agents and their comfort level. But it was really nice to see that as well. People realize: okay, this is actually not a full-time employee for me, but it's somebody I can call at any time. And that original vision, if you go back to the solution bar, is what I was hopeful for. Okay. Your comment about the German prompts was timely. There's a question in chat from Tork about global impacts. The question is: are there countries or languages that maybe are less accepting, where the AI is less effective? Have there been any observations on that front? Yeah. To be honest, I can't answer that question. It's a really good question to understand, and I think that goes to what I wanna do in the future here. Actually, the other unexpected thing is that the value of the data in the prompts is so remarkable. When you have this library of questions people are asking you, not just as a single sentence but as a full interaction, it's really valuable product data for our EPD team. We can use AI to assess the trends, and there's a great dashboard in the tool to be able to look at this stuff.
But it really helps us understand what our customers need and where there are unmet needs, whether we're meeting that through additional content or whether it's a product gap. That's a huge benefit. But today, we don't have that data fed into our internal data warehouse, and that's something we definitely wanna be able to do because it's a gold mine, frankly. And what will make it even a platinum mine is once I can correlate the conversation to who's asking it. Then I know: okay, people in the mid-market segment are asking about this, or people from such-and-such country are asking this. That's really gonna unlock a lot more. So there's so much more we want to do; we've just gotta work out some internal identity management stuff, but it's definitely something we wanna figure out. But yeah, I don't have much insight yet on that one. Okay. Yeah. Thanks for sharing all that. I think, like many of our customers, that ability to use our insights capability to understand common topics can be a real light-bulb moment in terms of what people are asking about, which can drive knowledge content, training, feedback to product, and lots of other things. Yeah. And I see another question on CSAT before and after, and I thought I could quickly address that one too, from Sarah. The answer is, we haven't seen a material CSAT impact. We measure CSAT after our support experience, so that's what I'm specifically talking about, which was interesting because I really didn't know what we were gonna see. But we didn't have bad CSAT to start; our team was doing a good job. So I would say we certainly haven't seen a negative drop, but it's not like we saw this material jump where suddenly customers are that much happier.
That's on the support experience, when somebody's opening a ticket with us and we're replying back. That was interesting to see, because I thought we would potentially see it, but I think we were doing a reasonable job before, and this is allowing us to do that better at scale. And then there is a bit of a gap, I would say, in the CSAT of the actual agentic experience. People can leave feedback there, but we don't get a lot of it, and we don't have a very reliable way to measure it other than assessing via AI the tone of the interaction, which is overwhelmingly positive. But we don't have a specific CSAT measurement for that self-service experience today. Okay. I'd like you to look into the future a little bit and think about the next, I don't know, call it twelve to twenty-four months. How do you think about the balance between technology and humans in your domain? And what advice would you share with a peer, a CCO or support leader, who's thinking about starting with an AI initiative? Yeah. One is just that everyone's on this journey, right? So I don't assume to have all the answers here, for sure. But I think the pilot approach is helpful. Even as I think about new things we wanna do, we wanna be bold, but we wanna be smart about how we're bold, is how I would say it. There was a good post I saw yesterday talking about CS leaders thinking of AI as just efficiency gains, and you've heard me talk pretty much for the last thirty minutes about efficiency gains. But if that's all you think it is, you're missing the point. And I don't think I'm missing the point; I do understand the future potential here of what is possible.
But the efficiency gains are an easy way to get initial traction, looking at parts of your business where that's a win you can get. The question is then how you build on that. So the idea is, how do you start to operationalize? And that goes back to your maturity curve, Seth: how does this agent become an operational part of your business, not just something providing content or information? There are definitely tasks today that customers may wanna do that they could ask this agent to do, and then we could help them get that done, whether that's in the product or internally, logistically, administratively with our company, among other things. So there's a lot of untapped potential. But even through that lens, I would still think about: all right, what are the things that are low value from a human standpoint that we know this automation could drive? Then let's tackle those. And I'll go back to that head-versus-heart thing. I don't wanna lose the idea that we connect with our customers at a human level, and we have to continue making them feel that experience. But I do think customers are more than happy to take the automation path, the AI path, when they know it's a routine task or they're just getting information. So anyway, that's how I think about it. But I also sympathize: for anybody trying to figure this stuff out, there are so many vendors, and things are changing so rapidly. That's why I do think you have to be very discrete about the problems you're trying to solve and move through them so that you can have some adaptability. Well, on this theme of starting somewhere and advancing on a journey: you've referenced a couple of times this notion of an AI maturity model, and
I'll put it in chat so people can look at it directly. We've recently published a piece on this notion of an AI maturity model, because many companies are thinking about how to start, where to start, and what the progression looks like over time. We've identified three phases: one is answer automation, one is around operational optimization, and one is really about revenue, retention, and expansion. Can you talk a little bit about where you see Ironclad on this maturity model journey? Yeah. I think we definitely got phase one done, in the sense that we're doing self-service, and we have agent assist, let's call it, for our support agents, helping them work with customers more effectively. I'm confident that's reduced our ticket volume. And my theory, which I can't really prove with data, is that customers are engaging that would otherwise just try to figure it out on their own, because it's so helpful. Right? And then I do think we've started to think about how we move into phase two of your model: how do we start to think about workflow and cost management? For us, the phase three aspect, the revenue, is definitely not something we're trying to do today. But you can definitely see how I could start to make recommendations. The other aspect is that I can't authenticate today. But if I could, and I could go look and see what the entitlements are for the person asking a question, I could say: okay, did you know that we actually have this offering that would do this thing for you? And how would you like to proceed? Right? So that's something I haven't even really thought through yet.
But for sure, that's where you could start to see this AI agent starting to drive revenue. I'm always cautiously optimistic, so I think we're clearly in phase one, but I see the opportunities to help us move up the curve. Okay, great. We're coming to the tail end of the prepared questions we had, and I've got one more from the chat that I'll share, but if folks have additional questions, please do drop them into the Q and A. Is there anything you're doing as it relates to onboarding and/or training with AI? That's a question from Tom. Let me see: are you looking to improve onboarding and/or training with AI? So yeah, for sure, this is a great tool. You've got this agent, you can ask it anything about the product, and it does a really good job of answering very specific questions. So for sure, it's something new employees are using. We're hiring people who haven't used Ironclad before, and we want them to be in front of customers as experts as quickly as possible. An interesting anecdote: we had a company all-hands, and I demoed the customer-facing solution to people, and so many people were so happy. They said, this is amazing; we've been talking about how to do this for our customers for so long. But then people also said, yeah, and this is great for new employees too; they can learn so much. So I don't know if we've intentionally incorporated it into an onboarding program, but the benefits are internal as much as external in a lot of ways. Okay. I see there's a question that just came in from Chris: how do you start the interview process to find an AI partner?
What were some of the standout questions you asked, or insights they offered, that made you say, oh, these folks know what they want to do? Yeah. There are so many solutions out there. I had connected with one of your people at Embrace whom I had known from a previous job, and we were talking about it. For me, what's important is: do you have proof points you can see with other customers at a similar size and scale to what we're trying to do? I think that's an important question to ask. And as much as you can, talk to people that have gone through the experience with a vendor to understand what's needed. I've been on both sides of this; I was selling, and even today I'm helping sell to our prospective clients. Really try to understand where the hurdles have been, plus the benefits. And then try to assess the vendor: do they understand your business? Do they understand the problems you're truly trying to solve? That's just intuition, I think, and making sure you're comfortable with it. I sympathize; I'm in the same boat. There are so many vendors out there trying to solve this in different ways and from different angles. I think if you can pilot with multiple prospects, or at least get your hands on stuff, it's helpful, and get it into the hands of the people that are actually going to use it. So those are just some thoughts. Okay. Thanks. And obviously, we welcome folks to do a pilot or a trial of Embrace. I'll share in a minute or two how to get in touch with us to have a conversation. I see a question in the chat from Ori: what is the agent relying on when it comes to data accuracy? For example, if there's a change in the product or a new feature, how do you ensure that the agent output remains up to date? Yeah.
So, I mentioned Bill a couple of times; his team is doing the content. There's a feedback loop. It's part of the process, and we have a release process for new content. I don't know specifically how he's doing it, but I'm pretty confident he's using the AI to say: okay, if this feature changes, where is the content that might reference it? It's probably not a perfect system, but that's definitely one thing we're doing. And there's a constant process that team runs around freshness of data and manual review of data to make sure it's been updated. But clearly, we're trying to make sure it's accurate, because, again, one of the fundamental things we wanna do is make sure we're giving people the right information. All right. Thanks, Rob. Great questions from the group, and I appreciate your answers on all fronts. I'm gonna head towards wrapping up this conversation, and I'll share a couple of things to consider in sixty seconds. We'd love to have a conversation with this community. We were excited to partner with Customer Success Collective and obviously Rob and Ironclad on this, and we would welcome the opportunity to engage with you. I've put my email address up on the screen, seth@embrace.ai, and my co-founder Derek's, derek@embrace.ai. We'd love to have a conversation with you and learn more about your goals. We can share a customized demo, and we can head towards an unpaid trial for anybody who's attending, so you can actually get your hands on the entirety of our product before you make a buying decision. I did put the AI maturity model link in the chat, but it's something that's worth taking a look at just to shape your own thinking.
We think about it as vendor-agnostic, but it can shape some thinking around what the potential phases are to go through and where to consider starting, and you can find it online at embrace.ai/aimaturity as well. First of all, I want to thank Rob for being a customer and sharing his story in this way. You've got a lot of things pulling at your time, so thank you for sharing your experiences with this community. I'm sure, on behalf of the group, everybody appreciates it. We certainly appreciate it at Embrace. And thanks to the group as a whole for joining; you're investing your time in learning more about this market and the opportunities, so thanks for joining today. With that, we'll close. Appreciate everybody's time, and Rob, thanks again. Yeah. Thanks so much, y'all. Take care, Seth. Of course. Bye, all.