Video: Build portfolio visibility leadership actually trusts | Duration: 39:20 | Summary: Build portfolio visibility leadership actually trusts | Chapters: Welcome and Introduction (1:09), The Resource Tracking Problem (3:15), The Pie Shop Framework (6:21), Investment Themes Framework (9:00), Real-World Applications (11:26), Building the System (14:17), Three-Tier System Walkthrough (17:23), AI and Key Takeaways (26:56), Cross-Team Investment Tracking (30:12), Resource Allocation Flexibility (31:27), PDLC Tracking Implementation (32:57), Q&A and Closing (36:21), Closing and Connect (38:52)
Transcript for "Build portfolio visibility leadership actually trusts":
Alright. Hey. Welcome, everyone. Really excited to be here today. My name is Kasim Alani. I lead the product operations team here at Flatiron Health, and I'm excited to walk you through developing a playbook for a system that you can own and the entire business will benefit from. Before I get into the content, I'd love for folks to drop an answer to this question in the chat: how many of you have a system today that can tell your chief product officer, chief technology officer, or really anyone in leadership, within minutes, what percentage of your R&D spend is going toward new product development versus keeping the lights on? I'll give folks thirty seconds of awkward silence to answer, and then we'll keep going. For those of you who do have a system like this today, you're already ahead of most organizations, and maybe after this chat I can learn from how you've set it up. And don't fret if you don't have this system today; I want to show you how you can get there and apply this framework within your organization. By the end of this webinar, I'm going to introduce a philosophy I hope you take away with you, called being precisely imperfect: doing things well enough to be useful without over-engineering or over-architecting your system. I'm going to introduce a mental model, a metaphor I refer to as the pie shop. Yes, we're going to be talking about pies today as a way to think about resource allocation. I'm going to give you a three-dimensional framework that works across different parts of your organization (R&D leaders, finance, and leadership), and then give you a sneak peek behind the curtain at the system we built at Flatiron, embedded within Airtable. 
So if you're an operator at your organization, maybe you sit within product ops, I'm going to show you how you can make your team the one that bridges R&D and finance, instead of the person stuck reconciling spreadsheets for a week every time leadership asks a question. For those of you who sit within product ops or other operational functions, I want you to take some time as you listen to today's webinar and think about what your function can be. The work I'm about to show you is not just about making product managers' lives easier or PM enablement. It's about how operations functions can build systems that serve the entire org, from engineering teams to the C-suite to the board. That's the lens I want you to hold as we go through this. But before I show you any of that, I want to describe a reality you've maybe felt at your current org or in past roles. You're in a meeting, or maybe on the receiving end of the outcome of a leadership sync. Your CPO is doing board prep, and the CFO asks them: what percentage of our engineering time is going toward our strategic priorities versus keeping the lights on? Before I tell you what happens next, I'd love to hear in the chat if you've ever been the person asked to pull this together. If you have, you're in the right place. Here's what happens next. Engineering says check Jira. Finance says we have the budget breakdown, not the work breakdown. And then someone, in my organization that was me, would open six or seven different spreadsheets from six or seven different product groups, each one named something ridiculous like Q3 Allocation Final v4 Revised. And then an operator or chief of staff, maybe you, spends a week or more reconciling that data. 
And when the answer finally arrives, let's be honest, no one really trusts it, and more likely than not, the decision that needed that data was made without it. Here's the thing: most organizations fall into the trap of a false binary. Option A, track everything. They require engineers to log hours in Jira or codify a system for tracking story points. We know that's burdensome and inconsistent, engineers resent it, and, honestly, in my experience it's probably not more accurate anyway. Option B, track nothing. Engineering is perceived as a black box, finance guesses based on headcount, and leadership loses trust. Both of these options fail. So who's ready to talk about some pies? You're never going to get me to answer which is better, sweet potato pie or pumpkin pie; I'm definitely having both. I want you to think of your R&D organization as a pie shop. Your portfolio of pies is your product portfolio. Different pie flavors are your products. Ingredients are your investment themes, bakers are your engineers, and the recipe is how you allocate resources. Each pie flavor requires a different ingredient mix. An apple pie needs more fruit; a custard pie needs more dairy. Getting that composition wrong means a mediocre pie. R&D investment allocation, as we've conceived it, helps us answer three fundamental questions. One, are we putting the right ingredients into each pie? Two, do we have enough total ingredients for all the pies we promised? And critically, three, is our mix aligned with what we told our investors we'd bake? Looking at flour alone tells you nothing. You need to look at the full menu with ingredient breakdowns, and that will tell you everything. So how do you actually go and do this? This is the most important slide in the deck. It's the framework I mentioned at the top of the presentation: being precisely imperfect. 
The temptation when you build a system like this is to track everything. People often equate more variables with more insight, but that's wrong. If you create too many dimensions, every question requires an additional translation layer, you lose the ability to have macro conversations, and over time the system becomes a burden, not a tool. So as you begin these conversations with your R&D leaders and finance, limit yourself to a few key dimensions that enable macro conversations and let people ask high-level questions, with the ability to drill in if something feels off. The goal isn't to know every little detail about what's happening. The goal is to identify where there might be hot spots, trade-offs, or key decisions that need to be made based on the data, and zoom in where appropriate. One thing to flag before we go further: if you're in a tech organization doing SaaS and your organization does software development capitalization, how much precision this system needs depends entirely on how your finance team expects to defend those numbers. For us, an intentionally high-level approach has worked quite well, and hopefully that will also be enough for you. Others might need to layer in more; know that going in, but the key is to get started on the right footing. So what are those dimensions? Before I show you, think about this: if you were asked to categorize all of your R&D work into buckets, how many would you use? Five, ten, twenty? Hopefully not twenty. The first dimension, and the most important one from my perspective, is investment themes. Those are your ingredient categories. Before we had a shared taxonomy at Flatiron, R&D spoke one language, finance spoke another, and we spent weeks every quarter just translating across our teams. 
Now we have one shared taxonomy: same definitions, same source, no translation needed. And if you look at the bottom of the slide, you'll notice something. The themes map into the language finance uses for capitalization. It's okay if you're not a finance person; here are some quick definitions. CapEx, or capital expenditure, is work finance treats as an investment, something that builds long-term value and gets recorded as an asset on the balance sheet. OpEx, or operating expenditure, is work that gets expensed immediately; it's the cost of keeping things running. And the R&D credit is a tax incentive that applies to qualifying development work, which is why finance cares so much about distinguishing new development from maintenance, for example. So when you look at this slide, new product development tends to be CapEx: it's building something new. Maintenance, stability, regulatory work, and product improvements tend to be OpEx. And new product development often qualifies for an R&D credit on top of that, if you're in a fortunate enough position to be a profitable organization. That's not an accident; it's highly intentional. By building a shared taxonomy, you're already doing the groundwork that enables capitalization. In our case, we found this is actually enabling our finance team's capitalization efforts. The other two dimensions are more specific to what the R&D team needs to run effectively: objectives, and product or capability lines. If you set strong objectives and have a strong planning process for defining strategy, objectives should merely be a reflection of that, and alongside your KRs or metrics, they ultimately tell you what you got for that investment. Product lines then give you portfolio-level visibility across the whole portfolio. 
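To make the taxonomy concrete, here is a minimal sketch of how investment themes can map to finance's capitalization language. The theme names, field names, and credit-eligibility flags are illustrative assumptions for the sketch, not Flatiron's actual taxonomy; only the CapEx/OpEx/R&D-credit concepts come from the talk.

```python
# Illustrative shared taxonomy: each R&D investment theme carries its
# finance treatment, so no translation layer is needed between teams.
# Theme list and flags are hypothetical examples.
THEMES = {
    "new_product_development": {"finance": "CapEx", "rnd_credit_eligible": True},
    "product_improvements":    {"finance": "OpEx",  "rnd_credit_eligible": False},
    "maintenance":             {"finance": "OpEx",  "rnd_credit_eligible": False},
    "stability":               {"finance": "OpEx",  "rnd_credit_eligible": False},
    "regulatory":              {"finance": "OpEx",  "rnd_credit_eligible": False},
}

def finance_treatment(theme: str) -> str:
    """Translate an R&D theme into the language finance uses."""
    info = THEMES[theme]
    credit = " (+R&D credit)" if info["rnd_credit_eligible"] else ""
    return info["finance"] + credit

print(finance_treatment("new_product_development"))  # CapEx (+R&D credit)
print(finance_treatment("maintenance"))              # OpEx
```

The point of a lookup like this is that the mapping lives in one place: R&D tags work by theme once, and the finance view falls out automatically.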
Together, these three dimensions let you slice the data in ways that serve every stakeholder without creating a tracking nightmare. So what does this unlock? Let me make this real for you with two situations you might recognize from past experience. I work in health care, or health tech, which is a highly regulated industry. I hope this hasn't happened to you, but I'm sure you've been in a place where one of your product lines has gone through a period of serious stability issues, and you know what it feels like to deliver bad news to other teams when ad hoc work forces trade-offs. What happens when stability issues or regulatory work come up? Product improvements get put on hold; maybe new features are delayed. And typically, when talking to customer-facing teams like sales or account management, that conversation sounds like: we're heads down on this right now, we'll get back to you. With the system we've set up, that conversation has changed completely. I can now enable our product leadership to go into a conversation with the head of account management and say: here's our portfolio allocation. Normally, regulatory work makes up about 5% of our overall portfolio investment, but over the last two quarters it's climbed from 5% to 20%, and that's coming directly at the expense of product improvements. Here's what this means for the roadmap, and here are the explicit trade-offs we're making. That's not delivering bad news. That's showing the trade-off with data and opening up a real conversation: not here's what we can't do, but here's the trade-off and here's what we could do about it. The second is about resource conversations. Every product or engineering leader has been in a room asking for more investment in a product, an initiative, or a team. 
And oftentimes that conversation relies on instinct, relationships, or whoever makes the loudest case. This system changes the basis of that conversation. Now you're enabling that person to show exactly where capacity is allocated, what it's currently allocated to, and what the cost of moving it would be. You're not asking; you're making a recommendation with data. That's the difference between a request and a recommendation, enabling people to go from reactive explanations to proactive decisions. That's the shift we're starting to enable. So you might be wondering, what should my mix look like in my org? This is a little bit of a trap slide, but there's a common reference point for R&D allocation: 70/20/10. 70% on core product, 20% on new features or adjacent products, and 10% on moonshots. a16z wrote about this framework and said something I think is exactly right: the problem with this mix is that product is so multifaceted, it defies a generalizable rule. Each of us works in different markets, with different competitive dynamics, at different stages of business maturity. So these are starting points, not targets. To hit this home, look back at Netflix in 2007. If they had followed this ratio to a T, they would have optimized their DVD business into irrelevance. So the point isn't to hit these numbers. The point is to know your numbers, be intentional, and have a story for why your mix looks the way it does. So how did we actually build this at Flatiron? In the first quarter, we had to prove the concept. We didn't integrate this into our product operating model system, which exists in Airtable; we had it sit on a parallel path. We didn't want to spend a bunch of time building a system that didn't actually answer the questions leadership had. 
So we started by standardizing data entry into sheets to test our taxonomy and see if we could enforce it consistently. And I want to be really honest about what this looked like. That first quarter of data collection was brutal. Teams categorized the same type of work differently; one team's product improvement was another group's maintenance. We spent a lot of time in conversations like: why did you tag this as development when it's clearly a bug fix? Those conversations were extremely tedious, but they were actually the most valuable part, because that's where we forged that shared language. By the end of the quarter, we had something imperfect but usable. Our COO at the time said, and I'm quoting, we never thought we could get this level of data out of our R&D org. Once we validated the proof of concept, we scaled it with the right tooling and moved to Airtable to consolidate it within our existing product operating system. Similar to product teams, we took a product-oriented approach: we had a hypothesis, we tested a solution, and once we validated it, we continued to scale and iterate from there. But interestingly enough, the key adoption unlock was not just making it useful for leadership, but for teams themselves. When a director can pull up their own allocation data for their product group and say, wait, 40% of my team's investment is going to maintenance, that doesn't feel right and it's not aligned with the strategy we've laid out, they start caring about data quality themselves. So in a lot of ways, self-service has driven accuracy, and I'll show that in a second when I get into the demo. Now, quarter after quarter, when I review this with our chief product officer, she tells me something along these lines: we can't live without this system. 
It enables me to effectively advocate and tell credible stories about where R&D has been, where we're positioned today, and where we might head next. As I mentioned before, the goal is not strictly a reporting tool; it's an advocacy tool. So, to recap the before and after. Previously, it took weeks to answer leadership's questions about our investment allocation. We were constantly stuck reconciling different spreadsheets from different product groups with different taxonomies. To finance, R&D was a black box, and there was an antagonistic relationship between finance and engineering. As product ops, the team responsible for pulling this data together, we were constantly in reaction mode, scrambling whenever someone had a question about this data, stuck pulling reports instead of driving strategy. Now we have real-time data, meaning we always have answers ready to serve, or others can self-serve. We have one source of truth with shared language across the org. Finance has become a partner instead of an interrogator. We can surface insights proactively before we're even asked. And by being the team that architected the system, we actually have a seat at the table when strategy shifts. So, enough slides; in the demo I want to actually show you what this looks like in Airtable. I'm going to walk you through three views from the bottom up. We'll start with what a product manager and eng lead see when they're entering data. Then we'll go up to the director or senior management level, who have to keep the data honest and credible. And then we'll finish with the portfolio view, which serves leadership and executives alike. The same data sits underneath all of these different jobs, and it's all enabled by Airtable. So I'm going to switch over my screen. 
This is exactly what people, PMs specifically, see when entering data. I want to start here because this is where an R&D investment tracking system either survives or dies. If this part is too painful, the whole thing collapses, because you spend all your time on enforcement. So let's say your product manager is spinning up a new initiative. They fill this out once. We'll call this one Webinar Demo. Normally we ask our PMs to write much more comprehensive feature or project descriptions, because that flows into a bunch of different workflows, but for today I'll keep it abbreviated. We align it to the team. We track statuses, which helps us track our roadmap in a Kanban model; let's say they're in user research. Let's say this is tied to Product A, let's say this is maintenance, and then we'll align it to an objective. This is a demo environment, so it's all abstracted away from Flatiron data. And then we'll hit create. Before I do that, I want to note that this mapping happens one time per initiative, bet, or capability offering. What I mean is that if an initiative traverses multiple quarters, a PM doesn't have to come back and redo this mapping every time; it persists. Then let's say they're ready to enter their allocation for the upcoming quarter. It's very simple: they enter the number of eng weeks they forecast they'll spend in the upcoming quarter on this specific initiative. Let's say it's five. Another thing I should mention about these dimensions is that they should be mutually exclusive and collectively exhaustive. Every initiative should fit into exactly one theme and one objective, with no overlap. That sounds like a small design choice. It's not. The moment you let initiatives count toward multiple themes or multiple objectives, roll-ups stop adding to 100%, and the whole system loses its credibility with finance. 
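The design choice just described can be sketched as a tiny roll-up check. The initiative names, field names, and numbers below are hypothetical; the point is only that when each initiative carries exactly one theme, per-theme totals always re-add to the portfolio total.

```python
# Minimal sketch of the MECE constraint: every initiative is tagged with
# exactly one theme, so theme roll-ups must sum back to the total eng weeks.
# Records and field names are illustrative, not a real Airtable schema.
from collections import defaultdict

initiatives = [
    {"name": "Webinar Demo", "theme": "maintenance",             "eng_weeks": 5},
    {"name": "Feature X",    "theme": "new_product_development", "eng_weeks": 8},
]

def rollup_by_theme(items):
    totals = defaultdict(float)
    for item in items:
        # Single-select tagging: one theme per initiative, no splitting.
        totals[item["theme"]] += item["eng_weeks"]
    return dict(totals)

by_theme = rollup_by_theme(initiatives)
total = sum(item["eng_weeks"] for item in initiatives)
assert sum(by_theme.values()) == total  # the roll-up adds back to 100%
```

If an initiative were allowed to count toward two themes, that final assertion is exactly what would break, which is why the single-select constraint is what keeps the numbers defensible to finance.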
So mutually exclusive, collectively exhaustive is what makes the math defensible. What does the PM actually get out of this? We have a dashboard that serves as their operating hub, and what you can see here is that as they enter allocations for the given quarter, they roll up in one place. Based on the number of engineers they have in seat, we calculate the number of potential eng weeks available. At our organization, we assume a discount rate of 15% for holidays, sick days, parental leave, and so forth. As they enter their allocations for each initiative in that quarter, those deduct from the potential weeks available; so we can see right now I have seven weeks remaining. Now, the system wasn't necessarily conceived to help with capacity planning, but inadvertently we ended up solving that problem for PMs as well. The other thing it helps PMs do: they're constantly asked by their teams, why does the work we're doing actually matter to the org? How are we aligned to the broader strategy? For this PM, what they can see is a breakdown of where they're spending their time, in this case over a couple of quarters, against which initiatives. They can see how they're aligned to their strategy; they're really anchored around a specific objective within their specific product area. And they can see where they're invested against our investment categories. This team is spending a lot of time on product improvements and new product development. So this helps them show their teams why the work matters and how they're positioned against the org, not just within their team. The next thing I want to bring us to is the audit view. This is what directors or senior managers look at regularly. PMs are answering: what is my team doing this quarter? 
The director is answering: is the data I'm about to roll up actually trustworthy? Not going to lie to you, there are still parts of the system that are unglamorous, but this is the most important part. Everything we show in trending charts and discretionary splits, the thing that's ready for a board presentation, is only as good as the data underneath it. If a team forgot to enter their allocations, the roll-up is wrong. If initiatives are consistently tagged to the wrong things, the roll-up is wrong. The audit layer is what catches that before anyone presents it. There are really two jobs to be done here, and they're roughly equal weight. First is completeness: did every team actually do the allocation work this quarter? This view surfaces the gaps. I'm filtered to Q3 2025, and based on the red bar, I can see that most of our teams have entered their allocations; actually, all of them have, which is great. With Airtable's filtering abilities, I can also get a pivot of all my teams and their allocations based on their headcount within their respective domain spaces. What's helpful is that because the system runs on prospective allocations, teams are telling us at the start of each quarter where they expect to spend their time. If half the teams skip the entry, we don't have the full forecast; we end up with a hole. We also care about mapping integrity. We've created views for our leadership to audit whether the mappings are correct. This is a ceremony we do once a quarter for about thirty minutes. They filter down to their respective product domain space and validate that each initiative is mapped to the right theme, objective, and product. And they can simply change them themselves. 
But if we find patterns, for example when a new PM starts and this is a new concept, as product ops we sit down with those folks and help them understand the taxonomy so they can do it independently. For the first go-around, we always want to make sure they have that additional assistance so they get it right. Now for the star of the show and the final act. The last view serves both our executive audience and our director-plus audience, who are trying to tell credible stories about where they're positioned within the broader portfolio. I'm not going to walk you through every graph, but I'll hit on a few that might resonate. The first is trending investment themes: how the themes have moved quarter over quarter across the entire org. For example, if we set an OKR last year to shift more capacity into new product development, this chart tells us exactly whether that actually happened. In this case, we can see the trending line for new product development is fairly flat quarter over quarter. For an executive, that's a signal about whether strategy is showing up in the work or not. For a director, in the context of their own pod, if the org is trending toward new product development and their teams are still at 70% maintenance, they should probably have a credible story as to why. Then there's the finance perspective: what's discretionary versus nondiscretionary? This is the chart finance loves, broken down into discretionary versus nondiscretionary spend by business unit. The pies are different sizes, the mix is different, and that's the point. Different parts of the org have different jobs to do. Your platform org is going to have a very different mix than one of your GM-run businesses, which might be focused mostly on new bets. 
So now we can have a conversation about whether that mix is the right one, instead of arguing about whether the data is right. Or, for example, let's say you're in an era of austerity and someone asks: what would it cost to move into keep-the-lights-on mode and cut all new product development out of scope for the next six quarters? What would our actual headcount cost be? This enables you to answer that. And then we have the portfolio view. Maybe you're interested in which product lines we're spending the most on. We have a pivot that shows the total number of engineering weeks by product and capability. So you can ask questions like: is our signature product getting the investment it needs? Are we over-invested somewhere that's already in maintenance mode, or maybe a zombie product? These are questions the board or leadership might ask, and because of the audit layer we just walked through, the answer isn't a guess. It's a roll-up of data that's already been cleaned. None of this has required anyone to track anything new. It's the same numbers the PMs or engineering TLs entered in the first view I showed you, audited by our directors and senior managers, and then rolled up into a portfolio view. That's the whole bet: limit what you ask people to enter, audit what they entered, and let the structure carry the weight, so the same data becomes a different conversation at every altitude of the organization. To recap, we have three core views powered by one data model. Our PMs enter one number against an initiative they already tagged, and that tagging happens just one time. Our directors make sure the entries are complete and the mappings are honest. And executives see the portfolio composition that finance and the board are asking about. We've not made anyone's life harder. Everyone gets what they need, and that's the system. 
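The capacity math from the PM view can be sketched in a few lines. Only the 15% discount rate comes from the talk; the 13-week quarter, the team size, and the allocation numbers are illustrative assumptions for the sketch.

```python
# Sketch of the PM dashboard's capacity math: potential eng weeks per team,
# discounted for holidays, sick days, parental leave, etc., minus the
# eng weeks the PM has forecast against initiatives this quarter.
WEEKS_PER_QUARTER = 13   # assumed quarter length
DISCOUNT = 0.15          # the 15% discount rate from the talk

def potential_eng_weeks(engineers: int) -> float:
    """Discounted capacity for a team of the given size."""
    return engineers * WEEKS_PER_QUARTER * (1 - DISCOUNT)

def remaining_weeks(engineers: int, allocations: list[float]) -> float:
    """Capacity left after the quarter's forecasted allocations."""
    return potential_eng_weeks(engineers) - sum(allocations)

# A hypothetical 4-engineer team has about 44.2 discounted eng weeks;
# after allocating 5 + 20 weeks, about 19.2 remain.
print(round(remaining_weeks(4, [5, 20]), 1))
```

This is also why the system ended up doubling as a capacity planner: the same one number per initiative that feeds the portfolio roll-up is enough to show a PM what they have left to commit.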
I'm not going to go into too many details on this slide, but I'd love to hear from you all afterwards if you have questions about how to set up Airtable, or any table-based system, to build this. Happy to be a resource; I also have a one-page implementation guide for how we've done this in Airtable. I also want to highlight, in this era of AI adoption within many of our organizations, how we've been leveraging Airtable AI to make the system even more intelligent and reduce manual overhead. Within Airtable there are field agents, as many of you probably know, and we're exploring how to use field agents to auto-tag investment themes, objectives, and product lines. Some of that is predicated on the quality of the feature and project descriptions, but we've also started to integrate team roadmaps into our product operating system, which can supply some of that context. Any place where we can reduce PM documentation burden, we'll try to tackle. We're also using deep match: previously we had to write custom Python scripts to match some of our data tables consistently, and now we use deep match, which is a little less brittle. Airtable Omni has been a huge unlock for us: in some cases, the out-of-the-box visualizations we've created aren't the ones a PM director wants to use in a presentation, so we've given them baseline prompts they can customize to build their own custom reports. Sometimes I use it myself to flag unmapped initiatives, so I can catch gaps before the audit actually happens. And what's really exciting is that with the Airtable MCP, when we're using tools like Cursor or Claude, we can join this with other data sources we have. We're fortunate in that our entire product operating system lives in Airtable. 
So we can ask questions that draw not just on the investment allocation data I've shared, but also on our roadmaps, our strategy decks, and our monthly progress updates. That enriches the quality of the answers and insights we're able to derive from the system, and again, it's entirely powered by having everything structured into tables in Airtable. Before we jump into Q&A, I want to leave you with three things. First, the menu matters more than the ingredients; it's really about understanding your portfolio composition. Two, be precisely imperfect: limit your dimensions and enable macro conversations. I'll consider this webinar a win if each of you goes back to your organization and uses the phrase precisely imperfect with someone in the next week. Three, product ops, or operations functions in general, is the bridge. We have the ability to speak the many languages of our organizations, so we can create a shared system and shared language that serves R&D, finance, and the board without making anyone's life harder. We are uniquely positioned at the intersection of R&D execution, strategic planning, and business operations to build systems that serve the entire organization. And from what I've seen in my experience, no one else is going to do it, so go do it and solve that problem. Alright, let's go to Q&A. So: what does the data look like when an investment overlaps a year, and how do you handle initiatives that span multiple R&D teams? Great questions. It's okay if initiatives traverse over a year. I wish I had real Flatiron data I could share with you, but this happens all the time, and that's fine. I will point out some of the slippage moments that happen: we have an annual planning cycle we're tied to. 
So it gets a little muddy when objectives change, I'm not going to lie, but that's a breakout conversation I'd love to have with you offline. And sometimes, say you're launching a new product line that traverses many teams. We ask that each team codifies its respective contribution to that investment independently within its scrum team roadmap. So there are some constraints in the system that come from your org design, but they haven't been blockers for us. Hopefully I answered both questions there. Next question: I would assume you're not reallocating resources quarterly; if you do this annually, how are you thinking about the number of eng weeks for initiatives that have not been discovered yet? Let me digest that. Our resources are somewhat fungible. We're in a fortunate position: while we do have a budget that gets rolled out at the top of each year, there's flexibility and buffer built in, and we'll move people across teams as needed based on how strategy is materializing throughout the year. As for initiatives that haven't been discovered yet, that's fine. We're using engineering as a proxy primarily. We know there are teams doing active discovery before they start product work: they're going to talk to customers, and you have designers and PMs who are in the field, potentially. In this case, engineering is our best proxy, and our ratios are fairly consistent across the org. So again, with this concept of being precisely imperfect, we're looking for macro trends over time. 
We're actually okay with the five-to-ten-percent variance that might happen within a given quarter, because from what we've found running this over the last two years or so, it all kind of comes out in the wash, and the trends have generally matched our mental models and heuristics. And where they haven't, it's led us to ask deeper questions, which we can typically answer locally. Laurel and Kelly, if you want to drop a response in the chat: did I answer your questions, or do you want to dive deeper into any of those topics? Next question: how closely is the stage of the PDLC tracked? Great question. Where we do this really well is where our teams deliver value in the shape of features, because that's really important from the customer experience side of things. Let me take a step back, because this can become convoluted. We know our product orgs are complicated and our teams are complicated. Some teams ship units of value as features. Some teams provide services to other internal teams. Some do a combination thereof. Some don't have a unit of value that maps directly to a feature. So we have multiple rails that run within our org. As I mentioned before, our entire marketing feature launch process ensures that anything feature oriented is really well updated, groomed, and maintained. And then we also have a monthly execution update system: each PM, however they so choose, can enter their updates directly in Airtable, or use Cursor, Claude Code, whatever. They enter their updates and are asked to update the PDLC status at that point in time. So the most those statuses will be stale is about a month. And then I'll just say, it is hard.
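The monthly cadence just described implies a simple freshness guarantee on PDLC statuses, which could be checked with a small script. Field names here are hypothetical, not the actual Airtable columns:

```python
# Sketch: flag initiatives whose PDLC status hasn't been updated in over
# a month, mirroring the monthly execution-update cadence.
from datetime import date, timedelta

def stale_statuses(initiatives, today, max_age_days=31):
    """Return names of initiatives whose status is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [i["name"] for i in initiatives if i["status_updated"] < cutoff]

initiatives = [
    {"name": "Feature X", "pdlc_stage": "build", "status_updated": date(2025, 5, 2)},
    {"name": "Feature Y", "pdlc_stage": "discovery", "status_updated": date(2025, 3, 10)},
]
print(stale_statuses(initiatives, today=date(2025, 5, 20)))
# ['Feature Y']
```

A report like this is how "at most a month stale" becomes something you can verify rather than assume.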
We're split across an organization that does very different types of product work, so creating a simplified set of statuses can be hard. I'm not going to lie: we have multiple iterations of statuses in different places within the org, because different folks want to see different things. We try to limit this as much as possible, but it's definitely a pain point. And then you mentioned epics and stories. Look, our Jira instance is a mess, and one intentional choice we made was not to do Jira standardization as part of standing this up, because it would have taken way too long. So right now, our Jira and Airtable data are entirely mutually exclusive; Jira is focused purely on the engineering execution layer. I see the questions slowing down. Next question. It's a good one, and maybe not directly related to my topic, but I can opine on it. It depends where we want to go, but I can talk about it through this lens: our head of product is still responsible for making sure the broader product strategy reflects the business context. What a system like this enables someone like her to do is spot opportunities and risks more effectively and dynamically, and the cycles between learning, conversation, and decision become a lot tighter. In my role as head of product ops, I'm thinking about how to build her operating system so she can do that more effectively and stay on top of what's happening in the org, which can be really complex when you have a portfolio of 40-plus different product teams. Previously, that would have meant me or a data analyst combing through our portfolio allocation data, our roadmaps, our execution-level updates, and our customer feedback, and it took weeks.
Now we have systems where we can serve that insight in real time and make it self-service. So maybe that goes back to the cycles and loops. I'm not going to opine on what it means for PMs within the broader PDLC, but I think PMs still have a role to play, especially as the throughput of ideas and the ability to bring something to market or to customers moves faster. I don't think it takes away the core role of a PM, which is to identify viable and valuable solutions that solve real problems for customers. Will it change how they work, how they're expected to do those things, and what the baseline expectations are? Absolutely. But how that shift takes place in different orgs is quite variable. Leonardo, hopefully that answered your question. Okay, well, you can find me on LinkedIn; it's just my name. If you want the one-pager, I'll be happy to share it, and I'm happy to connect with anyone who wants to chat or explore this topic more deeply. Thanks again for joining us.