If you’re an avid reader of our blog (which you absolutely should be!), you know that we talk a lot about the good side of LTV-based optimization: making more informed marketing decisions, seeing greater profitability, scaling more easily, and so much more.
So as far as data-driven teams are concerned, predictive marketing is the ultimate way to go for growth, right? The quick answer is yes. DTC brands such as Ipsy/Boxycharm have taken the leap, and even top PLG companies such as Notion have made the shift, to scale growth and reduce CAC. It’s amazing for us to watch predictive UA become the standard for growth teams of a similar size to the likes of Ipsy and Notion, and the success stories that come out of it make all our efforts worth it.
On that note, it would only be fair to also talk about the *dark side*. BUT WAIT! When done right, only good comes out of LTV-based decision-making. However, there are obstacles that can get in the way of building good LTV models.
Podcast on calculating LTV, with xGrowth
Our COO, Oren Cohen, expanded on that point, and many others, in a recent xGrowth podcast with Shahin Hoda titled “How to Calculate Your Lifetime Value (LTV).” In that podcast, Oren provided a crash course on the basics of what customer lifetime value is and the components of LTV data, which can include elements such as subscriptions, signups, virality effects, referrals, upsells, cross-sells, free trials, and more.
After covering the basics of LTV-optimized user acquisition for DTC and PLG companies, which included a deep dive into the differences between handling zero- and first-party data, Oren transitioned into the obstacles to building good LTV models.
So let’s get right to it, shall we?
The three obstacles to building good LTV models
This segment starts with our amazing host, Shahin, asking Oren to dig deeper into some of the obstacles that prevent companies from utilizing or building LTV models.
- Limited understanding of the business use case: Who is building the LTV model, and what is it being built for? Do they fully understand the business use case? For example, if the data science team works mainly on the product side, yet is the one building an LTV model for user acquisition, they need to fully understand the business use case at hand, and all its nuances.
- Oversimplification: When there isn’t an evaluation framework, there’s a lack of knowledge on how to evaluate models, which leads to oversimplification. That typically means making sweeping assumptions with major business impact, such as assuming all cohorts will behave the same way, and that revenue will always be X on day 20, or Z on day 180. By doing this, you miss out on efficiency.
- Assuming the same LTV across users: This ties into the previous point. It’s easy to assume that there isn’t a variance in LTV across users. Oren says, “While the average LTV might be the same, it holds a big variance among users, or workspaces. So with this inefficiency, you end up paying the same price for high value users and low value users.”
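To make the last two points concrete, here is a minimal back-of-the-envelope sketch (all numbers are made up for illustration, not taken from the podcast) of what happens when you bid against a blended average LTV instead of each segment’s predicted LTV:

```python
# Hypothetical numbers for illustration only: 100 users with one blended
# average LTV, but split into two very different segments.
ltvs = [300] * 10 + [30] * 90          # 10 high-value users, 90 low-value users

avg_ltv = sum(ltvs) / len(ltvs)        # blended average LTV = 57.0

# Flat bidding: target a 2:1 LTV-to-CAC ratio against the *average* LTV,
# so every user costs the same to acquire.
flat_cac = avg_ltv / 2                 # 28.5 per user, regardless of value

# Margin per user under flat bidding:
high_margin = 300 - flat_cac           # 271.5 -> wildly underbidding these users
low_margin = 30 - flat_cac             # 1.5   -> nearly break-even on 90% of users

# Segment-aware bidding: the same 2:1 target, but against predicted LTV.
high_bid = 300 / 2                     # can afford up to 150 to win high-value users
low_bid = 30 / 2                       # caps spend at 15 on low-value users
```

The flat bidder pays the same 28.5 for a user worth 300 and a user worth 30: it barely breaks even on the low-value majority while leaving itself easy to outbid on the small segment that drives most of the revenue. That is the inefficiency Oren describes.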
As an industry, we have reached a point where intuition alone is no longer enough: data is crucial for growth teams to make educated campaign decisions. And when it comes to making decisions that are both informed and profitable, that is where LTV optimization comes into play, because the LTV-optimized approach focuses on smaller, more targeted groups whose value is higher than average. Take this approach, and over time you will see that campaigns optimizing for maximum LTV show exponentially higher ROI.