Why pLTV Models Alone Won't Optimize Your Campaigns
TL;DR: Many companies build LTV models, but few activate them successfully in paid media. Most models optimize for accuracy (minimizing average error) rather than auction performance, and platforms expect deterministic events, not probabilistic predictions. Activating predictive models as a pLTV media strategy requires a purpose-built model and, more importantly, signal engineering - the practice of translating predictions into a platform-ready signaling strategy - to bridge that gap.
Last year at Meta's LTVision Bootcamp, the host opened with a question: "Raise your hand if your company has a prediction model in place."
About half the room raised their hands.
Then she asked a follow-up question: “How many of you have successfully activated that model in your media strategy?”
Zero hands.
Half the room had models, but none had successfully activated them. That moment confirmed what I'd seen across the industry for years - a model alone is not enough to influence campaign performance.
I recently unpacked this challenge with Jinx Joglekar and Anish Raul on The Art of Commerce podcast. Listen to the full episode above or read on for a deeper dive into why this gap exists and what it takes to close it.
Why Prediction Models Are Getting Built, But Not Activated
More growth teams than ever recognize they need to deliver higher-quality signals early in the optimization process. Ad platforms learn fastest when they receive clear feedback within short conversion windows - typically seven days. The problem is that for most businesses, customer value doesn't surface that quickly.
If your most meaningful outcome happens on day 41, you can't wait to send that signal. You're forced to optimize toward a proxy - trial starts instead of retention, seven-day revenue instead of 90-day LTV. Those proxies teach platforms to find users who convert, not necessarily users who stick around.
For businesses where value manifests later, prediction is the only way to send meaningful signals early. That's why so many companies are building LTV models - to give platforms early visibility into customer value beyond what's observable in seven days.
For a deeper dive into why this matters more than ever, see In the Age of Ad Automation, Signal Quality is Your Biggest Advantage.
So, the models are getting built and validated. But there's a gap between prediction and performance that's wider than most teams realize, and it comes down to two distinct problems that compound each other.
Problem 1: Most Prediction Models Are Optimized for the Wrong Objective
Standard LTV models minimize average prediction error. That's the right objective for internal uses like forecasting, segmentation, and budget planning - contexts where being directionally accurate across the full user base is what matters. When you send those same predictions to an ad platform as optimization signals, however, the requirements change.
Because they minimize average error, these models overpredict what low-value users are worth and underpredict what high-value users are worth - the classic regression to the mean. Across all users, the errors balance out. But in ad auctions, those errors don't balance; they multiply. When you overpredict on low-value users, you're telling platforms those users are more valuable than they actually are. Platforms take the path of least resistance to hit your targets, so they'll deliver more and more of those overvalued users, who perform worse than you expected.
A prediction model built for paid media activation needs a fundamentally different design:
- Optimization for ranking and separation. The job is to reliably distinguish high-value from low-value users, not just be accurate on average.
- Asymmetric penalties for mispredictions. Overvaluing a bad customer should hurt the model more than undervaluing a good one, because the costs of those errors aren't equal: overvaluing low-value users means paying premium prices for low-margin returns (a rough sketch of this kind of objective follows this list).
- Continuous retraining. The moment you activate predictions, you change which users platforms bring you. Your model needs to keep learning from this new audience, not just the historical one it trained on. This is a direct result of the dynamic feedback loop created when running your model against a black-box delivery algorithm.
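To make the first two requirements concrete, here is a minimal sketch in Python. The penalty weight, the toy numbers, and the use of Spearman rank correlation as a separation metric are illustrative assumptions, not a prescription for any particular model.

```python
# Sketch: score a pLTV model on what activation actually needs -
# asymmetric error and ranking quality - rather than average error alone.
import numpy as np
from scipy.stats import spearmanr

OVERPREDICTION_PENALTY = 3.0  # assumed weight: overvaluing a user hurts 3x more


def asymmetric_loss(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Squared error, weighted more heavily when the model overpredicts value."""
    error = predicted - actual
    weights = np.where(error > 0, OVERPREDICTION_PENALTY, 1.0)
    return float(np.mean(weights * error ** 2))


def ranking_quality(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Spearman rank correlation: how well predictions separate high- from low-value users."""
    return float(spearmanr(predicted, actual).correlation)


# Toy example: true 90-day values for five users.
actual = np.array([5.0, 8.0, 12.0, 90.0, 150.0])
model_a = np.array([48.0, 45.0, 50.0, 46.0, 52.0])   # compressed toward the mean, weak ordering
model_b = np.array([2.0, 10.0, 20.0, 70.0, 120.0])   # wider spread, preserves ordering

for name, preds in [("model_a", model_a), ("model_b", model_b)]:
    print(name, asymmetric_loss(preds, actual), ranking_quality(preds, actual))
```

A model like model_a can look tolerable on average error while telling the platform almost nothing about which users to go find; model_b is closer to the shape an activation-oriented model should aim for.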
Problem 2: Ad Platforms Expect Deterministic Events, Not Probabilistic Predictions
Even with the right model design, there's a bigger mismatch. Platform delivery algorithms have been trained across billions of impressions to react to deterministic signals - events that actually occurred, with clear timestamps and conversion windows.
Predictions are fundamentally different in three ways:
- They're probabilistic. A prediction doesn't say "this user is worth $100." It says, "Based on current data, this user is likely worth $100 at a confidence level of 62%."
- They evolve. Predict again tomorrow with more behavioral data and the score changes. This creates a critical decision gate: at what confidence threshold do we trigger the signal? Acting on evolving data means balancing signal speed against platform stability, and it means the signal itself may need updating over time (a simple version of this gate is sketched after this list).
- They represent behavior that hasn't happened yet. There is no event, no timestamp, and no conversion to point to.
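Here is a minimal sketch of that decision gate, assuming a model that reports its own confidence and a seven-day platform window. The threshold, field names, and fallback rule are illustrative assumptions.

```python
# Sketch: decide when an evolving prediction is stable enough to send as a signal.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.70   # assumed: don't signal until the model is confident enough
MAX_WAIT_DAYS = 7             # assumed: the platform's conversion window


@dataclass
class Prediction:
    user_id: str
    predicted_ltv: float
    confidence: float          # the model's confidence in its own estimate
    days_since_install: int


def should_send_signal(p: Prediction) -> bool:
    """Send early if the model is confident; otherwise send the best available
    estimate before the attribution window closes."""
    if p.confidence >= CONFIDENCE_THRESHOLD:
        return True
    return p.days_since_install >= MAX_WAIT_DAYS


# A day-3 prediction that clears the confidence bar gets signaled immediately.
print(should_send_signal(Prediction("u_123", 84.0, 0.76, 3)))  # True
```

The trade-off is exactly the one described above: a lower threshold gets signals to the platform sooner, a higher one keeps them more stable.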
Sending a raw prediction to a platform built for deterministic events - without accounting for these differences - leads to campaign instability, erratic learning behavior, and wasted spend. The platform doesn't know it's receiving a probability; it treats every signal as ground truth. A strong model is necessary but not sufficient, because prediction and activation are entirely different disciplines.
Signal Engineering Turns Predictive Models into Signals
The prediction model is necessary, but it isn’t where the work ends. Between a working model and actual media performance sits signal engineering. This is the ongoing discipline of shaping how predicted value interacts with real auctions, real pricing dynamics, and real platform behavior.
Predicted value signals are constantly evaluated based on how platforms actually respond. When spend flows toward low-quality users, how value is expressed gets adjusted so investment rebalances toward the users who actually matter. It's a continuous feedback loop between your predictions and the auction — mediated by a layer most teams don't even know they need until they try to activate their model and either nothing changes, or performance actually gets worse.
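One concrete example of what that shaping can look like: rather than sending raw dollar predictions, the way value is expressed gets normalized first. Below is a minimal sketch of mapping raw predicted LTVs into a few value tiers so a handful of extreme predictions can't dominate what the platform optimizes toward. The tier boundaries and tier values are illustrative assumptions, not recommendations.

```python
# Sketch: normalize raw predicted LTVs into coarse value tiers before sending.
import numpy as np


def to_value_tiers(predicted_ltv: np.ndarray) -> np.ndarray:
    """Map raw predictions to percentile-based tiers so a few extreme
    predictions don't dominate the value signal the platform learns from."""
    cuts = np.percentile(predicted_ltv, [50, 80, 95])      # assumed tier boundaries
    tier_values = np.array([10.0, 40.0, 90.0, 200.0])      # assumed value sent per tier
    return tier_values[np.digitize(predicted_ltv, cuts)]


raw = np.array([3.0, 18.0, 55.0, 240.0, 1200.0])
print(to_value_tiers(raw))   # coarse, stable values instead of raw predictions
```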
For a practical, in-depth guide to signal engineering - how to determine timing thresholds, set update cadences, normalize predicted values, and design platform-specific strategies - see our Signal Engineering FAQ.
What Makes This So Resource-Intensive
- Every platform is different. While all ad platforms reward early signals with better performance, Meta prioritizes accuracy by design: once a value signal is sent, it can only be updated upward (a minimal handling sketch follows this list). This is just one example of many platform-level differences that you only learn by spending thousands of dollars testing hypotheses across live campaigns.
- Auction dynamics shift constantly. A competitor doubles their spend, and suddenly your signals that worked last month stop converting. Platforms update their algorithms, and your carefully tuned strategy needs rebuilding. You're troubleshooting in real-time while spend continues.
- Business evolution breaks everything. New pricing model? Your predicted values are now systematically wrong. Enter a new market? Your signals are trained on the wrong customer base. Launch a product? What "valuable" means just shifted, and your entire signal strategy needs reengineering.
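As a small illustration of the upward-only constraint described in the first item above, here is a minimal sketch assuming a simple in-memory record of what has already been sent. The store and function names are illustrative, not a real platform API.

```python
# Sketch: only forward value updates that raise the value already reported,
# per the upward-only constraint described above.
last_sent: dict[str, float] = {}   # user_id -> value already sent to the platform


def next_value_to_send(user_id: str, new_predicted_value: float) -> float | None:
    """Return a value to send only when it exceeds what was previously reported."""
    previous = last_sent.get(user_id)
    if previous is not None and new_predicted_value <= previous:
        return None                # hold back: the platform won't accept a downgrade
    last_sent[user_id] = new_predicted_value
    return new_predicted_value


print(next_value_to_send("u_1", 50.0))   # 50.0 -> send
print(next_value_to_send("u_1", 35.0))   # None -> hold back
print(next_value_to_send("u_1", 80.0))   # 80.0 -> send the upgrade
```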
Without dedicated platform expertise, most teams spend 6-12 months in expensive trial and error, learning through wasted budget what could have been learned from prior experience - and with constant platform shifts, that learning never stops. Even teams that figure it out once need to keep adapting or risk falling behind again.
Where to Start If Your Models Aren't Driving Results
Most organizations don't realize signal engineering exists until the model alone fails to deliver results. If you're seeing this gap in your own organization - models built, predictions validated, but campaign performance flat - start with these three questions:
- Has your organization aligned on what "valuable" means? In specific, measurable terms that the board, finance, product, and marketing all agree on.
- Is your prediction model built for activation or accuracy? If it was designed for accuracy alone, it may be actively misleading ad platforms.
- Have you built safeguards against overpredicting on low-value users? Overvaluing bad customers costs more than undervaluing good ones, but most models treat all prediction errors equally.
The model provides the predictions; signal engineering builds on top of it, translating those predictions into high-fidelity, platform-native signals that influence auctions. One without the other leaves you stuck where half the room at Meta's bootcamp was: models built, but not activated.