
BETA: UMM Insights

Get deeper insight into the UMM model: accuracy, certainty, and cross-channel impact

Written by Lily Mineur
Updated this week

The UMM Insights dashboard is designed to give you a clearer picture of how your marketing channels contribute to conversions: not just from clicks, but from views too.

Because UMM is based on machine learning and AI, its predictions include some level of uncertainty. This dashboard helps you understand how accurate the model is, how confident we are in its predictions, and what that means for your campaign performance.

Watch the video to learn how to navigate the UMM Insights dashboard:


UMM Accuracy

Model Accuracy Rate

This shows how often the model's predictions fall within the expected range. This is a practical way to evaluate how well the model reflects reality.

We use 80% confidence intervals because they offer a good balance: narrow enough to reflect how well the model performs, but wide enough to account for natural uncertainty.

An accuracy rate above 80% on training (historical) data suggests that the model is working well.
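
For illustration, here's a minimal Python sketch of how such an accuracy rate can be computed from daily actuals and the model's 80% interval bounds (all numbers and variable names are made-up assumptions, not Billy Grace's internal code):

  # Minimal sketch: share of days where the actual session count
  # falls inside the model's 80% prediction interval.
  # All numbers and names are illustrative assumptions.
  days = [
      {"actual": 520, "lower": 480, "upper": 560},
      {"actual": 610, "lower": 500, "upper": 590},  # outside the interval
      {"actual": 495, "lower": 470, "upper": 540},
  ]

  hits = sum(1 for d in days if d["lower"] <= d["actual"] <= d["upper"])
  print(f"Model Accuracy Rate: {hits / len(days):.0%}")  # 67% in this toy example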

Session Inclusion Rate

This tells you how many of your total sessions are included in the model. It’s based on which traffic sources we can reliably analyze.

For example, if you have traffic coming from Google, Meta, Direct, and Organic, but only Google, Direct, and Organic can be modeled reliably, the inclusion rate is:

Inclusion Rate = (Google + Direct + Organic sessions) / All sessions
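
For instance (illustrative numbers only): with 8,000 Google, 1,500 Direct, and 500 Organic sessions out of 12,000 total sessions, the inclusion rate would be 10,000 / 12,000 ≈ 83%.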

Some sources may be excluded if:

  • There’s not enough data yet (we need at least 30 days with 50 sessions per day).

  • They are top-of-funnel channels that are predominantly awareness- or reach-focused. These often generate few sessions, particularly at low spend, and are unlikely to be influenced by other top-of-funnel views.

  • They behave irregularly (e.g., email newsletters sent in bulk).

Mean Absolute Error (MAE)

MAE tells you how far off the model's predictions are, expressed as the average number of sessions per day (see the sketch after the list below).

  • Lower is better.

  • MAE is relatively robust to outliers.

  • Keep in mind: what counts as a "big" error depends on your traffic scale. An MAE of 10 sessions might matter a lot if your daily total is 50, but not if you’re getting 10,000 sessions.
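
As a minimal sketch with made-up numbers, MAE is simply the average absolute gap between predicted and actual daily session counts:

  # Minimal sketch: Mean Absolute Error over daily session counts.
  # All numbers are made up for illustration.
  actual    = [500, 620, 480, 550]
  predicted = [510, 600, 500, 545]

  mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
  print(f"MAE: {mae:.1f} sessions/day")  # 13.8 here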

Model Accuracy Graph

This graph shows how confident the model is in its predictions over time.

  • Wider forecast bands = more uncertainty (often seen in future predictions).

  • Narrower bands = more confidence.

Sometimes you’ll see the model diverge from actual results due to sudden market or seasonal changes. Don’t worry: this doesn’t mean the model isn’t working. The performance estimates remain valid. The model retrains weekly and will adjust.

Good to know: The model won’t always try to match every peak or dip (like during big promotions), as that can cause over-attribution to certain channels.

How to know if the model is performing well?

A good-performing model generally meets these two criteria (a quick check is sketched after the list):

  1. Model Accuracy Rate ≥ 80% on historical data.

  2. MAE ≤ 20% of total sessions, assuming normal conditions (no big outliers).
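
Putting the two criteria together, a rough health check could look like the sketch below. One assumption to flag: since MAE is expressed in sessions per day, we read "20% of total sessions" as 20% of the average daily session total. The function is purely illustrative, not part of the product:

  # Rough sketch of the two health criteria described above.
  # Assumption: "total sessions" is read as the average daily total,
  # since MAE is expressed in sessions per day.
  def model_is_healthy(accuracy_rate, mae, avg_daily_sessions):
      return accuracy_rate >= 0.80 and mae <= 0.20 * avg_daily_sessions

  print(model_is_healthy(accuracy_rate=0.85, mae=90, avg_daily_sessions=500))  # True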

If your Session Inclusion Rate is under 50%, it may mean:

  • Some UTM parameters aren’t tracking correctly.

  • You don’t yet have enough session data from some sources.

  • Some channels are irregular or not data-rich enough.


Clusters & Certainty

The model analyzes performance by clusters: groups of campaigns with similar goals or formats.

Examples:

  • Google: we use Google's ad network classifications to separate Search, Display, PMax, Shopping, and YouTube.

  • Meta: grouped by optimization goals (e.g., Conversion, Sell, Tell, Touch, and Lead Gen).

  • Other channels: each is modeled as a single cluster, unless CPM variations within the channel justify splitting into sub-clusters (e.g., awareness vs. retargeting campaigns).

Certainty Range

Each result comes with a confidence range.
Narrow = high certainty. Wide = less certainty.

Example:

  • ROAS of 3 with a range of [2.5, 4] = pretty confident.

  • ROAS of 3 with a range of [1, 5] = much more uncertain.

Take into consideration:

  • High certainty indicates strong model confidence in effects and attribution.

  • High uncertainty in both directions requires caution when interpreting results.

  • High uncertainty toward higher values indicates that the cluster may have high potential, but more data is needed to confirm it.

High uncertainty usually means:

  • Low spend or not enough data.

  • The campaign hasn’t run long enough.

Wide ranges can mean potential for higher or lower performance, so it's important to check the full interval range.
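
One rough way to read these ranges is to compare an interval's width to its point estimate. The sketch below uses an arbitrary 60% relative-width cutoff purely for illustration; it is not an official Billy Grace threshold:

  # Sketch: judging certainty from the relative width of a confidence range.
  # The 0.6 cutoff is an arbitrary illustration, not an official threshold.
  def describe_range(point, lower, upper):
      relative_width = (upper - lower) / point
      return "high certainty" if relative_width < 0.6 else "high uncertainty"

  print(describe_range(point=3.0, lower=2.5, upper=4.0))  # high certainty (width = 50% of ROAS)
  print(describe_range(point=3.0, lower=1.0, upper=5.0))  # high uncertainty (width = 133% of ROAS)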


UMM Results

See where the view effects happen.
You'll see:

  • Click-based attribution (MTA)

  • View-based attribution (UMM)

Together, these show how your ads work across the funnel, even when they don’t get clicked.

Example:

Let’s say your Meta Conversion campaign shows 100 attributed orders from impressions.

  • 80 came through Google sessions,

  • 10 through Direct,

  • 10 through Organic.

That means Meta impressions are causing session starts on other channels that eventually convert; in this example, especially via Google. This cross-channel view effect is one of UMM’s most powerful insights.

Note: We focus on cross-channel effects. For instance, Meta views leading to Meta sessions are expected; we’re more interested in how Meta views help other channels.

Performance Dashboards vs UMM Insights

When analyzing UMM performance across dashboards, it's important to understand how metrics differ between the Performance and Insights pages. Let's take Meta as an example.

  • Paid Performance

    You will see the interaction effect between channels: channels perform more strongly when they run simultaneously. In this scenario, it would mean that Meta performs better when combined with, for example, a TV commercial.

  • UMM Insights
    You will see only the view effects of Meta on other channels, specifically:

    • How Meta view impressions contributed to the performance of other channels, and

    • How that ultimately resulted in orders attributed back to Meta.

This helps isolate how Meta is assisting other channels via impressions (view-based).

How to use this data?

This dashboard helps you:

  • Identify which campaigns have strong or uncertain effects.

  • See how channels support each other, even without clicks.

  • Understand why some campaigns have higher UMM attribution than MTA attribution.

With these insights, you can:

  • Improve your campaign strategy.

  • Prioritize channels with proven cross-channel impact.

  • In case of low certainty: experiment by increasing spend, or wait for more data to accumulate.

Navigate to your UMM Insights dashboard straight away.

If you need help interpreting your UMM results, you can always reach out to support@billygrace.com.
