You can run a Google Ads account with excellent reported ROAS, clean conversion tracking, and well-structured campaigns — and still not know whether your ads are actually driving incremental revenue.

The reason: every attribution model, including Google’s data-driven model, answers the question “which ads got credit for conversions?” It does not answer “would those conversions have happened without the ads?” These are different questions, and only the second one tells you whether advertising is creating value.

Incrementality testing is the discipline of answering the second question.

Why Reported ROAS Is Not the Same as Incremental ROAS

When someone clicks your Google Shopping ad and buys, that conversion is attributed to your campaign. But consider what happens when you pause that campaign for a week. Your buyers fall into four groups:

- People who would have found you through organic search and bought anyway
- People who already knew your brand and would have come to you directly or via email
- People who discovered you only because the ad appeared
- People who knew of you but would not have purchased without the ad's prompt

The revenue from the first two groups is not incremental to your ads — it would have happened regardless. Only the third and fourth groups represent revenue that advertising is genuinely responsible for.

An advertiser with strong brand awareness, high organic ranking, and direct traffic may find that pausing their Shopping campaigns costs far less revenue than the reported ROAS would suggest. An advertiser with weak brand awareness and low organic visibility may find their ads are fully incremental — nearly every conversion from a paid click would not have happened organically.

Most accounts fall somewhere between these extremes, and without testing, you do not know where.

Method 1: Geo Holdout Test (The Gold Standard)

A geo holdout test is the most rigorous incrementality method available to most ecommerce advertisers. The principle is simple: split your addressable market into two geographic groups, run ads to one group, pause or reduce ads to the other group, and measure the revenue difference between them.

How to set one up:

Step 1: Choose comparable geographic regions. You need regions that are similar in size, demographics, and historical purchase behavior. For a UK store, this might be comparing regions like the Midlands vs. the North West. For a US store, paired DMAs with similar income demographics and population size.

Step 2: Assign regions to treatment (ads run normally) and holdout (ads paused or significantly reduced). Use random assignment where possible — do not put all your best markets in the treatment group.

Step 3: Run the test for at least 4 weeks to account for weekly seasonality and accumulate statistical significance. Shorter tests are usually underpowered.

Step 4: Measure revenue in each group. Use your Shopify backend or order management system to segment orders by shipping destination, not by attributed channel. This gives you an order count that is not dependent on ad attribution.
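
The region segmentation in Step 4 can be sketched in a few lines. The column names (`region`, `total`) are assumptions for illustration; map them to the actual fields in your Shopify or order-management export.

```python
# Aggregate backend orders by shipping region, independent of ad attribution.
# Field names ("region", "total") are placeholders for your export's columns.
from collections import defaultdict

orders = [
    {"region": "Midlands", "total": 45.00},
    {"region": "North West", "total": 80.00},
    {"region": "Midlands", "total": 32.50},
]

revenue_by_region = defaultdict(float)
for order in orders:
    revenue_by_region[order["region"]] += order["total"]

print(dict(revenue_by_region))
```

Because this sums raw orders by destination, the resulting totals owe nothing to any attribution model.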

Step 5: Calculate incrementality: (treatment group revenue per capita - holdout group revenue per capita) / holdout group revenue per capita. This is your incremental revenue lift from advertising.

Step 6: Compare to ad spend to calculate true incremental ROAS.
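
The arithmetic in Steps 5 and 6 fits in a short script. All figures below are made up for illustration; substitute your own test's revenue, population, and spend numbers.

```python
# Hypothetical 4-week geo test: per-region revenue, population, and ad spend.
# Every number here is illustrative, not from real data.

def incremental_lift(treatment_revenue, treatment_pop, holdout_revenue, holdout_pop):
    """Step 5: difference in revenue per capita, relative to the holdout
    group's revenue per capita."""
    treat_pc = treatment_revenue / treatment_pop
    hold_pc = holdout_revenue / holdout_pop
    return (treat_pc - hold_pc) / hold_pc

# Treatment regions: £52,000 across 5.0M people; holdout: £40,000 across 4.8M.
lift = incremental_lift(52_000, 5_000_000, 40_000, 4_800_000)

# Incremental revenue = treatment revenue above the holdout's per-capita baseline.
treat_pc = 52_000 / 5_000_000
hold_pc = 40_000 / 4_800_000
incremental_revenue = (treat_pc - hold_pc) * 5_000_000

# Step 6: compare to spend in the treatment regions during the test.
ad_spend = 10_000
incremental_roas = incremental_revenue / ad_spend

print(f"Lift: {lift:.1%}")
print(f"Incremental revenue: £{incremental_revenue:,.0f}")
print(f"Incremental ROAS: {incremental_roas:.0%}")
```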

Limitations: Geo holdout tests require sufficient order volume per region to reach statistical significance. A store with 30 orders per month nationwide cannot run a meaningful geo test — there are not enough orders to detect a signal above noise. A store with 500+ monthly orders spread across regions can run a credible test.

Method 2: Google Ads Campaign Experiments

Google Ads has a built-in experiment framework that splits auction-level traffic between a control (existing setup) and a treatment (a change you want to test). This is the easiest incrementality test to run within the Google Ads interface.

For PMax and Shopping campaigns, you can create a draft experiment that runs alongside your existing campaign at a 50/50 traffic split. The experiment can test:

- A different bidding strategy or tROAS target
- A budget change
- A structural change, such as different asset groups or product segmentation
- A full pause of the variant, which measures incrementality directly

To test full incrementality: create a campaign experiment, set it to 50/50 traffic split, and in the experiment variant, set the campaign to paused. After 4+ weeks, compare conversion volume between the original campaign (running normally) and the paused variant. The revenue difference attributable to the traffic that would have come through the paused campaign is your incrementality estimate.

This method has an advantage over geo tests: the split happens at the auction level, which is random by nature. You avoid the geographic self-selection bias that can affect geo tests.

The limitation: you need enough conversion volume in the experiment to reach statistical significance. Google Ads shows a statistical significance indicator in the experiment results. Do not draw conclusions until confidence is above 95%.
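
As a sanity check outside the Google Ads UI, a standard two-proportion z-test answers roughly the same significance question the built-in indicator does. The arm numbers below are hypothetical.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: are the conversion rates in the
    control and experiment arms significantly different?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results after 4+ weeks: clicks and conversions per arm.
z, p = two_proportion_z(conv_a=210, n_a=10_000, conv_b=150, n_b=10_000)
significant = p < 0.05  # matches the "confidence above 95%" threshold
print(f"z = {z:.2f}, p = {p:.4f}, significant: {significant}")
```

If `p` sits above 0.05, the honest conclusion is "not enough data yet", not "no effect".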

Method 3: Conversion Lift Studies

Google offers Conversion Lift studies for Google Ads accounts with sufficient scale (typically $30k+ monthly spend). These are managed tests that Google runs on your behalf, using a holdout group that is systematically not shown your ads and measuring conversion rate differences.

Conversion Lift studies are more rigorous than DIY geo tests because Google controls the holdout randomization at the impression level. They require working with a Google Ads representative to set up.

If your account is at the scale where this is available, it is worth requesting. The output is a measured incremental conversion rate and an incremental ROAS figure that is more methodologically sound than any self-service test.

Method 4: Ghost Bids / Causal Impact Analysis

For accounts where a controlled experiment is impractical, including those running Meta Ads alongside Google Ads, where channel effects overlap and are hard to split cleanly, a time-series causal impact analysis can estimate incrementality without a controlled experiment.

The approach: use Google’s CausalImpact library (an open-source R package, with Python ports available) to model what revenue would have been in the absence of advertising, using control variables like organic traffic, email revenue, and external demand signals. The gap between modeled baseline and actual revenue during the advertising period is your incrementality estimate.

This is more technically complex and less precise than a controlled experiment, but it can be done with historical data without running a live test. It is useful for establishing a prior before investing in a full experiment.
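
A minimal sketch of the idea, using made-up daily figures: fit a baseline on the pre-advertising period from a control series (here, organic sessions, via ordinary least squares), project that baseline into the advertising period, and treat the gap as the incrementality estimate. The real CausalImpact library fits a Bayesian structural time-series model with credible intervals; this is only the shape of the logic.

```python
# Simplified stand-in for the CausalImpact approach. All numbers are invented.

def fit_line(xs, ys):
    """Least-squares slope and intercept for a single covariate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Pre-period (no ads): daily organic sessions vs daily revenue.
pre_organic = [900, 1000, 1100, 950, 1050]
pre_revenue = [1800, 2000, 2200, 1900, 2100]
slope, intercept = fit_line(pre_organic, pre_revenue)

# Post-period (ads running): predict the no-ads baseline from organic traffic,
# then attribute the gap between actual and baseline revenue to advertising.
post_organic = [1000, 1100, 1050]
post_actual = [2600, 2900, 2750]
baseline = [slope * x + intercept for x in post_organic]
incremental = sum(a - b for a, b in zip(post_actual, baseline))
print(f"Estimated incremental revenue: {incremental:.0f}")
```

The weakness is visible in the sketch: the estimate is only as good as the baseline model, which is why a controlled experiment remains the stronger evidence.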

What to Do With Incrementality Results

If your test shows high incrementality (80%+ of attributed conversions are truly incremental), your reported ROAS is a relatively accurate guide. You can trust Smart Bidding’s optimization because its inputs reflect real advertising-driven value.

If your test shows low incrementality (40% or fewer attributed conversions are incremental), you have a problem. Your platform-reported ROAS is significantly inflated by credit for organic, direct, and brand conversions that would have happened without ads. The appropriate response is not to panic — it is to restructure where you spend and where you do not.

Common low-incrementality findings and responses:

Brand campaigns are low incrementality. Your organic ranking for brand terms is strong — most users searching for you would find you through organic results if the paid ad were not there. Reduce brand campaign spend or switch to a Maximize Clicks strategy to lower CPC rather than maintaining a tROAS-optimized campaign.

Retargeting is low incrementality. Your cart abandoners are returning to buy regardless of retargeting ads. This is common for stores with strong email marketing — the email re-engagement is doing the work, and the retargeting display ad is just getting attributed credit. Reduce retargeting spend and measure whether cart recovery rates change.

Non-brand Shopping is high incrementality. This is where your advertising spend is actually creating new revenue. Invest more here.

The Honest Baseline

Most ecommerce stores have never run an incrementality test. Their entire advertising decision-making is built on attribution data that may significantly overstate ad-driven revenue.

This does not mean Google Ads is not working. For most stores it is. But the degree to which it is working, and where in the account the real value is concentrated, requires testing to know.

Running one geo holdout test or campaign experiment per year — for your largest campaign during a representative traffic period — gives you a calibration point. You may find that your reported 400% ROAS reflects 280% true incremental ROAS. That is still profitable, and now you have a realistic benchmark to manage against.
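
That calibration is a single multiplication. Assuming an illustrative 70% incremental fraction from a test:

```python
# Reported ROAS scaled by the measured incremental fraction gives a working
# incremental ROAS. The 0.70 fraction is an illustrative test result.
reported_roas = 4.00         # 400% reported
incremental_fraction = 0.70  # share of attributed revenue the test showed was incremental
incremental_roas = reported_roas * incremental_fraction
print(f"{incremental_roas:.0%} incremental ROAS")
```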

Or you may find that your non-brand Shopping campaigns are nearly fully incremental at 350% ROAS while your retargeting at reported 900% ROAS is barely incremental at all. That reallocation finding alone justifies the test.

The goal is not to prove that advertising does not work. It is to know which parts of your advertising are working, to what degree, so budget follows actual value rather than attributed credit.

Related Posts

How Google Ads Tracks Sales Across Multiple Sessions: Attribution Windows Explained


How Remarketing Lists Work in Google Ads — and Why Most Ecommerce Stores Set Them Up Wrong


ROAS Is the Wrong Metric: How to Track Profit Margin and POAS in Google Ads

Adnan Agic

Google Ads Strategist & Technical Marketing Expert with 5+ years experience managing $10M+ in ad spend across 100+ accounts.
