You are running Performance Max and Standard Shopping at the same time. PMax is showing a 600% ROAS. Standard Shopping shows 280%. Overall revenue has not moved. Which one is actually driving sales?
This is one of the most searched questions in PPC communities right now, and the honest answer is: it is harder to know than Google’s reporting suggests. Here is why, and what you can actually do about it.
Why Attribution Gets Complicated With Multiple Campaigns
In Google Ads, when a user clicks multiple ads before converting, a conversion model determines which click gets credit. The model Google uses by default is data-driven attribution, which distributes credit across the touchpoints in the conversion path based on their predicted contribution.
When you run PMax and Standard Shopping simultaneously, the same user can be touched by both campaigns before converting. In a data-driven model, both campaigns might receive partial credit for the same conversion. This is not wrong — it reflects that both touchpoints contributed — but it means the numbers in each campaign’s conversion column are not clean, isolated measures of that campaign’s performance.
The specific problem with PMax: PMax captures both upper-funnel touchpoints (Display, YouTube impressions, Discovery) and lower-funnel ones (Shopping, Search). When a user sees a PMax Display ad early in their journey, browses away, later clicks a Standard Shopping ad, and converts — both campaigns receive partial credit. But the Standard Shopping campaign was the last touch that drove the conversion decision, and PMax’s credit reflects an early touchpoint that may or may not have been necessary.
Why PMax ROAS Can Look Good When Sales Are Not Moving
Several dynamics cause this:
Brand term capture. Without brand exclusions, PMax bids on searches for your store name. Someone searching for “YourStoreName.com” directly is already intent-to-buy traffic. PMax captures that click, counts the conversion, and takes full credit for a sale that was going to happen regardless. This inflates reported ROAS without reflecting genuine advertising-driven incremental revenue.
Retargeting cannibalization. PMax allocates budget across all inventory types, which includes retargeting through Display and YouTube. A user who abandoned their cart yesterday was going to convert at a high rate regardless of whether they saw a retargeting ad. PMax can capture these high-conversion-rate users, report excellent ROAS on them, and consume budget that could have gone to prospecting new customers.
Attribution model credit allocation. As described above, data-driven attribution may assign PMax significant credit for conversions that would have happened through other campaigns or organic channels anyway.
The combination of these three effects means PMax’s reported ROAS can be substantially higher than its true incremental ROAS. And because PMax is a black box that does not expose placement-level data, it is difficult to decompose which portion of its reported conversions is genuinely incremental.
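To see how much brand capture alone can move the headline number, the decomposition is simple arithmetic. A minimal sketch, with all figures hypothetical (including the `incremental_roas` helper, which is illustrative, not an API):

```python
def incremental_roas(conv_value: float, cost: float,
                     non_incremental_value: float) -> float:
    """ROAS after stripping out conversion value that would have
    occurred anyway (e.g. purchases from brand-term clicks)."""
    return (conv_value - non_incremental_value) / cost

# Hypothetical: PMax reports $6,000 of value on $1,000 of spend
# (a 600% reported ROAS), but $3,500 of that value came from
# brand searches that were already going to convert.
reported = 6000 / 1000                          # 6.0 -> "600%"
true_roas = incremental_roas(6000, 1000, 3500)  # 2.5 -> "250%"
```

The gap between the two numbers is the inflation that brand exclusions are meant to remove from the report.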
How to Get a More Accurate Read
You cannot perfectly isolate incremental ROAS for PMax without running a controlled experiment. But you can take steps to make the reported numbers more meaningful.
Apply brand exclusions. This is the single most important step. If PMax is not excluding brand terms, its ROAS is contaminated by brand traffic. Once brand exclusions are applied and the campaign runs for 2-3 weeks, you will typically see reported ROAS decrease — but this is a more honest representation of what PMax is doing with non-brand budget.
Compare total account revenue, not campaign-level ROAS. The most useful attribution question is not “what is PMax’s ROAS” — it is “when PMax is running at this budget, does total store revenue increase compared to when it was not running, or was running at a lower budget?” Answering that requires looking at account-level revenue over time, not campaign-level metrics.
Run a holdout experiment. Google Ads has a campaign experiments feature. You can create a draft of your current campaign structure and run a 50/50 traffic split — one group sees PMax, one group does not. Revenue difference between the groups is your best estimate of PMax’s true incremental impact. This requires enough traffic to reach statistical significance and a few weeks to run, but it is the only way to get a genuinely clean answer.
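Whether a revenue difference between the two arms is signal or noise comes down to a significance test. Here is a sketch using a two-proportion z-test on conversion rates — the user counts and conversion numbers are hypothetical, and Google’s experiments UI surfaces its own significance estimate, so this is only for sanity-checking the math yourself:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rate in arm A (PMax on) vs arm B (PMax off).
    Returns (absolute lift, z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-approximation two-sided p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical split: 10,000 users per arm, 320 vs 280 conversions
lift, z, p = two_proportion_z(320, 10_000, 280, 10_000)
# p comes out above 0.05 here: the lift is real-looking but not yet
# statistically distinguishable from noise at this sample size
```

This is also a useful way to estimate up front how long the experiment needs to run: small lifts on modest traffic take weeks to separate from noise.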
Use Google Analytics 4 alongside Google Ads data. GA4 uses its own attribution model (data-driven by default in GA4, but you can check this). Comparing conversion credit in GA4 versus Google Ads can reveal where the two attribution models disagree and give you a more complete picture of the conversion path.
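Comparing the two platforms’ credit comes down to a relative-difference calculation per campaign. A sketch with hypothetical exported figures (the `credit_gap` helper is illustrative, not a GA4 or Google Ads API):

```python
def credit_gap(ads_credit: float, ga4_credit: float) -> float:
    """Relative disagreement: how much more (or less) credit
    Google Ads assigns a campaign compared to GA4."""
    return (ads_credit - ga4_credit) / ga4_credit

# Hypothetical monthly conversion credit exported from each platform
google_ads = {"PMax": 180.0, "Standard Shopping": 95.0}
ga4 = {"PMax": 130.0, "Standard Shopping": 120.0}

gaps = {c: credit_gap(google_ads[c], ga4[c]) for c in google_ads}
# A large positive gap on PMax alongside a negative gap on Standard
# Shopping suggests the two models disagree about who deserves the
# last-mile credit for the same conversions.
```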
When Your PMax ROAS Looks Great But Revenue Is Flat
If PMax ROAS is high but revenue is not responding proportionally, run through this checklist:
Is PMax spending heavily on brand terms? Check Search term categories in the PMax insights report. If brand-related terms dominate, brand exclusions are the immediate fix.
Has your Standard Shopping campaign declined in spend since PMax launched? PMax has priority over Standard Shopping for the same product when both are eligible. If Standard Shopping’s spend dropped significantly when PMax launched, PMax may be cannibalizing rather than adding incremental reach.
Is the conversion window long? If your conversion window is 30 or 90 days, PMax may be claiming credit for purchases that were initiated well before the PMax ad. Comparing 7-day conversion data versus 30-day can reveal how much of reported ROAS is from long-window attribution.
Has PMax been running long enough to exit the learning phase? In the first 4-6 weeks, PMax is still figuring out where to allocate budget across inventory types. Performance during this window is not representative.
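The 7-day versus 30-day comparison in the checklist above reduces to a single ratio. A sketch, with hypothetical figures:

```python
def short_window_share(value_7d: float, value_30d: float) -> float:
    """Share of 30-day-window conversion value already captured at
    7 days. A low share means most reported credit comes from
    long-lag conversions, which may have been initiated before
    the PMax ad was ever seen."""
    return value_7d / value_30d

# Hypothetical: $2,100 of value at the 7-day window vs $6,000 at 30 days
share = short_window_share(2100, 6000)  # 0.35
```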
Comparing PMax vs. Standard Shopping Side by Side in Looker Studio
Building a side-by-side comparison in Looker Studio helps you evaluate relative campaign performance without juggling multiple Google Ads views.
Set up the data source:
Connect your Google Ads account as a data source. The native Google Ads connector in Looker Studio exposes campaign-level performance data including campaign name, campaign type, impressions, clicks, cost, conversions, and conversion value.
Build the comparison table:
Create a table with Campaign as the dimension and these metrics as columns:
- Cost
- Conversions
- Conversion Value
- Cost per Conversion (calculated field: Cost / NULLIF(Conversions, 0))
- ROAS (calculated field: Conversion Value / NULLIF(Cost, 0))
- Click Share (if available from the connector)
Filter the table to show only your Shopping-related campaigns (PMax and Standard Shopping) by adding a filter such as Campaign Name contains “Shopping”, adjusted to whatever naming convention your campaigns use.
Add a time comparison:
Use a date range comparison (this month vs. last month, or the period since PMax launched vs. the equivalent period before) to see how campaign-level metrics evolved over time. A table that shows PMax ROAS this period vs. last period, alongside total account revenue this period vs. last period, surfaces the gap between campaign-level reported performance and actual business outcomes.
The chart that tells the most:
A time series showing total account Conversion Value per day alongside PMax’s daily spend is often the most revealing single view. If you see PMax spend increasing while total account conversion value stays flat, that is a strong signal that PMax is not driving incremental revenue — it is redistributing credit from other channels.
The Structural Decision: PMax + Standard Shopping or Just One?
If you have the same products eligible in both PMax and Standard Shopping simultaneously, PMax will win the auction by default. This means Standard Shopping is only showing on queries that PMax did not bid on — the leftover inventory, essentially.
For most accounts, running both simultaneously without careful product exclusions creates a situation where PMax is dominant and Standard Shopping is a minor supplement. The question of “which is performing better” becomes hard to answer because they are not competing on equal terms.
A cleaner structure:
- Run PMax for your full catalog with brand exclusions
- If you want Standard Shopping coverage, exclude specific product groups from PMax and keep those products in Standard Shopping only
- Now you have a genuine comparison between the two — different products, different campaigns, no overlap
This is more work to set up, but it gives you actual comparative data rather than the muddled view that comes from both campaigns competing for the same products.
Alternatively, run only PMax for 90 days, gather its performance data, then evaluate whether the additional complexity of Standard Shopping alongside it is worth it for your specific account. Many store owners running PMax alone with good signals, brand exclusions, and clean conversion tracking find it performs well without the management overhead of maintaining two campaign types simultaneously.
Related Posts
How Google Ads Tracks Sales Across Multiple Sessions: Attribution Windows Explained
Incrementality Testing: How to Know If Your Google Ads Are Actually Driving Sales
Google Ads for Stores With a Small Product Catalog: What Actually Works
Need Help With Your Google Ads?
I help e-commerce brands scale profitably with data-driven PPC strategies.
Get In Touch