One reason pay-per-click (PPC) marketing is so challenging is that there isn’t one right answer to any problem.
When considering account structure, there’s an argument to be made for single keyword ad groups (SKAGs), single theme ad groups (STAGs), dynamic search ads (DSAs) and Performance Max. This applies to bidding and creative as well.
However, if you rely too heavily on Google Ads strategies that seemed successful early in your PPC career, you might develop biases that limit your ability to tailor your approach to each client’s unique needs.
This article will break down:
- Major biases a strategist might have.
- When these biases are justified and when to re-examine them.
- How to safely test your assumptions without compromising your client’s profitability.
Bias 1: Smart vs. manual bidding
There are two major schools of thought on bidding, and both have pros and cons.
Some believe manual bidding is outdated and that all bidding should be automated or smart (conversion-based). Others believe smart and auto bidding perform poorly due to data limitations and the potential for disproportionate bid-to-budget ratios. These biases often trace back to when someone started in digital marketing.
For example, people who started in digital marketing between the early 2000s and 2010s often prefer manual bidding because it was the main approach back then. It requires a deep understanding of the auction process and taking full responsibility for identifying which signals are important to the brand.
Conversely, those who entered the field closer to 2020 may lean toward auto and smart bidding. Ad platforms heavily promote these strategies, and they require less manual intervention.
Manual bidding is typically favored by those who distrust ad platforms and prefer control, while smart bidding appeals to those who dislike micromanaging accounts and prefer efficiency. Both methods can be valid, depending on the context:
Low-volume accounts
- Manual bidding may be necessary due to insufficient data to support smart bidding. However, bidding strategies like Max Clicks with a bid cap can help unlock volume.
- Based on Optmyzr data (25,000 accounts reviewed), Max Clicks with and without a bid cap beat manual bidding on CPC, conversion rate and CPA, though manual did better on ROAS.
High-volume accounts
- Relying exclusively on manual bidding can be unwise, as it ignores the benefits of smart bidding signals. Once you can consistently get at least 60 conversions in a 30-day period, smart bidding does outperform manual bidding (see the sketch after this list).
- Optmyzr data found that across 25,000 accounts, Max Conversion Value beat manual bidding by 400%. Manual beat Max Conversions pretty handily, which is why many might be biased against smart bidding (Max Conversions has 30% higher adoption than Max Conversion Value).
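To make that conversion threshold concrete, here is a minimal Python sketch of the decision rule described above. The function and strategy labels are illustrative rather than any platform API; the 60-conversion threshold is the one cited in this section.

```python
# Illustrative decision rule, not a platform API: choose a bidding approach
# from trailing 30-day conversion volume, per the threshold cited above.
def recommend_bidding(conversions_last_30_days: int, threshold: int = 60) -> str:
    if conversions_last_30_days >= threshold:
        # Enough data for the algorithm to learn from.
        return "smart bidding (e.g., Max Conversion Value)"
    # Below the threshold, retain control while data accumulates.
    return "manual CPC or Max Clicks with a bid cap"

print(recommend_bidding(75))  # smart bidding (e.g., Max Conversion Value)
print(recommend_bidding(20))  # manual CPC or Max Clicks with a bid cap
```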
Testing manual vs. smart bidding
To test these bidding strategies, you need to be able to control variables and have a risk-tolerant campaign.
For local businesses, this can be straightforward. Simply target different locations and compare performance. For single-service or product-focused accounts, choose a part of the market where some fluctuation is acceptable.
Remember that automatic or smart bidding requires a learning period of at least five days, potentially extending to 14 days. During this time, avoid making significant changes to prevent fluctuations. However, you can adjust bid floors and caps without triggering a new learning period.
If you are testing manual bidding, be prepared to make precise bid adjustments, considering audiences, devices, locations and times of day. Decide whether to adopt aggressive or conservative cost-per-click (CPC) bids and adjust accordingly.
For example, if you decide to go conservative on the bid, you might set a bid of $3 with bid adjustments of ~50%. An aggressive bid of $5 might warrant 10%–15% bid adjustments. Remember that bid adjustments are cumulative and can be positive (directing budget toward something) or negative (directing budget away from something).
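To see how those adjustments compound, here is a minimal Python sketch using the hypothetical bids above. The multiplicative stacking mirrors how Google Ads combines multiple bid adjustments (device, location, ad schedule, audience) in most cases.

```python
# Stacked bid adjustments multiply together, so they compound.
def effective_cpc(base_bid: float, adjustments: list[float]) -> float:
    bid = base_bid
    for adj in adjustments:
        bid *= 1 + adj  # +0.50 is a +50% adjustment; -0.30 is -30%
    return round(bid, 2)

print(effective_cpc(3.00, [0.50, 0.50]))   # conservative base: 3 * 1.5 * 1.5 = 6.75
print(effective_cpc(5.00, [0.15, 0.10]))   # aggressive base: 5 * 1.15 * 1.10 = 6.33
print(effective_cpc(3.00, [0.50, -0.30]))  # a negative adjustment steers budget away
```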
There is no definitive answer to the manual vs. smart bidding debate. The key is supporting your chosen strategy and communicating your decisions clearly with your client.
Bias 2: Performance Max as a branded cannibal
Performance Max campaigns have garnered mixed reactions due to their focus on visual content and initial lack of control over certain elements.
Initially, these campaigns often drove branded queries, sparking debates about their true value. However, Performance Max has evolved, offering tools such as:
- Asset and asset group level data
- YouTube placements for potential account-level exclusions
- Campaign-level exclusions for placements, topics and negative keywords
- Generative AI tools for brand safety guidelines
- Portfolio bidding with bid caps and floors via Search Ads 360
Those who struggle with Performance Max often excel at search-first marketing, whereas Performance Max is designed to allocate budget based on where customers are present and where budget is available.
If visual content dominates your budget distribution, it may indicate a visual preference among your audience or budget constraints affecting search bids.
Key considerations for Performance Max
- Conversion volume: Can you achieve at least 60 conversions in a 30-day period? If not, either avoid Performance Max or allow branded traffic within the campaign and turn off stand-alone branded campaigns.
- Account structure: Decide whether multiple Performance Max campaigns with different location targets and budgets or a single campaign with multiple asset groups better suits your needs.
- Objective alignment: Ensure Performance Max campaigns focus on driving leads and sales, not top-of-funnel awareness or remarketing.
When testing Performance Max, it is important to allocate enough budget to the campaign (at least 10% of the total budget). If you are borrowing budget from existing campaigns, make sure you still honor bid-to-budget ratios.
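As a quick sanity check on that math, here is a hedged Python sketch. The 10% carve-out comes from the guidance above; the assumption that a day’s budget should cover roughly 10 clicks at your average CPC is one common rule of thumb for bid-to-budget ratios, so substitute your own.

```python
# Sanity-check a Performance Max test budget. The 10% carve-out follows the
# guidance above; the 10-clicks-per-day floor is an assumed rule of thumb.
def pmax_test_budget(total_daily_budget: float, avg_cpc: float,
                     carve_out: float = 0.10, min_clicks_per_day: int = 10) -> float:
    test_budget = total_daily_budget * carve_out
    floor = avg_cpc * min_clicks_per_day
    if test_budget < floor:
        raise ValueError(f"${test_budget:.2f}/day won't buy ~{min_clicks_per_day} "
                         f"clicks at a ${avg_cpc:.2f} CPC; rework the split.")
    return test_budget

print(pmax_test_budget(500.00, avg_cpc=4.00))  # 50.0: clears the $40/day floor
```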
Bias 3: Keyword structure and the future of keywords
Keywords have evolved from rigid syntax-oriented elements to signals guiding the system.
Despite this, biases around keyword structures persist, whether favoring single keyword ad groups, dynamic search ads or theme-oriented structures.
Single keyword ad groups
The basic premise of a SKAG is that you have one keyword in each ad group so you can benefit from a “perfect” keyword-to-ad-to-landing page relationship. This can either mean lots of ad groups per campaign or lots of campaigns with a single ad group with one keyword.
These are powerful when used in moderation and supported by sufficient budgets and aggressive negative keywords. However, they may struggle at low volumes, and if you can’t be surgical with your negatives, it’s very easy to create accidental duplicates.
SKAGs do best when you know exactly how your audience will search and want to allocate a very specific budget to those ideas. However, be careful not to include too many ad groups in the same campaign or too many campaigns.
The former will cause some ad groups to miss out on impressions, as budget flows to whichever ad groups win the initial impressions and conversions, while the latter will cause data threshold issues.
Keyword match types
Broad match keywords have long since transitioned from syntax matching to intent matching. Yet even phrase and exact match have close variants baked in, leading to divided opinions on how best to use keywords.
Testing broad match keywords in single keyword ad groups can be effective, so long as you add all other keywords as negatives (sketched below). Conversely, match type-specific campaigns can lead to accidental duplicates due to how close variants work (namely, that broad and phrase can match an exact term or its close variant).
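Here is a hypothetical Python sketch of that cross-negative setup: each broad match SKAG gets every sibling keyword as an exact match negative, so queries route to the ad group built for them. The keyword list and output format are illustrative; the resulting lists would be applied through the Google Ads UI, an editor, or a script.

```python
# Hypothetical example: for each broad match SKAG, add every other
# keyword as an exact match negative so queries don't cross-pollinate.
keywords = ["running shoes", "trail running shoes", "waterproof running shoes"]

cross_negatives = {
    kw: [f"[{other}]" for other in keywords if other != kw]  # [...] = exact match
    for kw in keywords
}

for ad_group, negatives in cross_negatives.items():
    print(f"SKAG '{ad_group}': negatives {negatives}")
```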
While exact match might consistently “perform better” than broad, it’s not really fair to say they have the same job. Consider the roles assigned to each entity within your account.
- For transactional goals, you might lean toward non-broad keywords (at least three words in the keyword phrase), exact match single theme ad groups, or dynamic search ads with extensive negatives.
- For data acquisition or ramping up an account, broader keywords and concepts may be more effective.
Addressing platform intent bias
We couldn’t discuss bias in PPC without addressing many practitioners’ bias against advice and updates from the ad platforms themselves.
Between brands not wanting to part with profit data (even though sharing it would improve results and reporting) and treating any platform-initiated action as an overstep (even one as innocent as pausing keywords with no data over the past 13 months), it’s hard to see a way for brands and ad platforms to rebuild trust.
A big source of this mistrust is that someone who learns one network may struggle to adapt to the rules of another.
For example, most paid search networks operate at the campaign level, while most paid social networks operate at the ad set level. Ad networks like Google favor established entities, while Meta favors newer ones.
All of these mechanics end up creating biases around which channels are best suited for a brand and whether the channel will actually be a good partner.
While this bias isn’t as conquerable as the others, we must all remember that humans work on the product teams at every ad network, and they thrive on specific, constructive feedback.
If you’re going to test a network, make sure that you budget enough for a realistic test (time and money) and that you’re upfront with your stakeholders on what kind of reporting you can expect.
Conclusion
Biases are an inherent part of human nature, and while we can’t eliminate them, we can identify and counteract them through objective testing. Choose one test to run as you approach the fourth quarter to challenge your biases and validate or refine your strategies.
Embrace the flexibility of both smart and manual bidding, understand the potential of Performance Max campaigns and structure your keywords to maximize relevance and performance.
This approach will help you keep your PPC campaigns adaptable and effective, benefiting your clients and helping them achieve their business goals.