Google Doubles Down On AI Overviews – Ray AIO Update


Google’s Head of Search, Liz Reid, wrote a blog post named “AI Overviews: About last week.” She basically said that, overall, the vast majority of AI Overviews are really good, and that Google did find examples where it can make improvements. But AI Overviews are here to stay, and Google will continue to show them in Google Search.

As you remember, Google launched AI Overviews a couple of weeks ago in the US. Then, over time, many started to see and share weird and embarrassing (sometimes harmful) examples of AI Overviews, which led to Google updating its help documentation and Google’s CEO going on the defensive.

She said, “We found a content policy violation on less than one in every 7 million unique queries on which AI Overviews appeared.” “We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback,” she added.

Some are calling the improvements, the updates made to these AI Overviews, the Ray Update. Mike King suggested the name on X, saying, “I’m gonna name the first algorithm update of the AIO era. We’re gonna call this one the ‘Ray Filter’ or the ‘Ray Update,’ named after Lily Ray.” Lily was instrumental in pushing Google to work harder on these AI Overviews by sharing countless examples of where they went wrong.

Here are some bullets on what Liz Reid said; I go a bit deeper on Search Engine Land, and there is more coverage on Techmeme. I should note, this is what she said, not what I am saying:

  • Searchers like AI Overviews and are engaging more with them and with the publishers referenced in them
  • AI Overviews work very differently than chatbots and other LLM products
  • AI Overviews are integrated into core search and only show information that is backed up by top web results
  • AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might
  • When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.
  • AI Overviews’ accuracy rate is on par with featured snippets
  • Google said they’ve “seen nonsensical new searches, seemingly aimed at producing erroneous results.”
  • There have been a large number of faked screenshots shared widely
  • “But some odd, inaccurate or unhelpful AI Overviews certainly did show up,” Google admitted
  • There can be “data voids” and “information gaps” where Google might cite pages it should not, like satire documents (as in the case of “How many rocks should I eat?”)
  • In some cases Google said the AI Overviews misinterpret language on webpages and present inaccurate information

So now what will Google do to improve AI Overviews?

  • Google won’t individually fix each AI Overview that goes bad; instead, it updates its systems to improve what went wrong so the fix works for other queries too
  • Google built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.
  • Google updated its systems to limit the use of user-generated content in responses that could offer misleading advice.
  • Google added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
  • For topics like news and health, Google said it already has strong guardrails in place. For example, Google said it aims to not show AI Overviews for hard news topics, where freshness and factuality are important.
  • In the case of health, Google said it launched additional triggering refinements to enhance its quality protections.


Forum discussion at WebmasterWorld.

Source: Seroundtable.com
