Yesterday, Google began slowly rolling out Google Bard to some initial reporters and opened a waitlist to users in the US and UK. I personally gained access to Bard yesterday at 1pm ET, after writing most of this article. You can sign up for the waitlist at bard.google.com (it does not yet work with Google Workspace accounts).
Below you will find more information on how Bard looks, how it works, how the citations/sources work, its limitations, early impressions and more. There is a lot here – and it is super early.
My early impression is that Google is clearly positioning Bard to be very different from Google Search. In addition, Google is also making sure Bard feels and works differently than Bing Chat. Bing Chat, to me, feels way more thought out in terms of the user experience and all the tiny details in how it works with Bing Search. Google is making it super clear right now that Bard is not Search, going as far as putting a “Google It” button in the Bard results so that you are taken out of Bard and into Search.
Bard does not do a lot of what Bing Chat and ChatGPT do, but Bard is way faster. Bard has no ads; Bing Chat does have ads. Bard rarely shows citations/links; Bing Chat shows citations and links in a much more prominent way. Bard and Bing Chat are just very different, while being similar in purpose.
Bard is Google’s experimental conversational AI service, powered by LaMDA, where Google can answer questions that might not have one right answer. Google said, “Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It’s grounded in Google’s understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.” In short, it will get better over time, so don’t be too harsh on Google…
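To make that “prediction engine” description a bit more concrete, here is a minimal Python sketch of next-word sampling with a temperature-style knob for the “flexibility” Google mentions. The vocabulary and probabilities are made up purely for illustration and say nothing about how LaMDA is actually built or tuned.

```python
import random

# Made-up next-word probabilities for a prompt like "The cat sat on the" –
# purely illustrative numbers, not anything from LaMDA or Bard.
next_word_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next_word(probs, temperature=1.0):
    """Pick one next word. A low temperature almost always picks the most
    probable word; a higher temperature allows less likely, more 'creative' picks."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs, temperature=0.1))  # nearly always "mat"
print(sample_next_word(next_word_probs, temperature=1.5))  # more varied output
```

An actual LLM repeats this kind of pick one word (token) at a time, with each choice conditioned on everything generated so far.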
As a reminder, Google said Bard is not Search. We have quotes from Google’s Bard lead: “It’s an experiment that’s a collaborative AI service that we talked about,” Krawczyk said. “The magic that we’re finding in using the product is really around being this creative companion to helping you be the sparkplug for imagination, explore your curiosity, etc.” But he added, “we can’t stop users from trying to use it like search.”
What Bard Looks Like
Here are some screenshots and videos of Bard from Google:
Citations/Links to Publishers Are Hard To Come By
It is hard to find sources, citations and links in Google Bard, but they do come up if you try hard enough. Gary Illyes from Google did say publishers will get traffic from Bard – maybe the “Google it” button in Bard is what he meant?
I asked Areej Abuali and Billie Geena – who both already have access – and neither was able to find Bard giving links to any source. But some people, including in my tests, were able to see some sources and links. Google Bard also has a “Google it” button…
Nope, zero citations in all prompts I’ve done so far
— Areej (@areej_abuali) March 21, 2023
No I can’t see any – I can take a screenrecord for you
— Billie 🦕 (@BillieGeena) March 21, 2023
Actually got a short answer for asking about search engine roundtable – here is all the information it currently provides. pic.twitter.com/332tgKQ3c4
— Billie 🦕 (@BillieGeena) March 21, 2023
If you really work hard to get a citation, you kind of get one – but not a link:
No citations, just a link to “Google it”. Hilarious answer when asked point blank about its sources. pic.twitter.com/98lM4HbB6X
— Simon Lesser (@simonlesser) March 21, 2023
And sometimes not:
I have seen it source the books I asked it for, but the Source was just a Google search for Google Analytics Books pic.twitter.com/SVneA6BP2d
— Marc Agate (@MJAgate) March 21, 2023
When I gained access, I asked Bard why exactly it refused to give sources and citations, and it said:
I cannot show you the URL of where I came up with this answer because I did not come up with this answer from a specific URL. I am trained on a massive dataset of text and code, including the Google Search index. This allows me to access and process information from the real world through Google Search and keep my response consistent with search results.
Why Google Bard cannot cite sources, give citations or link to content creators/publishers pic.twitter.com/3hMjKNkEsb
— Barry Schwartz (@rustybrick) March 21, 2023
Oh wait, maybe you can force a link if you try hard enough and are specific enough – but this is not good enough:
Source link at the bottom of the query:
What is iPullRank? pic.twitter.com/yy1rItzItb
— Garrett Sussman ☕️🔎 (@garrettsussman) March 21, 2023
I am starting to see sources listed now pic.twitter.com/DX1x2g2UCV
— Barry Schwartz (@rustybrick) March 21, 2023
Just not the best experience:
Well, this would suck for me if people used Bard to search for this query. No citations, but when you click “Google it”, Bard provides a link to a fresh SERP where I have the featured snippet. I still can’t believe there aren’t more citations… pic.twitter.com/ho37Dhunjm
— Glenn Gabe (@glenngabe) March 21, 2023
Here is why Google Bard is less likely to provide citations: “Bard is trained on a massive dataset of text and code, and it can be difficult to determine which sources were used to generate a particular answer.”
Why Google Bard is often not going to link or source or provide citations… pic.twitter.com/oP6MWYwc8u
— Barry Schwartz (@rustybrick) March 21, 2023
Early Impressions
The folks at The Verge played with Bard in a limited way and they said:
In a demo for The Verge, Bard was able to quickly and fluidly answer a number of general queries, offering anodyne advice on how to encourage a child to take up bowling (“take them to a bowling alley”) and recommending a list of popular heist movies (including The Italian Job, The Score, and Heist). Bard generates three responses to each user query, though the variation in their content is minimal, and underneath each reply is a prominent “Google It” button that redirects users to a related Google search.
Bard’s interface is festooned with disclaimers to treat its replies with caution
As with ChatGPT and Bing, there’s also a prominent disclaimer underneath the main text box warning users that “Bard may display inaccurate or offensive information that doesn’t represent Google’s views” — the AI equivalent of “abandon trust, all ye who type here.”
As expected, then, trying to extract factual information from Bard is hit-and-miss. Although the chatbot is connected to Google’s search results, it couldn’t fully answer a query on who gave the day’s White House press briefing (it correctly identified the press secretary as Karine Jean-Pierre but didn’t note that the cast of Ted Lasso was also present). It was also unable to correctly answer a tricky question about the maximum load capacity of a specific washing machine, instead inventing three different but incorrect answers. Repeating the query did retrieve the correct information, but users would be unable to know which was which without checking an authoritative source like the machine’s manual.
Billie Geena gained access to Bard right away; here are some of her tweets:
I got early access to Bard so the first thing I had to do is ask about myself
And ok this is exciting pic.twitter.com/qAmR0rExdO
— Billie 🦕 (@BillieGeena) March 21, 2023
I’m finding playing with this really exciting – however it does now cite it’s sources. But it’s very easy to switch your question into a Google search
— Billie 🦕 (@BillieGeena) March 21, 2023
Areej Abuali said OpenAI’s ChatGPT beats Google Bard in her early tests:
Okay, I spent 5 minutes on Bard and ChatGPT clearly wins – no thread, no analysis, nothing, that’s it, that’s the tweet.
— Areej (@areej_abuali) March 21, 2023
Some more tweets in the wild:
Google Bard can’t write a function that adds two numbers pic.twitter.com/t1B1WHRPrr
— Jane Manchun Wong (@wongmjane) March 21, 2023
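For reference, the task in that tweet is about as small as coding prompts get – something like this Python function (my own illustrative example, not Bard’s output):

```python
def add(a: float, b: float) -> float:
    """Return the sum of two numbers – the task Bard was reportedly asked to do."""
    return a + b

print(add(2, 3))  # 5
```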
Oh hey, Google Bard. pic.twitter.com/w1ENWRObAM
— Lance Ulanoff (@LanceUlanoff) March 21, 2023
******* #Google #Bard pic.twitter.com/5pS7sIXtRH
— Justin Chen (@ch3njus) March 21, 2023
Well, ChatGPT and MidJourney don’t have anything to worry about, anytime soon. Ladies and Gentleman, I give you, Googles Bard! 😂🤦♂️ pic.twitter.com/KFSwvZV4GT
— Lee リー (@YodasMyDad) March 21, 2023
Hmm I take it back. GPT3.5 is still much better than Bard. @GoogleAI #bard #chatgpt
I asked Bard and ChatGPT-3.5 to derive time dilation. Bard doesn’t quite derive it whereas GPT-3.5 went into all the details, and got the answer correct. pic.twitter.com/tGXrHpQV55
— Ben Athiwaratkun (@ben_athi) March 21, 2023
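For context, the standard special-relativity time dilation result that prompt asks for looks like this (the textbook formula, not either chatbot’s output):

```latex
% Time dilation: a clock moving at speed v relative to an observer runs slow
% by the Lorentz factor gamma, where c is the speed of light.
\Delta t' = \gamma \, \Delta t = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}
```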
Bard vs ChatGPT4, milk and a hat in a safe on a hill. pic.twitter.com/DXv2VDcKL3
— Andrew Riley (@_happyKC) March 21, 2023
Google bard can’t code or even hold context of previous conversations..
Guess ChatGPT still has no serious competition lol #bard #Google #ChatGPT #googlebard pic.twitter.com/i9PN7id3Uf
— Moe (@MoeX003) March 21, 2023
I guess Google is okay with buying links now? 🙂
— Dean Cruddace (@DeanCruddace) March 21, 2023
Straight from the source ya’ll (as if our Google search liaisons haven’t been saying this forever)#seo #bard pic.twitter.com/sHO0a75Uso
— Danielle Rohe (@d4ni_s) March 21, 2023
My very first use of Bard. What do we think?
Prompt:
“Create an analogy for search engine optimization based on the career of Allen Iverson” pic.twitter.com/SGeYipWRAA
— Garrett Sussman ☕️🔎 (@garrettsussman) March 21, 2023
Local:
I asked #BARD for the best breakfast place near where I live. Gave me three different lists with limited overlap. Interesting. pic.twitter.com/2MQgIYzTTE
— Greg Sterling 🇺🇦 (@gsterling) March 21, 2023
Asked for “handyman in 94118” and again got different lists (one is default) with some overlap. Then I “Googled it” and the results were completely different. None of the Local Pack results appear in the #BARD lists. There are also no URLs in the Bard lists. pic.twitter.com/WrNEeufoc3
— Greg Sterling 🇺🇦 (@gsterling) March 21, 2023
But Bard does not always get it right, just as Google warned:
But Bard gets it wrong, I never worked with Coca-Cola, Disney, Nike, Oracle, IBM etc – at least not that I know of pic.twitter.com/gOy4vdQbNj
— Barry Schwartz (@rustybrick) March 21, 2023
Can Bard tell you if your content meets EEAT?
interesting pic.twitter.com/q9CBDbcimL
— Barry Schwartz (@rustybrick) March 21, 2023
Run a health and medical site? You’re safe from Bard for now. 🙂 Bing Chat crushes Bard on this one (clearly)… Prompt: “What are the symptoms of strep throat?” Bard can’t answer (at least yet). Bing Chat with a strong answer + citations. Winner: Bing Chat pic.twitter.com/hDgIzjj3aW
— Glenn Gabe (@glenngabe) March 21, 2023
Here is a good comparison tweet:
Google Bard areas for improvement
⚠️ No coding capabilities 🚫👨‍💻
⚠️ Multi-language not at the level of competitors 🌍
⚠️ Fails on common understanding of the world 👶 See below prompts and comparisons with Bing, GPT-4 and GPT-3.5. #google #bard #workinprogress pic.twitter.com/Bax9jNo6t5
— ᐸGerardSans/ᐳ🤣🇬🇧 (@gerardsans) March 21, 2023
And yes, Bard is a kiss up:
— Greg (@PPCGreg) March 21, 2023
The most important feature:
Most important feature of Google Bard for me… pic.twitter.com/40VNGSN2Hr
— Barry Schwartz (@rustybrick) March 21, 2023
I am looking forward to testing out Bard more and letting you know what I find. Until then, we wait. You can read the other coverage on this announcement on Techmeme.
Forum discussion at Twitter and WebmasterWorld.