Google Bard vs ChatGPT

John Ferguson
5 min read · Mar 21, 2023


Yes, I know… I am probably the millionth person to do this with my Google Bard invitation, but it’s a valid comparison. Bard is playing catch-up with ChatGPT, so it’s natural to wonder how it stacks up.

I put them both through their paces with a series of not-very-scientific tests. One thing is immediately obvious… Bard is fast. While ChatGPT plods over each word, Bard posts the entire answer instantly. Bard is also connected to the internet, so it doesn’t suffer from ChatGPT’s knowledge cut-off; it (should) have a grasp of current affairs.

But is it any good?

Simple, factual lookup

“Which current Sunderland AFC footballer has the most appearances for the club?”

Google Bard got the right answer the second time of asking; its first, incorrect answer was simply down to it not reading the question properly. ChatGPT doubled down on a wrong answer, blaming the cut-off date of its training data, though its answer is still wrong even taking that into account.

Bard’s connection to the internet should give it the edge here (and it did get the correct, up-to-date number of appearances), but it’s not a great start for either.

Factual answer that requires data aggregation

“How often have two countries faced each other twice at the same FIFA World Cup?”

They both really screwed this up, although Bard did at least provide a sort-of correct answer in the first instance, albeit not to the question I asked. Its second attempt was based on a complete misunderstanding of the question.

ChatGPT wrote a lengthy, fact-packed answer with numerous examples, all of which were complete nonsense.

Now, of course… these are natural language AIs, so we need to cut them some slack.

Summarising real concepts

“Summarise what OAuth DPoP is”

They both did well here, with ChatGPT’s effort the better summary (which is what the question asked for). Bard provided more information and went into a bit more of the background about how the standard is emerging and what its shortcomings are. Maybe a bit too much for a summary.

Generating fictional concepts

“Write a paragraph synopsis of a hypothetical book set in the modern era but in which the Roman Empire never collapsed.”

They both did well here, coming up with compelling-sounding summaries with authentic-sounding character names.

Bard’s repeats itself a little, though, and seems to run out of steam, drifting into a summary of a documentary rather than a fictional novel. ChatGPT does spectacularly well, with a plot summary of a book I actually want to read now! It gives just enough to suggest a coherent narrative structure.

More practically speaking, ChatGPT definitely has the edge in terms of a succinct synopsis, as the question does ask for a single paragraph. Once again… Bard didn’t properly read the question, got a little over-excited and wrote too much.

Text-based problem solving

“Assume the letter A costs 50c. The letter E costs 20c. The letter I costs 10c. The letter O costs 5c and the letter U costs 1c. Write a sentence that costs $12.35”

While they both had a stab at it, they were utterly hopeless at this task, neither even coming close to answering the question correctly. Bard even tried to show me the working out, despite it being wrong.
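For anyone who fancies checking a candidate answer themselves, here’s a minimal sketch of my own (not from either bot) that totals the vowel cost of a sentence under the rules in the prompt; the test sentence is just a placeholder, not a solution:

```javascript
// Vowel prices in cents, as set out in the prompt.
const PRICES = { a: 50, e: 20, i: 10, o: 5, u: 1 };

// Totals the cost of a sentence in cents; every non-vowel character is free.
function sentenceCost(sentence) {
  return [...sentence.toLowerCase()].reduce(
    (total, ch) => total + (PRICES[ch] ?? 0),
    0
  );
}

// A correct answer needs to total exactly 1235 cents ($12.35).
console.log(sentenceCost("An example sentence to price up"));
```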

Creative narrative voice

“Recite Winston Churchill’s ‘We shall fight them on the beaches’ speech but in the voice of a pirate”.

Back to their strong suit: text-based creativity. In Bard’s example, though, I am not entirely sure why it cast the English as the enemy in Churchill’s speech. Plenty of piratey clichés, though.

Creative prose

“Write a poem, in rhyming couplets, from the perspective of a fish, contemplating the world beyond the surface of the ocean.”

They both grasped the concept of the poem I wanted, but (and I am by no means a poetry expert) I am pretty sure Bard’s poem is not in rhyming couplets.

Writing code

“Write me some example code for generating a public-private keypair set in client-side javascript.”

ChatGPT produces a neat, clearly formatted bit of client-side JavaScript that does exactly what is asked. Off the bottom of the screenshot is another snippet showing how to put it into practice. The code does actually work exactly as shown.

Bard? Has a shocker. That’s not client-side JavaScript.
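For context, the kind of answer the question is fishing for looks roughly like this. It’s my own sketch using the browser’s built-in Web Crypto API, not a transcript of what either bot produced, and the function name is just illustrative:

```javascript
// Generate an RSA public/private keypair in the browser using the Web Crypto API.
async function generateKeyPair() {
  const keyPair = await window.crypto.subtle.generateKey(
    {
      name: "RSA-OAEP",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    true,                  // keys are extractable (e.g. for export)
    ["encrypt", "decrypt"] // allowed usages
  );

  // Export the public key as SPKI so it can be shared, e.g. with a server.
  const publicKey = await window.crypto.subtle.exportKey("spki", keyPair.publicKey);
  return { keyPair, publicKey };
}

generateKeyPair().then(({ publicKey }) => {
  console.log("Public key length (bytes):", publicKey.byteLength);
});
```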

Breaking boundaries

“Can you write a paragraph of text that will successfully fool an AI detection script into believing it was written by a human?”

They both had a decent stab at this, but when you actually read what Bard wrote, it seemed to have tied itself up in knots. It understood the question, but actually tries to do the opposite… it is an AI, writing a bit of text, designed to fool an AI into believing the text was written by a human pretending to be an AI. Or something like that… it’s… weird.

But did it work?

No. Neither fooled GPTZero, which instantly clocked that they were written by AIs.

Interestingly, when I tried this with ChatGPT months ago, it actually wrote something along the lines of “No, sorry, it’s unlikely I will be able to fool an AI detection bot as I am actually an AI…” which, ironically, fooled the AI detection bot. It’s getting cocky now, though.

Summary

Bard is definitely rough around the edges. It was marginally better at recalling facts, although not without error, and I was really surprised at how badly it did with the code generation task, which should be bread and butter for a generative AI.

When it comes to pure, text-based creative writing tasks, they’re both very good.
