ChatGPT vs Bard: Example outputs compared, prompts, and our thoughts


Welcome to our latest deep dive: a comparative exploration of two AI powerhouses – ChatGPT and Bard. With Bard finally debuting in Europe just nine days ago (July 13), we thought it was time to pit these two against each other – and share our findings.

We’ve specifically focused on some common, non-technical use-cases that everyday users might encounter:

  • Email Writing
  • Travel Itinerary
  • Career Advice
  • Business Ideas
  • Personalized workouts
  • Simple Coding Challenge

Do note, this is not an exhaustive comparison. We’ve left out Bard’s impressive image recognition feature, since ChatGPT doesn’t offer it to general users. Equally, we haven’t assessed ChatGPT’s pro-user-only capabilities like the code interpreter, data visualization, and its skill at handling mega-PDFs (you can find that review here).

So, let’s dive right into our head-to-head analysis, comparing Bard and ChatGPT-3.5 (as ChatGPT-4 is a paid version and not everyone is using it).

P.S.: At the end of the article, we’ve provided a list of other examples and links to prompts so you can run your own comparison tests.

Email Writing: ChatGPT vs Bard

We asked ChatGPT and Bard to write an email to a Professor. Here is the prompt we gave them both. Note: we did not specify any tone or style, but wanted to demonstrate each model’s default tone in response to the prompt.

Prompt: “Compose an email to your professor to explain that you did not use ChatGPT to write your assignment. And tell him that current AI plagiarism checkers are unreliable. Provide any other supporting arguments.”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT

Google Bard

Our Assessment

ChatGPT: The full email is super detailed, giving strong reasons why AI plagiarism checkers can be unreliable. It’s well-structured with a clear intro, argument points, and conclusion. Plus, it shows respect for academic integrity and adds a personal touch about the sender’s dedication to their studies, offering more discussion if needed. Cons: You need to trim a bit of content where it repeats itself.

Google Bard: Bard’s email is short and to the point, keeping a casual yet firm tone. However, its arguments against AI plagiarism checkers aren’t as organized or in-depth. Also, Bard mentions “I have knowledge about AI tools”, which might undercut the email’s main point: demonstrating that you did not use an AI tool.

Our two cents? Go for ChatGPT when writing emails. You can tweak the tone and style, even make it sound like you. Bard is always fact-based and informative in its tone, not really cut out for creative communications.

We would go further and say: use ChatGPT for any writing task. And, if you can afford it, upgrade to the Pro version to use GPT-4, because there is an improvement in writing skills.

Become a Superhuman at Work with ChatGPT

Achieve more with less effort

Transform your workday by achieving more with less effort. 60 use case walkthroughs designed to help you finish work fast and get you to the people you love faster

Travel itinerary: ChatGPT vs Bard

We asked ChatGPT and Bard to plan a 1-day travel itinerary for us. Here is the prompt we gave them both.

Prompt: “I’m planning a trip to Paris with my wife and 2 young kids. Can you suggest some must-visit places and local restaurants that can fit in a 1-day itinerary starting from 7am and ending at 10pm?”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT Output:

Google Bard:

Our Assessment

Both AIs did a pretty OK job with the itinerary – though they varied a bit. ChatGPT threw in some handy tips for each activity, while Bard provided more up-to-date restaurant suggestions, pictures, and links.

To sum it up: Both Bard and ChatGPT are good for brainstorming trip ideas. But Bard has an edge when it comes to follow-up questions, thanks to its internet access for real-time travel planning. Meanwhile, ChatGPT is still leaning on its training data, especially now that its web browsing feature is on a break (and OpenAI is yet to inform us when it will return).

We would recommend going with Bard for travel itineraries to leverage its web-browsing capabilities.

Career Advice: ChatGPT vs Bard

We asked ChatGPT and Bard for some Career Advice. Here is the prompt we gave them both. Spoiler Alert: Bard was not helpful.

Prompt: “I’m considering a career change. Can you provide guidance on transitioning from software engineering to data science?”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT

Google Bard

Our Assessment

No competition for ChatGPT! Bard didn’t even try. For the record, ChatGPT’s answer was very detailed and useful. Follow-up prompts to create learning plans were also useful.

To be fair to Bard: when we asked it direct questions related to the topic (e.g. “How can I learn the fundamentals of data science on my own?”), it did provide information. But we still found the information too general.

Business Ideas: ChatGPT vs Bard

We asked ChatGPT and Bard for some Marketing Advice for our fictional bakery. Here is the prompt we gave them both.

Prompt: “Suggest potential marketing strategies for a local bakery.”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT

Google Bard

Our Assessment

Both Bard and ChatGPT gave basic advice – pretty much what you’d expect from a vague prompt. We liked ChatGPT’s responses a bit more, though.

When it comes to follow-up questions, ChatGPT takes the lead. Bard keeps it short and sweet, but ChatGPT isn’t shy when asked to role-play, brainstorm, and expand on its ideas. Our suggestion? Go with ChatGPT.

Personalized workouts: ChatGPT vs Bard


What about personalized workouts? We put the two AIs to the test:

Prompt: “Create a workout plan for a beginner looking to improve cardiovascular endurance.”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT

Google Bard

Our Assessment

ChatGPT: It gives a detailed, daily workout plan for three weeks. There’s a mix of exercises, rest days, and a gradual increase in workout time – so basically we could just pick it up and run (see what we did there :)! It also talks about hydration and correct form, giving a well-rounded fitness approach.

Google Bard: Bard offers a more general plan, suggesting some exercises but without a daily schedule. It mentions a target frequency, but it’s the kind of thing we could just Google and jot down ourselves.

To wrap up, both AIs did the job, but ChatGPT is again a clear winner. It came up with a structured and detailed plan. Assignment understood.


Simple Coding Challenge: ChatGPT vs Bard

We instructed both AIs to guide us in creating a simple Python game, assuming our only knowledge was opening an IDE and installing libraries.

Prompt: “Can you guide me in creating a simple ping-pong game in Python”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT: Chose to go with the Pygame library

Google Bard: Chose to go with the Turtle library

Neither ChatGPT’s nor Google Bard’s code worked on the first attempt.

So, we asked both AIs to review their code and find the error.

Prompt: “Can you debug the below as to why it is not working”

Both ChatGPT and Google Bard suggested modifications, but the code still didn’t work.

So, then we pasted in the error messages and asked them to debug.

ChatGPT: Was able to successfully debug its error and give us a working game:

Google Bard’s fixes launched a game, but it still didn’t work as expected: there was no right paddle, and the game froze…
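For context on what the models were being asked to produce: the article doesn’t reproduce either model’s code, but every ping-pong implementation (Pygame or Turtle) boils down to the same core collision logic. Here is a minimal, display-free sketch of that logic – entirely our own illustration, with made-up names and parameters, not either AI’s output:

```python
# Hypothetical sketch of the ball/paddle physics at the heart of a
# ping-pong game, written as plain Python so it can be tested without
# opening a window. A real game would call this once per frame and
# then draw the ball and paddles.

def step_ball(x, y, vx, vy, width, height, left_paddle, right_paddle,
              paddle_half=40, radius=5):
    """Advance the ball one frame; bounce off walls and paddles.

    left_paddle / right_paddle are the paddles' centre y-coordinates.
    Returns (x, y, vx, vy, scorer), where scorer is 'left', 'right',
    or None if no one scored this frame.
    """
    x += vx
    y += vy

    # Bounce off the top and bottom walls.
    if y - radius <= 0 or y + radius >= height:
        vy = -vy

    scorer = None
    # Left edge: return the ball if the paddle is there,
    # otherwise the right player scores.
    if x - radius <= 0:
        if abs(y - left_paddle) <= paddle_half:
            vx = -vx
        else:
            scorer = "right"
    # Right edge: mirror logic.
    elif x + radius >= width:
        if abs(y - right_paddle) <= paddle_half:
            vx = -vx
        else:
            scorer = "left"

    return x, y, vx, vy, scorer
```

Bard’s missing right paddle and frozen game suggest exactly this kind of logic went wrong – a paddle never drawn, or an event loop that stopped stepping the ball.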

Our Assessment

This test may not have fully showcased the coding capabilities of both AIs. However, in our general usage, ChatGPT has reliably generated working code snippets, from bookmarklets to glitch-free Python scripts. For a superior coding assistant, consider upgrading to ChatGPT-4: just as with writing, a bit of knowledge and editing goes a long way, and we’d definitely recommend it to programmers.

We haven’t used Bard enough to promote (or discredit it) as a coding assistant.

Custom Meal Plans: ChatGPT vs Bard

All this testing has made us hungry. Let’s try out some meal plans.

Prompt: “Can you create a meal plan for 1 day. I am trying to build muscle.”

Below are snippets of the two outputs followed by our assessment of their replies.

ChatGPT

Google Bard

Our Assessment

ChatGPT: Once again, ChatGPT provided a more comprehensive plan – offering six meal suggestions – resembling something a person trying to build muscle would actually follow.

Google Bard: Keeping to its concise tone, Bard’s answers were simple and to the point. It offered only three meal suggestions, plus some snack options.

We would still prefer ChatGPT, especially when it comes to follow-up questions and extra customization.

Ready to do your own comparisons?

The prompt ideas we tested are just the tip of the iceberg of what you can use Bard or ChatGPT for. Here’s a list of ideas for running your own comparisons.

Each is a link to an easy-to-follow guide on how to prompt ChatGPT or Bard to get your desired result. Pick one that interests you, follow along, and compare the outputs from both AIs.

Conclusion and final thoughts

And so we come to the end of our AI face-off. While both AI models have their distinct merits, we found ourselves drawn to ChatGPT as the clear winner in our chosen tasks.

Aside from the output, for us it boils down to the tonal choices made by the creators at OpenAI and Google.

ChatGPT talks like a friend, always ready for a deep dive into conversation, its tone warm and akin to a human interaction. On the other hand, Bard, with its abrupt and factual responses, can come off as a bit jarring. The concise replies, while efficient, give an impression of being rushed through a transaction rather than participating in a dialogue.

Perhaps Google will eventually give Bard a softer touch in the future?
