Google Bard
Google Bard AI can now respond in real-time, like ChatGPT.

Google's widely popular AI bot Bard can now respond in real-time. The search giant has been sparing no effort in a bid to improve Bard lately.

Still, some people think Bard isn't as good as OpenAI's chatbot ChatGPT, which is capable of accessing real-time information from the web.

However, Google has announced a major update to Bard that lets it display responses in real time, as they are generated. Previously, Bard would wait a few moments and then show the complete response all at once.

Google Bard is no longer inferior to ChatGPT

In its latest blog post, the American tech giant noted: "We're launching a new setting that lets Bard's responses be shown while in progress, so you don't have to wait for the full response to appear."

To try the new feature, click the gear icon at the top right of the screen and switch the response setting from "Respond once complete" to real-time responses.

Also, Bard's real-time response animation seems smoother than ChatGPT's free version. Moreover, Google says it wants to "accelerate your creative process".

Bard recently received a big update that lets you use images as prompts and share its responses. Once you share a response with others, they can pick up the chat where you left off. Google also wants to ensure its AI tool is free of bugs and vulnerabilities.

Google's bug bounty program now includes generative AI vulnerabilities

Last week, Google published a blog post announcing that it is expanding its Vulnerability Rewards Program to cover bugs and vulnerabilities found in generative AI systems.

This is a step toward securing generative AI. As part of the expanded program, security researchers will receive incentives for finding potential issues in Google's own generative AI systems, including Bard and Google Cloud's Contact Center AI.

The company has also released new guidelines that researchers must follow when uncovering these vulnerabilities.

Laurie Richardson and Royal Hansen, Google vice presidents working in trust and safety, wrote in a joint blog post: "As we continue to integrate generative AI into more products and features, our Trust and Safety teams are taking a comprehensive approach to anticipate and test for these potential risks."

"But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure," the top executives added.

The move comes after major tech companies, including Google, appeared at a White House summit earlier this year, where they pledged to promote the discovery of AI vulnerabilities.

Google also took part in a large-scale "Generative AI Red Team" event at the DEF CON hacking conference in Las Vegas, which hosted thousands of researchers and invited them to find bugs in large language models built by Google, OpenAI and other companies.