Bing AI Also Made Mistakes in Demonstration

A failed demonstration of Google’s new AI Bard drew much criticism last week, but Bing, now powered by ChatGPT technology, also comes up with wrong answers. That is apparent from Microsoft’s own demonstration and from tests by reviewers.

Search engine Bing is getting ChatGPT’s language model built in. The much-hyped chatbot can write readable text and process large amounts of information, which makes searching for answers considerably easier. The only problem: the chatbot makes mistakes. That has become apparent now that Microsoft has opened up the new search engine to researchers and reviewers.

In one of Bing’s first demos, the chatbot reportedly made things up, from facts about vacuum cleaners and Mexico to false financial figures about companies. Researcher Dmitri Brereton documents this on his blog. One of the examples he gives concerns a vacuum cleaner for pet owners.

In the demonstration, Bing writes an article about the pros and cons of the ‘Bissell Pet Hair Eraser Handheld Vacuum’. Bing lists disadvantages such as a short 40 cm cord, a lot of noise and little suction power. However, none of those drawbacks appear in the reviews Bing draws on: the device turns out to be cordless, and a little additional research shows that many reviews note how quietly it works.

As more and more people use Bing AI, more errors are surfacing. Users on Reddit report, for example, that the chatbot seems convinced we are still living in the year 2022, or that Croatia is no longer a member of the European Union. It is a striking development, given the backlash Google’s Bard received last week.

Google’s chat AI Bard, introduced last week, also made a mistake in its demonstration, and that error cost the company a hefty chunk of its share value.
